

AI makes the work faster. It might also make us worse at it.

A new study on AI-assisted coding: juniors finish two minutes faster and score 17 points lower on understanding. The trade behind AI productivity claims.

Jump to section
  1. What the study actually found
  2. The nuance the headline misses
  3. The counter-argument worth taking seriously
  4. What this looks like in IT
  5. The leadership question

AI makes you faster. It might also make you dumber.

That’s the uncomfortable read of a recent randomised study from Anthropic. Small sample, but the result is sharp enough to be worth thinking about, especially for anyone hiring, training, or managing technical staff in the next few years.

What the study actually found

The study ran 52 software engineers (most of them juniors) through the same coding tasks. One group used AI assistance. The other coded by hand.

The AI group finished about two minutes faster. That difference wasn’t statistically significant.

On the assessment afterwards, the AI group scored 50%. The hand-coding group scored 67%. A seventeen-point gap, or as the researchers put it, nearly two letter grades. The biggest single gap was on debugging questions: the ability to read code, understand why it doesn’t work, and fix it. The very skill you need on the day AI gets it wrong.

So: a tiny productivity gain, paid for with a substantial deficit in understanding. That’s the trade, in numbers, on the population most exposed to it.

The nuance the headline misses

The interesting result isn’t the average. It’s the spread. Inside the AI group, two distinct patterns of AI use produced very different outcomes:

  • AI delegation. “Write this function.” “Fix this bug.” Finished fastest. Learned the least.
  • Conceptual inquiry. “What does this approach do?” “Why does this pattern handle errors that way?” Produced the best learning outcomes, including better than hand-coding alone.

The tool isn’t the variable. The relationship with the tool is. People treating AI as a thinking partner finished with deeper understanding than they started with. People treating it as an output machine finished with code they couldn’t have written, couldn’t fully read, and couldn’t have fixed if it broke at 2am.

The counter-argument worth taking seriously

The standard objection: every productivity tool gets this complaint. HVAC made people worse at lighting fires. Tractors made farmers worse at hitching mule teams. Calculators made students worse at long division. The argument lands. Most of those complaints were technically correct, and society absorbed the trade because the gain was worth it.

What might be different here is what AI is replacing. The earlier examples replaced mechanical tasks. AI is starting to replace cognitive habits: reading, debugging, structuring thought, holding a system in your head. And these are skills that, once lost, take with them the ability to notice when something is wrong.

A farmer with a tractor still knows when the field is uneven. A developer who only reviews AI output may not know when the AI is confidently wrong. That’s a different shape of problem.

What this looks like in IT

We hire for depth on our team. We train for it. Junior CCP technicians shadow seniors for months before they touch live client systems unsupervised, because the cost of a half-understood fix in a production environment is paid by the client, not us. The reason we don’t run a hire-fast, scale-fast model isn’t ideology. It’s that the alternative produces technicians who can ship, but can’t catch the failures.

This study lines up with what we’re already seeing across our peer set. The fastest-shipping junior is not always the safest pair of hands. The slowest-talking senior in the meeting is sometimes the one who spots the misconfiguration that would take the network down.

That isn’t a brag. It’s a statement about how the work actually gets done well, and what it costs to learn it properly.

The leadership question

If the next ten years of IT, software, security and operations are mostly built by AI-assisted juniors who skipped the depth phase, who’s going to catch the AI mistakes? Who’s going to architect the next generation of systems with a real model of how they work, instead of a vibes-based one?

We don’t have a confident answer to that. We do know what we’re doing inside our own walls. AI is a research tool and a sounding board for our team, not a substitute for the technician’s understanding of the client’s environment. The bar for what a CCP technician knows about a client’s systems hasn’t moved with the rise of AI, and it isn’t going to.

If you’re thinking about how AI is showing up in your team, and how to get the productivity without the depth tax, that’s a conversation we’d happily have. Get in touch.

Tags: ai, opinion, productivity, expertise, leadership