February 17, 2026

Faster Pipelines, Emptier Benches

At my first real engineering job, at LogMeIn in Budapest, I had a mentor - let's call him James. He never told me the right answer. He'd look at my pull request, point at a function, and ask, "What happens when this gets called twice?" I'd stare at it. I'd realize I didn't know. I'd go figure it out, come back, and we'd talk about it.

That process—writing something bad, getting feedback, understanding why it was bad, writing it better—is how I learned to build software. Not from documentation or tutorials. From hundreds of small mistakes, corrected by people who cared enough to walk me through them. That foundation carried me through the next fifteen years.

I keep thinking about James when I think about what AI is doing to engineering teams. Because in 2026, a lot of the work I did as a junior—the boilerplate, the straightforward feature implementation, the well-scoped tickets—is exactly the kind of work that AI handles well. And if that work disappears, what happens to the next Gergely sitting at his first job, trying to build the intuition that no tool can give you?

What Happens to the Juniors

Senior engineers don't appear out of thin air. They're grown through exactly the kind of work that's being automated. If the industry collectively stops investing in entry-level talent, we'll face a shortage of experienced engineers in five to ten years that no amount of AI tooling can solve. The compounding effects—lost mentorship, lost knowledge transfer, lost career development—will catch up with us.

The answer isn't to pretend AI hasn't changed things. It's to redefine what entry-level work looks like. Instead of writing boilerplate, juniors should be learning to prompt effectively, review AI-generated code critically, and understand system design earlier in their careers.

But I want to be honest about a tension here. "Review AI-generated code critically" sounds great as a goal, but you can't effectively review what you don't deeply understand. There's a real risk of creating a generation of expert beginners—engineers who operate AI tools fluently but can't debug the underlying system when the AI fails.

And the AI will fail, in ways that require exactly the foundational knowledge juniors used to build by writing code themselves.

The Socratic Machine

Every AI coding tool today works the same way. You write a line, and Copilot finishes it. You highlight a broken function, and the agent rewrites it. You describe a feature, and you get a pull request. The entire interaction model is built around giving you the answer as fast as possible. This is fantastic for productivity. It's terrible for learning.

When an experienced engineer uses Claude Code to skip boilerplate they already understand, that's a genuine efficiency gain. When a junior accepts a fix without understanding why the original code was broken, that's a learning opportunity that just evaporated. The tool doesn't know the difference. It treats everyone the same way—here's the answer, move on.

What if it didn't have to? Instead of silently fixing your off-by-one error, the tool could ask: "This loop iterates one too many times. Can you see where the boundary condition goes wrong?" Instead of generating a complete error handling block, it could prompt: "What should happen if this API call times out? Think about the caller's perspective." The answer is still there if you need it—you can always toggle back to full assistance—but the default for engineers still building their intuition would be guided discovery rather than instant resolution. Less code generator, more James.
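To make that concrete, here is a hypothetical sketch of the difference between the two interaction models. The function and the assistant's wording are invented for illustration:

```typescript
// A classic off-by-one: `<=` makes the loop read one element
// past the end of the array.
function sumPrices(prices: number[]): number {
  let total = 0;
  for (let i = 0; i <= prices.length; i++) {
    total += prices[i]; // on the last iteration, prices[i] is undefined
  }
  return total;
}

// Today's default: the assistant silently replaces `<=` with `<`
// and moves on. The Socratic alternative annotates instead:
//
//   "This loop runs prices.length + 1 times. What is prices[i]
//    on the final iteration, and what does adding it to total
//    produce?"
//
// Working out that the answer is NaN is exactly the learning
// moment a silent fix erases.
```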

The current generation of AI coding assistants was built for senior engineers who need to move faster. The next generation should be built for everyone—including the people who still need to learn why the fast answer is the right one.

The Pilot Problem

The best analogy I can think of is how pilots still learn to fly by hand before they're allowed to rely on the autopilot. You need that deep manual understanding to recognize when the automation is wrong.

Mentorship in the AI era can't just be "teach juniors to use AI tools." It has to include deliberate practice on fundamentals—debugging without AI assistance, reading codebases manually, building mental models of how systems actually work.

There's a broader dimension here that the industry isn't talking about enough. If entry-level roles are automated away at scale, we don't just have a talent pipeline problem—we have a labor market problem. A lost generation of developers priced out before they ever get in.

Still Humans

I think about what James did for me at that first job. He didn't give me answers. He taught me how to think about problems. He gave me room to be wrong and then helped me understand why. No AI tool did that for me, and I'm not convinced one could have.

The tools have changed. The work has changed. But the thing that turns a junior engineer into a senior one—patient, human mentorship—hasn't. If we automate away the work without preserving that, we'll have faster pipelines and emptier benches.

If you're a manager, redesign what your less-experienced teammates work on, and invest in how they're mentored. The answers aren't only in the tooling. They're in your team.

As of this writing, Claude Code supports custom output styles: you can define your own, which makes it possible to implement exactly the kind of guided-discovery default I describe above. Interested? Give it a try!
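To get you started, here is a minimal sketch of what such a style could look like. The style name and its instructions below are my own invention; at the time of writing, Claude Code reads custom output styles from markdown files with a short YAML frontmatter, but check the current docs for the exact fields and locations:

```markdown
---
name: Socratic Mentor
description: Guides with questions instead of handing over fixes
---

You are a mentor, not a code generator. When you find a bug or a
design problem in the user's code:

1. Point at the specific function or line where the problem lives.
2. Ask one question that leads the user toward the root cause,
   e.g. "What happens when this gets called twice?"
3. Provide the full fix only when the user explicitly asks for it.

Keep explanations short. Prefer questions over answers.
```

Save it somewhere like `~/.claude/output-styles/socratic-mentor.md` (the user-level location as of this writing) and switch to it with the `/output-style` command. Senior engineers keep the default; juniors get James.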
