The New Shape of the Engineering Team
If AI can handle the boilerplate, the scaffolding, and a growing portion of routine implementation, what does that mean for how we compose engineering teams? It’s a question I’ve been turning over for a while now, and the answers I keep arriving at are uncomfortable.
The optimistic version is that AI frees engineers to focus on higher-value work: system design, problem framing, user understanding, architectural thinking. The pessimistic version is that it eliminates the entry-level work that junior engineers have traditionally used to learn the craft. The realistic version is probably somewhere in between, and navigating it well is one of the most important challenges facing technical leaders right now.
The Junior Developer Question
This is the uncomfortable one. Historically, junior engineers learned by doing: writing CRUD endpoints, fixing bugs, implementing well-specified features. This work was valuable to the business and educational for the engineer. It was the apprenticeship model, and it worked.
If AI can do much of this work faster and cheaper, the business case for hiring juniors to do it weakens. Some organisations are already reducing junior hiring, reasoning that a smaller team of senior engineers augmented by AI can produce more than a larger team with a traditional junior-to-senior ratio.
The problem with this reasoning is that it optimises for the present at the expense of the future. If you stop hiring juniors, where do your future seniors come from? The industry has a pipeline problem that AI is accelerating, and the organisations that stop investing in junior development will find themselves competing for an increasingly scarce pool of experienced engineers in five to ten years.
The teams that thrive will be the ones that redefine what “junior” means rather than eliminating the role. Junior engineers in an AI-augmented world need different skills: less focus on writing boilerplate, more focus on evaluating AI output, understanding system design, debugging complex interactions, and developing the judgement that AI lacks. The apprenticeship model needs to evolve, not disappear.
The Hollowing Out Risk
There’s a related risk that I think about: the hollowing out of the middle. If AI handles routine implementation and senior engineers handle architecture and strategy, what happens to the mid-level engineers who traditionally bridged the gap?
Mid-level engineers are the backbone of most teams. They’re experienced enough to work independently, technical enough to make sound decisions, and close enough to the code to catch problems early. If AI compresses the skill spectrum, making juniors more capable and seniors more productive, the mid-level role could get squeezed from both sides.
I don’t think this will happen overnight, but I do think the skills that define a valuable mid-level engineer are shifting. Deep knowledge of a specific framework or language matters less when AI can generate code in any language. What matters more is the ability to evaluate, integrate, and maintain complex systems: skills that require judgement and context that AI doesn’t have.
New Roles and Skills
The teams I’ve seen adapt most effectively are the ones that have started thinking about new roles and skill emphases:
AI-output reviewers. Not a formal role, but a skill that every engineer needs. The ability to read AI-generated code critically, spot subtle errors, and evaluate whether the generated approach is appropriate for the context.
Prompt engineers / AI workflow designers. People who understand how to get the best results from AI tools: how to frame problems, how to provide context, how to iterate on outputs. This is a genuine skill that varies enormously between practitioners.
System integrators. As AI generates more components, the work of integrating them into coherent, maintainable systems becomes more important. This requires architectural thinking and a deep understanding of how pieces fit together.
Quality and reliability specialists. With more code being generated faster, the testing, monitoring, and reliability work becomes proportionally more important. Teams need people who are focused on ensuring that the increased velocity doesn’t come at the cost of stability.
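To make the AI-output-review skill concrete, here is a small hypothetical example (the function and its name are invented for illustration) of the kind of code an AI assistant can plausibly produce: it reads cleanly, works on the first call, and hides a classic Python pitfall, a mutable default argument, that only surfaces across repeated calls.

```python
# Hypothetical AI-generated helper: looks reasonable at a casual read.
def add_tag(item, tags=[]):  # subtle bug: the default list is created once
    tags.append(item)        # and shared across every call to the function
    return tags

first = add_tag("urgent")
second = add_tag("low")
print(second)  # state from the first call has leaked into the second

# The version a careful reviewer would ask for: create the list per call.
def add_tag_fixed(item, tags=None):
    tags = [] if tags is None else tags
    tags.append(item)
    return tags
```

The point isn’t this specific bug; it’s that spotting it requires exactly the critical-reading judgement described above, because nothing about the buggy version fails on a quick one-off test.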
What I’m Doing About It
I don’t have all the answers here; nobody does, because the landscape is shifting too fast. But I’ve started making some deliberate choices:
I’m still hiring juniors, but I’m changing what I hire for. Less emphasis on specific language proficiency, more emphasis on problem-solving ability, critical thinking, and the capacity to learn quickly. The specific technical skills will change; the ability to evaluate and adapt won’t.
I’m investing in my mid-level engineers’ architectural and system design skills, because those are the skills that will remain valuable regardless of how AI evolves.
I’m creating space for the whole team to experiment with AI tools and share what they learn, so that AI literacy becomes a team-wide capability rather than a specialist skill.
And I’m being honest with my team about the uncertainty. I don’t know exactly how this will play out. Nobody does. But I’d rather navigate it together, with open eyes, than pretend it isn’t happening.