Leading Engineering Teams in the AI Era: A Manager's Playbook
A lot has changed in engineering in the last six months, and most of it has caught managers off guard. Some engineers on your team are leaning hard on AI. Some are cautious. A few are quietly worried about their jobs. And you have to make sense of all of it without a clear playbook.
I have lived through a few of these shifts: cloud, mobile-first, remote. AI feels different in pace and scope, but the management instincts that work are mostly the same: be clear, be honest, set norms, pay attention to your people. Here is what I have seen work.
Set Adoption Norms Before You Set Productivity Goals
The most common mistake I see is managers asking "are we using AI enough?" before agreeing with the team on what *good* AI use looks like.
Without explicit norms, every engineer operates on their own assumptions: one shipping AI-generated code with minimal review, another hand-writing everything, a third quietly using a personal tool. None of them are wrong, exactly. They just have no shared expectations.
Get the team aligned on a few things: which tools are approved, what work AI is encouraged for, what needs extra care (security-sensitive code, customer data, novel architecture), the level of human review expected, and what gets disclosed in pull requests. You do not need a 10-page policy. A short written set of norms that everyone has read and pushed back on is enough. The point is removing ambiguity, not adding bureaucracy.
Rethink What Productivity Means
Lines of code, pull request count, and ticket throughput were never great metrics. AI makes them mostly useless. What still matters is outcomes: did the team ship things that mattered, are the systems reliable, is the codebase healthier than it was six months ago?
The hard part is that AI can make a team look more productive in the short term while creating debt that surfaces later. I have seen teams ship features fast in one quarter and then spend most of another quarter fixing AI-generated code that nobody fully understood. The work looked great on the dashboard. The reality was different.
So look past velocity. Ask whether the decisions were sound, whether engineers are learning or just shipping, whether someone else can maintain this code in a year. These are the questions that show whether AI is helping the team grow stronger, not just ship faster.
Coach Engineers Who Lean Too Hard on AI
Some engineers on your team use AI for almost everything, and that can be fine; they are often your most productive contributors. Pay attention to the ones, especially earlier-career engineers, who reach for AI before they have understood the problem. When an engineer cannot explain why their code works or what trade-offs the AI made on their behalf, they are losing the reps they need to grow.
This is a coaching conversation, not a policing one. Sit with them. Ask them to walk you through a recent piece of work. Notice where they hesitate, and talk about it: "I want you to use AI as much as you want, but I also want you to be able to defend every line." Most engineers respond well when the goal is framed as their growth, not your control.
Listen to the Skeptics
The flip side: you will have engineers, often your most experienced ones, who are skeptical of how heavily the team is leaning on AI. They are not refusing to use it; by now, almost everyone has integrated it into some part of their workflow. Their concerns are usually deeper: that the team is reaching for AI in places where careful thought would serve better, that the speed gains in writing are being given back in debugging and review, that earlier-career engineers are losing the reps they need to grow. These critiques are often right.
The opportunity here is not to push skeptics toward more AI use. It is to take their concerns seriously and bring them into the conversation about *where* the team should and should not lean on AI. A skeptical senior engineer who is given real influence over your team's adoption norms can become one of your most reliable signals for whether things are going well or quietly going sideways.
Have the honest conversation. Ask what they are seeing that worries them. Be willing to act on it. The most thoughtful skeptics on your team usually have the clearest view of long-term risk, and ignoring them is how teams end up in trouble a quarter or two later.
Hiring Has Changed. Adjust.
If you are hiring, the candidate signals you used to rely on are noisier than they were. Take-home assignments are hard to evaluate. Coding rounds with screen-sharing are different now that AI is part of the toolkit.
A few things I have started doing differently: weighting system design and architecture conversations more heavily (they are harder to outsource to AI live, and they reveal whether someone actually thinks like an engineer), asking more "explain this code" and "what would you change about this design" questions, and caring more about how candidates *use* AI than whether they avoid it. I will sometimes invite candidates to use AI in a round and ask them to walk me through their reasoning. The ones who can explain why they accepted or rejected the AI's suggestions are the ones I want on my team.
Have the Honest Career Conversation
Some of your engineers are worried. They will not always say so directly. A senior engineer who has built their identity around being "the person who can write tricky code" is going to feel destabilized when AI can produce a credible first draft in seconds.
Do not pretend the worry is unfounded. The work is changing. Some of what your engineers built their careers on will be commoditized. Other parts will be more valuable than ever: judgment, design, communication, taste, leadership, and accountability for owning a problem end to end, not just handing off the code.
The mindset shift I would push hardest on: engineers who think like solid product managers will win this era. AI can write code from a clear specification. It cannot figure out which problem is worth solving or what "done" really means. Engineers who can sit with ambiguity, reason about users, and define the right thing to build before touching the keyboard will be in a different category of demand than engineers who only execute tickets.
The most useful thing you can do as a manager is help each engineer think clearly about where they are accumulating durable value. Not in the abstract. In the specifics of their actual work and career goals. That is a real 1:1 conversation, and it is the kind of work that AI cannot do for them or for you.
Lead Yourself, Too
Last thing. The engineers on your team are watching you. If you are using AI thoughtfully in your own work (drafting docs, summarizing meetings, exploring ideas), they will model that. If you avoid it because it makes you uncomfortable, they will read the signal. If you use it sloppily, they will model that too.
You do not need to be the best AI user on your team. You do need to be visibly engaged with the shift, and honest about what you are figuring out. The teams I see thriving right now are the ones whose managers are learning out loud.
The technology is changing fast. The job of leading people through change is the same as it ever was. Be clear, be honest, set norms, pay attention. The teams that thrive will be the ones whose managers do those things consistently.
About Me
Nimesh Patel is an engineering leader and career coach with over 20 years of experience building cloud-native enterprise and consumer software systems in Big Tech (including Google) and high-growth AI startups. He has led globally distributed engineering organizations of 60+ engineers and leaders, conducted 650+ interviews across engineering, management, and executive roles, made 50+ hires, and coached and promoted 30+ engineers and leaders. He provides interview and career coaching through ScaleYourCareer. Follow him on LinkedIn.
*Ready to accelerate your interview preparation or grow into your next role? Explore the coaching programs to find the right fit.*