How to Drive AI Productivity and Value for Engineering Teams
There’s a great deal of interest in whether AI can (or can’t) provide value by increasing the productivity of engineering teams. This interview with developer productivity expert Laura Tacho, CTO at DX and a 10+ year veteran of software development, covers opportunities for engineering orgs to take advantage of AI tooling, as well as actionable suggestions for AI startup founders to build stickier, more successful products for engineering end-customers.

DX CTO Laura Tacho discusses developer productivity at LeadDev. Image courtesy LeadDev.
How Are Engineering Teams Using AI?
78% of surveyed companies use AI, though only 3% of engineering leaders report achieving “very high” productivity gains. The most widely reported use case is AI code assistants, including high-profile products like GitHub Copilot, Claude Code, Cursor, and Continue, but there are still adoption challenges that may have nothing to do with productivity.

More Resources on Dev Productivity & AI Adoption
- Article: Productivity and Mental Health Resources for Remote Teams
- Article: Enterprise AI Infrastructure: Compliance, Risks, Adoption
- Article: How AI Is Reshaping Enterprise Infrastructure
- Article: The Future of AI Code Generation
- Article: What to Know About Pricing Developer Tools

While coding may seem to make sense for AI use cases because of how familiar it is to devs, it may not be the best opportunity. Says Tacho, “We have to look across the whole software development lifecycle to see productivity gains. We did a study of 180+ companies, and we found that for developers using AI, the number one time-saving use case was stack trace analysis, not code generation.”
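To make that stack-trace use case concrete, here is a minimal sketch of how a team might feed a failing trace to a model for a first-pass diagnosis. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt, and log file are illustrative choices, not anything prescribed by the study.

```python
# Minimal sketch: first-pass stack trace analysis with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; any comparable model API works.
from openai import OpenAI

client = OpenAI()

def explain_stack_trace(trace: str, context: str = "") -> str:
    """Ask the model for a likely root cause and one concrete next debugging step."""
    prompt = (
        "You are helping debug a production error.\n"
        f"Relevant context: {context}\n"
        "Explain the most likely root cause of this stack trace and "
        "suggest one concrete next step:\n\n"
        f"{trace}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; swap in your org's approved model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage with a (hypothetical) trace captured from logs:
# print(explain_stack_trace(open("error.log").read(), context="checkout service"))
```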
Why Don’t Engineers Adopt AI?
The CTO suggests that while the gap between inflated expectations and the disappointing reality of cleaning up after messy, non-deterministic AI is one adoption barrier, there are practical obstacles as well. “I would say training and enablement are probably the largest barrier.”
“It’s not intuitively obvious, even for teams with lots of enthusiasm and optimism, where the highest-value use cases are hiding for a particular organization. Some patterns need to be taught and coached, just like with any other tool. Companies that skip that step (and try to rely on the optimism and enthusiasm of an individual developer) may see some initial success, but then they're going to see that success taper off.”
What’s Really Missing: Targeted Goals & Outcomes
So yes, individual devs may not use the company Copilot account more frequently because they haven’t been drilled on prompting best practices. However, entire engineering organizations may not be getting as much as they could from AI tools because there’s a lack of strategy.
“I think where the rubber isn't meeting the road is figuring out: What are the business problems in the company that have the most to gain from AI usage?” The CTO clarifies: “There should be room for experimentation. There should be time to try something new and figure out: OK, what can we do with this tool? But when it's only that, and we don't have targeted goals and outcomes, then things can go off the rails.”
“The companies where I see the most adoption and the most durable benefit from AI are ones that came to AI with a specific problem. For example: Companies that needed to modernize legacy infrastructure, and hypothesized that AI would help them modernize legacy components much faster than they could have previously.”
“Organizations that want to see organizational benefit need to think about this on an organizational level, not just rely on individuals figuring out what to do.”
How Can Startups Make the Most of AI?
Tacho concedes that different organizations with different resources (and different amounts of runway) will need to take different approaches. “I think you need to look at it as a spectrum of targeted experiments vs. targeted use cases. That proportion needs to be adjusted based on your company’s size and stage.”
“For startups, those are the environments where the individual can have a huge impact. So it would make sense for people to be experimenting, to be rapidly figuring out where they can exploit the capabilities of AI for your business. But you should document that as you go and make sure it's not just one individual deciding to do one-off things. Build a playbook of use cases that aren’t ephemeral.”
“The companies where I see the most adoption and the most durable benefit from AI are ones that came to AI with a specific problem.” -Laura Tacho / CTO, DX
“Depending on the market pressures on your business, that's going to dictate how you think about a rollout. Obviously, for startups that rely heavily on individual rigor and urgency in order to get work done, the AI strategy is going to fit better with focus on individuals or small teams. Larger organizations that are dealing with decades’ worth of legacy software need to take a different approach.”
How Should Engineering Teams Adopt AI?
The CTO empathizes with AI startup founders, who face mounting pressure from a market that moves incredibly fast. “I feel like AI tooling is being commoditized at a rate I have never seen before. A few months ago, when people evaluated tools, they were looking at which model they could use. It was a differentiating feature. And now you can use whichever model you want. Sometimes it feels like things change hourly.”
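One way teams respond to that commoditization is to treat the model as a configuration detail behind a small interface, so swapping vendors is not a rewrite. The sketch below is purely illustrative; the provider names and stub functions are hypothetical, not any vendor’s SDK.

```python
# Hypothetical sketch: the model/provider as a config value behind one interface.
# The completion functions are illustrative stubs, not real vendor SDK calls.
from typing import Callable, Dict

CompletionFn = Callable[[str], str]

def _openai_complete(prompt: str) -> str:
    # In practice this would call the OpenAI SDK.
    return f"[openai] {prompt[:40]}..."

def _anthropic_complete(prompt: str) -> str:
    # In practice this would call the Anthropic SDK.
    return f"[anthropic] {prompt[:40]}..."

PROVIDERS: Dict[str, CompletionFn] = {
    "openai": _openai_complete,
    "anthropic": _anthropic_complete,
}

def complete(prompt: str, provider: str = "openai") -> str:
    """Application code calls this; the provider is just configuration."""
    return PROVIDERS[provider](prompt)

if __name__ == "__main__":
    print(complete("Summarize this stack trace: ...", provider="anthropic"))
```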
However, the CTO notes the increasingly competitive nature of some AI product lines is at odds with the organizational benefit of keeping a unified stack, despite individual developers continuing to crave autonomy. “This is a tough one. The reality is that engineering leads feel that business pressure for vendor consolidation. No company wants to be forking over more money for an arbitrary number of vendors that all more or less solve the same problems.”
Tacho suggests that a viable alternative for startups is to offer developer stipends to allow them freedom of choice. “It’s definitely a reasonable alternative, especially in a startup environment that may be a bit more open with regards to governance, risk, and security. It’ll also depend on whether you work in consumer vs. another industry.”
The CTO notes that other companies do attempt to consolidate around a single vendor, using targeted proof-of-concept campaigns and running bake-off comparisons between competing vendors when needed. “Some companies decide that since the market’s so volatile, they’re going to trial something for six months and then figure out if things stabilize.”
“I think no matter what approach an organization takes, there are going to be shadow AI tools out there, because right now these tools aren’t prohibitively expensive for developers to pay for themselves. That’s the reality, and it happens everywhere.”
How Can AI Startups Target Developers as Customers?
The CTO notes that while certain market factors appear to be changing, the top drivers for purchasing AI tools, or any tools, remain the same. “I just spoke to a bunch of other CTOs, and while cost was not as much of a factor in purchasing decisions last year, it definitely is this year.”
“But accuracy and security are still top drivers for purchasing decisions. I think there's some table-stakes aspects for any AI tool that engineering teams have come to expect. Beyond the table stakes, the more that tools can illustrate better ROI and costs, the better.” Tools that can make usage metrics and data readily available will play nicer with developer productivity tooling to paint a clearer picture of ROI.
“For a long time, the primary metrics we had were acceptance rate, number of licenses, and how much was being paid for them. But there wasn’t much transparency when it came to how tools were actually being used. So focusing on a really robust admin API is a great way to ensure that you can build a defensible ROI case and get people to keep their wallets open.”
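As a concrete illustration of the kind of transparency Tacho describes, here is a hedged sketch of pulling per-developer usage from a vendor’s admin API and turning it into a simple cost-per-active-user figure for an ROI conversation. The endpoint, response fields, and seat price are all hypothetical; real admin APIs differ by vendor.

```python
# Hypothetical sketch: pull usage data from a vendor admin API and compute a
# simple cost-per-active-user figure. Endpoint and fields are illustrative only.
import requests

ADMIN_API_URL = "https://api.example-ai-vendor.com/v1/admin/usage"  # hypothetical
API_TOKEN = "..."           # load from a secret store in practice
SEAT_COST_PER_MONTH = 19.0  # hypothetical license price

def monthly_usage_report(month: str) -> dict:
    resp = requests.get(
        ADMIN_API_URL,
        params={"month": month},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    # Hypothetical shape: [{"id": ..., "suggestions_accepted": ..., "active_days": ...}]
    rows = resp.json()["developers"]

    active = [r for r in rows if r["active_days"] > 0]
    total_seats = len(rows)
    cost = total_seats * SEAT_COST_PER_MONTH
    return {
        "month": month,
        "seats": total_seats,
        "active_users": len(active),
        "cost_per_active_user": round(cost / max(len(active), 1), 2),
    }
```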
Opportunities for AI Products Targeting Engineering Teams
“As mentioned, areas like general-purpose code authoring tools may be reaching a level of saturation. But beyond code generation, there are opportunities to target other use cases across the software development lifecycle: Requirements authoring, code review, or changelogs. For AI products targeting engineering teams, the most valuable use cases are the ones that are under-solved right now.”
Of course, many teams are still concerned about the non-deterministic nature of GenAI and the havoc a hallucinating LLM might wreak on their codebase. However, the CTO notes that organizations with built-in resiliency tend to be the ones that get the most out of AI.
“When you introduce AI, in all its non-deterministic glory [to resilient orgs], they're not falling over. They're not bottlenecked. Whereas companies who don't [have organizational resilience] and think that AI will be a magic solution to all the problems they've been avoiding? Those companies are getting a reality check. The fundamentals don't go away just because you have AI.”
“Beyond code generation, there are opportunities to target other use cases across the software development lifecycle. For AI products targeting engineering teams, the most valuable use cases are the ones that are under-solved right now.” -Laura Tacho / CTO, DX
“I think developer experience is not an outcome of using AI tools really well. It’s a prerequisite to use them well. What’s good for an individual human developer is also good for an agent. There’s extra stuff that you might want to do for agents, but the physics of what makes software or codebases easy to contribute to, easy to change...that’s important for AI agents and for humans.”
“AI is an amplifier. If you have parts of your system that are a bit garbage, they're going to be amplified garbage now. And if you have really solid engineering practices that have stayed ahead of the industry curve, then that's going to be amplified and you're going to get even better results.”
For AI startup founders specifically targeting engineering orgs, the slickest user interfaces and best new user experiences will only go so far. If the engineering orgs adopting AI are themselves struggling to maintain stable, resilient systems, they may struggle to succeed with any AI product.
“To those AI product builders who are concerned about whether their product will be sticky with customers, I might say: Maybe think about building tools that fix your customers’ rickety systems. Probably a less exciting problem to solve, but people have their wallets open for it right now.”