
There’s an age-old adage in management: What gets measured gets managed. Put simply, you get more of whatever you measure.
For decades, software engineers have argued over productivity metrics, starting with lines of code. But as a new wave of AI coding agents churns out more code than ever before, what managers should actually measure is getting murkier.
Big token allowances (essentially, how much AI compute a developer is allowed to use) have become a point of pride among Silicon Valley developers, but as a lens on productivity, it’s a strange one. Measuring an input makes little sense when what presumably matters is the output. It’s a fine metric if the goal is to encourage more AI use (or to sell tokens), but not if the goal is to get more done.
Consider the findings from a new crop of companies working in the “developer productivity insight” business. They report that developers using tools like Claude Code, Cursor, and Codex are producing far more accepted code than before. But they also find that engineers frequently have to go back and rework that accepted code, undercutting claims of a productivity boom.
Alex Circei, CEO and founder of Waydev, is building an intelligence layer to track these trends; his company works with 50 customers employing more than 10,000 software engineers. (Circei has previously written for TechCrunch, though this reporter hadn’t crossed paths with him before.)
He says engineering managers are seeing code acceptance rates of 80% to 90% (the share of AI-generated code that developers approve and keep), but that figure misses the rework engineers have to do in the weeks that follow, which drops the real-world acceptance rate to between 10% and 30% of the generated code.
The rise of AI coding tools prompted Waydev, founded in 2017 to provide developer analytics, to rebuild its platform from the ground up over the past six months. The company is now rolling out tools that track the metadata produced by AI agents, offering analytics on code quality and cost to give engineering managers a clearer view of both AI usage and its effectiveness.
Analytics firms have an incentive to highlight the problems they find, but mounting evidence suggests that large organizations are still figuring out how to use AI tools effectively. Big names are paying attention: Atlassian bought DX, another engineering intelligence startup, for $1 billion last year, aiming to help its customers understand the return on investment from coding agents.
Data from across the industry tells a consistent story: more code is being written, but much of it doesn’t stick.
GitClear, another player in this space, published a report in January finding that AI tools lifted productivity, but its research also showed that “regular AI users had an average code churn rate 9.4x higher than their non-AI peers,” an increase that outstripped the productivity gains the tools delivered.
Faros AI, an engineering analytics platform, analyzed two years of customer data for its March 2026 report. The conclusion: code churn, the lines of code deleted relative to lines added, rose 861% with high AI usage.
Jellyfish, which bills itself as an intelligence platform for AI-enhanced engineering, collected data on 7,548 engineers during the first quarter of 2026. It found that engineers with the largest token allowances submitted the most pull requests (proposed changes to a shared codebase), but the gains did not scale with the spend: they got double the output at ten times the token cost, which works out to roughly a fifth of the output per token. In short, the tools are producing quantity, not quality.
Those statistics ring true for developers who find code reviews and technical debt piling up, even as they enjoy what the new tools can do. A common observation is the gap between senior and junior engineers: the latter approve far more AI-generated code and, as a result, end up doing far more rewriting.
Still, even as developers work to understand what their agents are doing, they don’t expect to go back to the old ways anytime soon.
“This marks a new era in software development, and adaptation is necessary; companies are compelled to adapt,” Circei told TechCrunch. “It is not a cycle that will simply pass.”



