‘Tokenmaxxing’ is hindering developers’ productivity more than they realize

There’s an age-old adage in management: what gets measured gets managed. In practice, that means you tend to get more of whatever you measure.

For decades, software engineers have debated productivity metrics, starting with lines of code. But as a new wave of AI coding agents churns out more code than ever before, it is increasingly unclear what managers should be measuring.

Large token allowances (essentially, how much AI compute a developer is permitted to burn) have become a point of pride among developers in Silicon Valley, but as a lens on productivity it is an odd one. Measuring an input makes little sense when what you presumably care about is the output. It might be reasonable if the goal is to drive more AI usage (or to sell tokens), but not if the aim is efficiency.

Consider the findings from a new crop of companies in the “developer productivity insight” business. They find that developers using tools such as Claude Code, Cursor, and Codex produce far more accepted code than before. But they also see engineers frequently going back to revise that accepted code, undercutting claims of improved productivity.

Alex Circei, Waydev’s CEO and founder, is building an intelligence layer to track these trends; his company works with 50 clients employing more than 10,000 software engineers. (Circei has previously contributed to TechCrunch, but this reporter had not met him before.)

He says engineering managers are seeing code acceptance rates of 80% to 90% (the share of AI-generated code that developers approve and keep), but they miss the revisions engineers must make in the following weeks, which drop the real-world acceptance rate to between 10% and 30% of the generated code.
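The gap Circei describes can be made concrete with a little arithmetic. The sketch below is illustrative only, not Waydev’s actual methodology; the function name and line counts are hypothetical.

```python
# Illustrative sketch: an "effective" acceptance rate that discounts
# initially accepted AI-generated code which engineers later rewrite.
# (Hypothetical numbers; not Waydev's actual methodology.)

def effective_acceptance_rate(generated, accepted, surviving):
    """Fraction of generated lines that are accepted AND survive later revision."""
    initial_rate = accepted / generated    # what dashboards report (80-90%)
    survival_rate = surviving / accepted   # what's left after weeks of rework
    return initial_rate * survival_rate

# 85% initial acceptance, but only a quarter of accepted lines survive:
rate = effective_acceptance_rate(generated=1000, accepted=850, surviving=212)
print(f"{rate:.0%}")  # lands in the 10-30% band the article describes
```

The point of multiplying the two rates is that a dashboard showing only the first number overstates productivity by however much code is later thrown away.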

The rise of AI coding tools prompted Waydev, founded in 2017 to provide developer analytics, to overhaul its platform over the past six months. The company is now launching tools that track the metadata produced by AI agents, offering analytics on code quality and cost to give engineering managers deeper insight into both AI usage and effectiveness.


While analytics firms have an incentive to spotlight the problems they find, mounting evidence suggests that large organizations are still figuring out how to use AI tools effectively. Big companies are taking notice: Atlassian bought DX, another engineering intelligence startup, for $1 billion last year, aiming to help its customers understand the return on investment of coding agents.

Data from across the sector tells a consistent story: more code is being produced, but much of it doesn’t stick.

GitClear, another player in this field, published a report in January showing that AI tools boosted productivity, but also that “regular AI users had an average code churn rate 9.4x higher than their non-AI peers”, an increase that outweighed the productivity gains the tools delivered.

Faros AI, an engineering analytics platform, analyzed two years of customer data for its March 2026 report. The conclusion: code churn (lines of code deleted relative to lines added) rose 861% with heavy AI usage.
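As a rough sketch of what such a churn metric looks like in practice (the exact methodology behind the 861% figure is not public, so the line counts below are purely illustrative):

```python
# Hedged sketch of a code-churn metric: lines deleted relative to lines
# added over some window. The methodology behind Faros AI's 861% figure
# is not public; these numbers are purely illustrative.

def churn_ratio(lines_added, lines_deleted):
    """Code churn: lines deleted as a fraction of lines added."""
    return lines_deleted / lines_added

low_ai = churn_ratio(lines_added=2000, lines_deleted=100)  # 5% churn baseline
high_ai = low_ai * (1 + 8.61)  # an 861% increase on that baseline
print(f"low-AI churn {low_ai:.1%}, high-AI churn {high_ai:.1%}")
```

Even from a modest baseline, an 861% increase means nearly half of newly added lines are soon deleted again, which is the rework the article describes.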

Jellyfish, which markets itself as an intelligence platform for AI-augmented engineering, gathered data on 7,548 engineers during the first quarter of 2026. It found that engineers with the largest token allowances submitted the most pull requests (proposed changes to a shared codebase), but the gains did not scale proportionally: they achieved double the output at ten times the token cost. In short, the tools are generating quantity, not quality.
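To see why those ratios imply diminishing returns, divide spend by output. The token and pull-request counts below are hypothetical stand-ins chosen only to match the 2x/10x ratios Jellyfish reported:

```python
# Illustrative arithmetic for the Jellyfish finding: ~2x the pull requests
# at ~10x the token spend works out to ~5x the token cost per pull request.
# (Hypothetical token and PR counts, chosen only to match those ratios.)

def tokens_per_pr(tokens_spent, prs_merged):
    """Token cost per merged pull request."""
    return tokens_spent / prs_merged

typical = tokens_per_pr(tokens_spent=1_000_000, prs_merged=20)
heavy = tokens_per_pr(tokens_spent=10_000_000, prs_merged=40)
print(heavy / typical)  # 5.0: spend grew five times faster than output
```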

Such statistics resonate with developers, who find code reviews and technical debt piling up even as they enjoy the new tools’ capabilities. A common observation is the gap between senior and junior engineers: the latter approve significantly more AI-generated code and consequently face far more rewriting.

Still, even as developers work to understand what their agents are doing, they don’t expect to go back to the old ways anytime soon.

“This marks a new era in software development, and adaptation is necessary; companies are compelled to adapt,” Circei told TechCrunch. “It is not a cycle that will simply pass.”

Hackers are exploiting unpatched vulnerabilities in Windows security to infiltrate organizations.

Over the past two weeks, hackers have broken into at least one organization by exploiting Windows vulnerabilities that a disgruntled security researcher published online, according to a cybersecurity firm.

On Friday, cybersecurity firm Huntress said in a series of posts on X that its analysts have seen hackers exploiting three Windows security vulnerabilities known as BlueHammer, UnDefend, and RedSun.

It is not yet clear who the target of the attack is, nor who the hackers are.

Of the three vulnerabilities being exploited, BlueHammer is so far the only one Microsoft has patched; a fix shipped earlier this week.

The attackers appear to be exploiting the flaws using exploit code that the security researcher made public online.

Earlier this month, a researcher going by Chaotic Eclipse published on their blog what they claimed was code exploiting an unpatched Windows vulnerability. The researcher hinted at a dispute with Microsoft as the reason for releasing the code.

“I wasn’t bluffing Microsoft and I’m doing it again,” they wrote. “A big thanks to MSRC leadership for making this happen,” they added, referring to Microsoft’s Security Response Center, the division responsible for investigating cyberattacks and managing vulnerability reports.


Days later, Chaotic Eclipse released UnDefend, followed by RedSun earlier this week. The researcher posted exploit code for all three vulnerabilities on their GitHub page. 

The three vulnerabilities affect Windows Defender, Microsoft’s built-in antivirus, and allow a hacker to gain elevated or administrator access on a compromised Windows computer.

TechCrunch could not reach Chaotic Eclipse for comment.

In response to a series of specific questions, Microsoft communications director Ben Hope said the company supports “coordinated vulnerability disclosure, a widely recognized industry practice that ensures issues are thoroughly investigated and resolved before public announcement, benefiting both customer safety and the security research community.”

This situation is an example of what the cybersecurity industry calls “full disclosure.” Normally, researchers who discover a flaw notify the software maker to help fix it. The company typically acknowledges the report and, if the vulnerability is confirmed, works on a patch. Often, the company and the researcher agree on a timeline for when the researcher can publicly disclose their findings.

Sometimes, for various reasons, that process breaks down, and researchers publicly reveal details of the vulnerability instead. In some cases, to prove a flaw exists or demonstrate its severity, researchers go further and release “proof-of-concept” code capable of exploiting it.

When that happens, cybercriminals, state-sponsored hackers, and others can grab the code and use it in their own attacks, forcing cybersecurity defenders to scramble to contain the fallout.

“With these being so readily accessible now, and already weaponized for simple use, for better or worse, I believe that ultimately puts us in another tug-of-war between defenders and cybercriminals,” John Hammond, a Huntress researcher who has been tracking the situation, told TechCrunch.

“Circumstances like these compel us to race against our adversaries; defenders urgently attempt to safeguard against malicious actors who swiftly exploit these vulnerabilities… especially now as it is simply ready-made attacker tools,” Hammond said.