TechCrunch Mobility: Uber everywhere, all at once

Welcome back to TechCrunch Mobility, your central hub for news and insights on the future of transportation. Want this delivered to your inbox? Sign up here for free, just click TechCrunch Mobility!

In case you haven’t been paying attention, Uber seems to be everywhere these days, particularly in the world of autonomous vehicles. The company sold off Uber ATG, its in-house autonomous vehicle development unit, back in 2020. Uber divested several of its moonshot bets, while keeping an equity stake in all of them, to focus on its core delivery and ride-hailing businesses.

Still, Uber hasn’t abandoned AVs entirely. Over the past two years, it has locked in partnerships with numerous autonomous vehicle technology companies across delivery, drones, trucking, and robotaxis. Its approach has been global: it has struck deals with Chinese firms to bring robotaxis to Europe and the Middle East, along with partnerships with startups like U.K.-based Wayve.

Now there’s a new partnership with Rivian. The gist of the agreement: Uber will make an initial $300 million investment in Rivian and buy 10,000 fully autonomous R2 robotaxis ahead of a planned rollout in San Francisco and Miami by 2028. Uber has the option to purchase another 40,000 starting in 2030. The fleet will operate exclusively on Uber’s platform.

Here’s my take on the deal. While its total value could reach $1.25 billion, Uber’s upfront capital commitment is relatively small, and the risk sits mostly with Rivian. The arrangement is also unusual in that Rivian is both the developer of the self-driving technology and the manufacturer of the vehicle.

Rivian has yet to start production of the R2 SUV, and it has never tested or deployed a self-driving system built for robotaxis. Raising the stakes further, the robotaxi is supposed to be built at Rivian’s Georgia factory, which is still under construction.

Moreover, the EV maker has already made at least one concession in pursuit of this goal: Rivian says it no longer expects to hit its 2027 profitability target, largely because of the money it is pouring into its autonomy efforts.

TechCrunch event

San Francisco, CA | October 13-15, 2026

In our newsletter, we’re polling readers on whether the risks for Rivian are too great. Subscribe here to get Mobility in your inbox and weigh in on our polls!

A little bird

Image Credits: Bryce Durbin

Speaking of Uber, a little bird told us the ride-hailing giant may have been negotiating the robotaxi deal with Rivian for quite some time. One person with direct knowledge of both companies said such a deal wouldn’t come together overnight. When I pressed for more details, I got a question in return: “Does RJ strike you as someone with such a short strategic viewpoint?” Touché!

Have a tip for us? Reach out to Kirsten Korosec at [email protected] or via Signal at kkorosec.07, or email Sean O’Kane at [email protected]

Deals!

Image Credits: Bryce Durbin

Like Uber, Nvidia is everywhere. Or at least it aims to be. The company has made countless investments, whether direct cash infusions or in-kind chip deals, in autonomous vehicle technology companies. It is also striking partnerships with automakers, as it demonstrated this week at its GTC conference, to push its AV development platform, Nvidia Drive Hyperion.

Nvidia CEO Jensen Huang unveiled new or expanded partnerships with BYD, Geely, Hyundai, and Nissan for its AV development platform during his GTC keynote. GM, Mercedes-Benz, and Toyota have previously signed agreements with Nvidia to utilize the platform. 

Nvidia has been forming partnerships with automakers for years, but the urgency and detail surrounding AVs are noteworthy.

“We have entered the ChatGPT moment for self-driving vehicles. We now realize that independent vehicle operation is achievable,” Huang stated during his GTC address, highlighting that collectively, the four manufacturers produce 18 million cars annually.

Other notable deals …

Advanced Navigation, an Australian startup focused on navigation and autonomous solutions, secured $110 million in a Series C investment round led by Airtree Ventures, with strategic contributions from Quadrant Private Equity and the National Reconstruction Fund Corporation (NRFC).

Arc Boat Company, the electric boat startup from Los Angeles, raised $50 million in a Series C funding round from Eclipse, a16z, Menlo Ventures, Lowercarbon Capital, Necessary Ventures, and Offline Ventures.

BusRight, the school bus routing and technology startup, obtained over $30 million in a round led by Volition Capital.

Jeff Bezos is reportedly seeking to raise $100 billion for a new investment fund focused on acquiring businesses in significant industrial sectors — such as automotive and aerospace. The aim is to modernize these businesses utilizing AI techniques developed by Bezos’ new endeavor, Project Prometheus. 

Rivr, a Zurich-based startup known for its autonomous delivery robot that can climb stairs, has been acquired by Amazon. The specifics of the deal remain undisclosed.

Trevor Milton, the founder of the now-defunct electric truck startup Nikola who received a pardon from President Trump, is attempting to raise $1 billion for AI-enhanced aircraft. 

Zenobē Energy has acquired Revolv, a San Francisco-based fleet charging startup, for an undisclosed sum. 

Notable reads and other tidbits

Image Credits: Bryce Durbin

A cyberattack targeting U.S. vehicle breathalyzer company Intoxalock has left drivers nationwide stranded and unable to start their cars.

Kodiak has expanded its commercial autonomous freight services to include the Dallas-El Paso corridor. This marks the company’s second major route and is fundamental to its network expansion strategy, as stated by COO Michael Wiesinger.

The National Highway Traffic Safety Administration has intensified its investigation into the effectiveness of Tesla’s Full Self-Driving (Supervised) software under low-visibility circumstances. The inquiry has now reached an “engineering analysis,” representing the highest level of scrutiny and a necessary step prior to the agency advising a company to initiate a recall. 

One more thing …

Image Credits: Jay Janner / The Austin American-Statesman / Getty Images

In last week’s edition, I told you to keep an eye out for my interview with Rivian founder and CEO RJ Scaringe. We covered a lot of ground, and I found his thoughts on robotics particularly engaging. In short, Scaringe believes companies are getting industrial robotics wrong. His new venture, Mind Robotics, aims to take a different approach, focusing on robotic hands while steering clear of robots that can do backflips.

As Scaringe shared: “What’s often overlooked in industrial [robotics] — and this is one key point we recognize clearly — is that the work is executed by the hands. Thus, hands are incredibly vital. From the perspective of a robotic system, everything else serves the purpose of positioning the hands correctly. Consequently, the robots’ ability to execute highly intricate maneuvers, such as backflips, only indicates that they contain unnecessary complexity for most tasks.” You can read the full interview here.

Delve accused of deceiving clients through ‘false compliance’

An anonymous Substack post published this week accuses the compliance startup Delve of “misleadingly” assuring “hundreds of customers they were compliant” with privacy and security regulations, which could expose them to “criminal liability under HIPAA and significant fines under GDPR.”

Delve is a Y Combinator-backed startup that announced last year it had raised a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup pushed back against the allegations on its blog, calling the Substack post “misleading” and saying it “contains several inaccurate assertions.”

The Substack post is attributed to “DeepDelver,” who identified themselves as an employee of a (now former) client of Delve. When responding to emailed inquiries from TechCrunch, DeepDelver stated that they and their associates “decided to stay anonymous due to concerns of retaliation from Delve.”

In their narrative, DeepDelver recalled receiving an email in December alleging that the startup had “shared a spreadsheet containing confidential client documents.” While Delve CEO Karun Kaushik reportedly reassured customers in a follow-up email that they were compliant and that no outside party accessed sensitive information, DeepDelver indicated that they and other clients had grown wary.

“Having a common experience of feeling disappointed with the Delve interaction and sensing something suspicious, we decided to collaborate and investigate collectively,” they stated.

Their finding? That Delve “claims to be the quickest platform by fabricating evidence, generating auditor conclusions on behalf of certification companies that rubber stamp reports, and bypassing significant framework prerequisites while assuring clients they’ve attained 100% compliance.”

DeepDelver elaborated on these claims, alleging that the startup provided clients with “fake documentation of board meetings, tests, and processes that never took place,” then compelled those clients to “choose between using fake documentation or conducting mostly manual tasks with minimal genuine automation or AI.”

DeepDelver also asserted that nearly all of Delve’s clients appear to have passed through two auditing firms, Accorp and Gradient, which they referred to as “part of the same operation,” primarily functioning in India, with only a nominal presence in the U.S.

These firms, they claimed, merely rubber-stamp reports produced by Delve. Consequently, DeepDelver stated the startup “reverses” the conventional compliance structure: “By creating auditor conclusions, test processes, and final reports before any independent evaluation takes place, Delve positions itself as both the implementer and examiner. This is not a minor detail. It represents a structural fraud that nullifies the entire attestation.”

Apart from accusing Delve of misleading its clients, DeepDelver indicated that the startup is enabling those clients to “mislead the public by maintaining trust pages that include security measures that were never enacted.” 

DeepDelver mentioned that while their organization was voicing concerns about Delve, the startup “sent us numerous boxes of donuts […] to keep us satisfied.” Nevertheless, DeepDelver’s employer reportedly unpublished its trust page and has ceased relying on the startup for compliance.

In response to the allegations, Delve stated that it does not produce compliance reports at all. Instead, it operates as an “automation platform” that aggregates compliance information and provides auditors with access to that data.

“Final reports and opinions are issued exclusively by independent, licensed auditors, not Delve,” the company asserted.

Delve further indicated that its clients “can select to partner with an auditor of their preference or opt to work with one from Delve’s network of independent, accredited third-party audit firms.” Those auditors, the startup noted, are “established firms widely recognized across the industry, including by other compliance platforms.”

In refuting the allegation of providing clients with “fake evidence,” Delve responded that it is merely offering “templates to assist teams in documenting their processes in line with compliance requirements, as do other compliance providers.”

“Draft templates differ from ‘pre-filled evidence,’” the company stated.

Delve added that it is “actively examining any leaks” and is “continuing to review the Substack.”

When asked about Delve’s rebuttal, DeepDelver expressed to TechCrunch that they were “confounded by the sloppiness, awkwardness, and boldness of it.”

“They are trying to slither out [of] accountability by denying they have ‘pre-filled evidence’ but labeling it as ‘templates’ instead, effectively placing the responsibility on clients for adopting the ‘templates’ as is,” DeepDelver stated. “They’re asserting that they are not responsible for ‘issuing’ the report, which is easy to claim if you interpret issuing a report as providing the final endorsement.”

They added that there are “several very serious allegations” that Delve completely failed to address: “The India claim, the absence of AI (they only reference ‘automations’), and the trust (lol) page featuring controls that were never implemented.”

Evidently, DeepDelver isn’t finished with their critique, promising that “Part II will follow shortly.”

Additionally, following the initial Substack post, a user named James Zhou on X stated they managed to access sensitive details from Delve, such as employee background checks and equity vesting schedules. Dvuln founder Jamieson O’Reilly shared further insights from what O’Reilly described as a discussion with Zhou about “multiple glaring security vulnerabilities in Delve’s external attack surface.”

TechCrunch reached out via email for additional comments to the media contact provided on Delve’s website. The email was undeliverable, but after this article was released, I received a calendar invitation for a “Delve demonstration” set for later this week.

This article was initially published on March 21, 2026. It has been updated with emailed responses from DeepDelver, additional information regarding alleged security vulnerabilities provided by Jamieson O’Reilly, and further details about Delve’s reaction to TechCrunch.

A private visit to Amazon’s Trainium lab, home of the chip that has impressed Anthropic, OpenAI, and even Apple.

Soon after Amazon CEO Andy Jassy unveiled AWS’s historic $50 billion investment deal with OpenAI, Amazon invited me on a private tour of the chip development lab at the heart of the deal, largely on its own dime.

Industry watchers are keeping a close eye on Amazon’s Trainium chip, developed at that site, for its potential to cut AI inference costs and perhaps challenge Nvidia’s near-monopoly.

Intrigued, I decided to accept the invitation.  

The day’s tour guides included the lab’s director, Kristopher King (shown below right), director of engineering Mark Carroll (below left), along with the team’s PR representative who coordinated the visit, Doron Aronson (featured with me later in the article). 

AWS chip lab leaders Mark Carroll and Kristopher King. Image Credits: TechCrunch/Julie Bort

AWS has been the primary cloud platform for Anthropic since the AI lab’s inception — a partnership strong enough to endure Anthropic’s later addition of Microsoft as a cloud partner, alongside Amazon’s expanding collaboration with OpenAI.

The OpenAI partnership designates AWS as the exclusive source for the model creator’s new AI agent builder, Frontier, which may play a significant role in OpenAI’s operations if agents become as impactful as Silicon Valley anticipates. It remains to be seen if this exclusivity holds true as announced. The Financial Times reported this week that Microsoft could view OpenAI’s agreement with Amazon as conflicting with its own deal with OpenAI, which includes access to all of OpenAI’s models and technology.

What makes AWS so attractive to OpenAI? As part of this agreement, the cloud giant has pledged to provide OpenAI with 2 gigawatts of Trainium computing power. This represents a substantial commitment, considering that Anthropic and Amazon’s Bedrock service are already utilizing Trainium chips faster than Amazon can manufacture them. 

There are currently 1.4 million Trainium chips deployed across all three generations, and Anthropic’s Claude runs on more than 1 million of the Trainium2 chips, according to the company.

It’s important to mention that while Trainium was initially designed for quicker, cost-effective model training (a higher priority a couple of years back), it is now also optimized and utilized for inference. Inference — the act of executing an AI model to generate outputs — is presently the largest performance bottleneck in the sector. 

For instance, Trainium2 manages the bulk of the inference load on Amazon’s Bedrock service, which facilitates the development of AI applications by Amazon’s numerous enterprise clients and permits the applications to leverage multiple models.

“Our customer base is expanding as quickly as we can deploy capacity,” King remarked. “Bedrock could grow to rival EC2 someday,” he said, referencing AWS’s massive compute cloud service. 

Amazon’s Trainium3 chip. Image Credits: Amazon

Trainium versus Nvidia

In addition to presenting a viable alternative to Nvidia’s backlog of hard-to-get GPUs, Amazon claims its newly developed chips running on the latest Trn3 UltraServers can cost up to 50% less to operate for similar performance when compared to conventional cloud servers. 

Alongside the Trainium3, introduced in December, this AWS team has also developed new Neuron switches, and Carroll asserts that this combination is revolutionary.

“What that gives us is something substantial,” Carroll stated. The switches enable every Trainium3 chip to communicate with all other chips in a mesh configuration, lowering latency. “That’s why Trainium3 is setting numerous records,” especially in terms of “price per power,” he explained. 

When dealing with trillions of tokens daily, such advancements accumulate.  

Indeed, Amazon’s chip team received accolades from Apple in 2024. In a rare instance of transparency for the typically secretive company, Apple’s AI director openly highlighted how they utilized another of the team’s chips — Graviton, a low-power, ARM-based server CPU and the first notable chip designed by this group. Apple also commended Inferentia — a chip explicitly crafted for inference — and acknowledged Trainium, which was relatively new at that moment. 

These chips embody Amazon’s traditional strategy: Identify consumer needs, then create an in-house alternative to compete on price. 

Historically, a stumbling block for rival chips has been switching costs. Applications built for Nvidia’s chips must be re-architected to run on others, a lengthy process that deters developers from making the jump.

However, the AWS chip team proudly informed me that Trainium now supports PyTorch, a widely used open-source framework for crafting AI models. This encompasses many models available on Hugging Face, a vast repository where developers share open-source models.

The transition, Carroll mentioned, entails “essentially a one-line alteration, followed by recompiling, and then executing on Trainium.” In other words, Amazon aims to chip away at Nvidia’s market hegemony wherever feasible.

Additionally, AWS recently announced a partnership with Cerebras Systems, incorporating that company’s inference chip on servers equipped with Trainium, promising what Amazon asserts will be enhanced, low-latency AI performance. 

But Amazon’s aspirations extend beyond the chips themselves. It also engineers the servers that house the chips. Beyond the networking elements, this team has developed “Nitro,” a hardware-software combination that delivers virtualization technology (allowing multiple software instances to operate independently on the same server); cutting-edge liquid cooling technology; and the server sleds (depicted below) that accommodate this equipment. 

All of this is to manage cost and performance. 

AWS Austin chip lab tour, sled with components. Image Credits: TechCrunch/Julie Bort

Operating 24/7 on the “bring-up” 

Amazon’s custom chip-designing division originated when the cloud giant acquired Israeli chip designer Annapurna Labs in January 2015 for approximately $350 million. Thus, this group has now been engaged in chip design for AWS for more than a decade. The division has preserved its Annapurna heritage and name — its logo is prominently displayed throughout the office. 

Situated in a sleek, chrome-windowed building in the upscale “The Domain” area of Austin, the chip lab resides in a pedestrian-friendly zone filled with shopping and dining options, often referred to as Austin’s Silicon Valley. 

The office features the quintessential tech corporate ambience: cubicle desks, communal areas, and meeting rooms. However, at the back of a high floor in the building lies the actual lab, providing expansive views of the city.  

The lab itself, packed with shelving and roughly the size of two large conference rooms, is a noisy industrial space, thanks largely to the whir of cooling fans. It looks like a cross between a high school shop class and a fancy Hollywood lab set, albeit with engineers in casual clothes rather than white lab coats.

AWS Austin Chip Lab. Image Credits: TechCrunch/Julie Bort
AWS Austin chip lab. Image Credits: TechCrunch/Julie Bort

It’s important to clarify that this is not where the chips are physically manufactured, so no white hazmat suits were required. The Trainium3 is a cutting-edge 3-nanometer chip, fabricated by TSMC, arguably the foremost entity in 3-nanometer production, with additional chips produced by Marvell. 

However, this is the location where the “bring-up” magic occurs.  

“A silicon bring-up is when you receive the chip for the first time, and it’s like a grand overnight celebration. You stick around, almost like a lock-in,” King describes. After 18 months of development, the chip is activated for the initial verification of its functionality as designed. The team even captured some footage of the Trainium3 bring-up and shared it on YouTube.

Spoiler alert: It’s never without issues.  

For Trainium3, the prototype chip was initially designed for air cooling, similar to previous iterations. The current model, however, employs liquid cooling, which provides energy advantages and represents a substantial engineering achievement.

During the bring-up phase, the specifications for how the chip attached to the air-cooling heat sink were misaligned, preventing the chip from being activated. 

Undeterred, the team “immediately retrieved a grinder and began grinding off the metal,” King recounted. To maintain the festive bring-up pizza party atmosphere, they discreetly took the grinding to a conference room.  

Working through the night to resolve technical challenges “is what silicon bring-up is all about,” King stated. 

The lab even features a welding station, where hardware lab engineer and master welder Isaac Guevara demonstrated welding tiny integrated circuit components under a microscope. The work is so difficult that senior leader Carroll candidly admitted he couldn’t do it, to the laughter of Guevara and the other engineers present.

AWS Austin chip lab tour, welding station. Image Credits: TechCrunch/Julie Bort

The lab is equipped with both custom-designed and commercially available tools for testing and diagnosing chip issues. Here’s signal engineer Arvind Srinivasan showing how the lab tests each minuscule component on the chip:

AWS Austin chip lab tour, testing equipment. Image Credits: TechCrunch/Julie Bort

Sleds are the main highlight of the lab 

However, the standout feature of the lab is an entire row displaying each version of the “sleds” the team has engineered. 

AWS Austin chip lab tour, wall of sleds. Image Credits: TechCrunch/Julie Bort

Sleds are the trays that accommodate the Trainium AI chips, Graviton CPU chips, along with supporting boards and components. When stacked on a rack with the networking component, which is also custom-designed by this team, you get the systems that form the core of Anthropic Claude’s success. 

Here’s the sled that was highlighted during the AWS re:Invent conference in December: 

AWS Austin chip lab tour, Trainium3 sled. Image Credits: TechCrunch/Julie Bort

Validated by Anthropic and OpenAI

I expected my guides to boast about the OpenAI deal throughout the tour. They didn’t.

Their reluctance may have stemmed from the aforementioned legal uncertainties potentially surrounding the deal. Yet, the impression I gathered was that these hands-on engineers (currently working on the next version, Trainium4) have had limited opportunities to collaborate with OpenAI thus far. Their day-to-day focus has primarily been on Anthropic’s and Amazon’s requirements.

At present, a significant proportion of Trainium2 chips is deployed in Project Rainier — among the largest AI compute clusters worldwide — which became operational in late 2025 with 500,000 chips. It is utilized by Anthropic. 

Nevertheless, there was a wall-mounted monitor in the main office displaying a quote about how OpenAI plans to utilize Trainium. The pride was palpable, albeit subtle.  

Alongside this lab, the team also operates its own private data center for quality assurance and testing purposes. Located a short drive away, it does not host customer workloads and is situated at a co-location facility, not an AWS data center.

Security protocols are stringent: There are specific measures required to enter the building and access Amazon’s designated area within it.

The cooling system in the data center is so loud that earplugs are required, and the air is thick with the acrid scent of heated metal. It’s not a conducive environment for an average person to linger in. 

Here’s me and Aronson at the AWS Austin chip lab data center, protecting our ears next to live servers. Image Credits: TechCrunch / Julie Bort

In this data center, rows upon rows of servers house sleds that integrate all of Amazon’s latest custom chips: Graviton CPU, liquid-cooled Trainium3, and Amazon Nitro, all efficiently computing. The liquid runs in a closed-loop system, meaning it is recycled, which should also mitigate environmental impacts, according to the engineers. 

Here’s what a current Trn3 UltraServer looks like: Multiple sleds are stacked above and below, with the Neuron switches positioned centrally. Hardware development engineer David Martinez-Darrow is depicted here performing maintenance on a sled:

AWS Austin chip lab tour, data center. Image Credits: TechCrunch/Julie Bort

The team has always drawn attention, but the scrutiny has intensified lately.

Amazon CEO Andy Jassy keeps a close eye on this lab, boasting about its output like a proud parent. In December, he said Trainium had already become a multibillion-dollar business for AWS, calling it one of the pieces of AWS technology he is most excited about. He even highlighted the chip when announcing the OpenAI partnership.

The team feels the pressure, too. Engineers work around the clock for three to four weeks around each bring-up event to resolve any problems and ensure the chips can be mass-produced and installed in data centers.

“It’s crucial that we expedite the process to confirm its operational success,” Carroll noted. “Thus far, we’ve been performing exceptionally well.” 

*Disclosure: Amazon covered the airfare and the cost of one night’s stay at a local hotel. True to its Leadership Principle of Frugality, this was a back-of-the-plane middle seat and a modest room. TechCrunch financed the remaining associated travel expenditures such as Ubers and baggage fees. (Yes, I checked a bag for an overnight trip. I can be a bit high maintenance.) 

Are AI tokens the latest signing bonus, or just a cost of doing business?

This week, a topic that has been circulating in Silicon Valley surged into the limelight: AI tokens as remuneration.

The concept is simple enough: instead of offering engineers only salary, equity, and bonuses, companies would also give them a budget of AI tokens, the computational units that power tools like Claude, ChatGPT, and Gemini. Those tokens can be spent running agents, automating tasks, and churning through code. The argument is that more access to compute makes engineers more productive, and more productive engineers are worth more. It’s framed as an investment in the person who wields them.

Jensen Huang, Nvidia’s leather-jacket-clad CEO, seemed to kick the conversation into high gear when he suggested at the company’s annual GTC event earlier this week that engineers should receive roughly half their base salary again, in tokens. By his math, his top people might burn through around $250,000 a year in AI compute. He called it a recruiting strategy and predicted it would become standard across Silicon Valley.

It’s not entirely clear where the idea originated. Tomasz Tunguz, a prominent Bay Area VC who runs Theory Ventures with a focus on AI, data, and SaaS startups, and whose data-driven analysis has earned him a loyal following over the years, was writing about this back in mid-February, noting that tech startups were already treating inference costs as a “fourth component to engineering remuneration.” Using data from the compensation-tracking platform Levels.fyi, he pegged a top-quartile software engineer’s salary at $375,000. Add $100,000 in tokens and you arrive at $475,000 in total compensation, meaning roughly one dollar in five now goes to compute.
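For readers who want to check the back-of-the-envelope math, here is a minimal sketch using the figures above (the variable names are mine, not Tunguz’s):

```python
# Tunguz's "fourth component" arithmetic, using the article's figures.
base_salary = 375_000    # top-quartile software engineer salary, per Levels.fyi
token_budget = 100_000   # hypothetical annual AI token allowance

total_comp = base_salary + token_budget
compute_share = token_budget / total_comp

print(total_comp)              # 475000
print(f"{compute_share:.0%}")  # 21%, i.e., roughly one dollar in five
```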

This is no accident. Agentic AI has gained significant traction, and the launch of OpenClaw in late January hastened the dialogue considerably. OpenClaw is an open-source AI assistant intended to function continuously — processing tasks, generating sub-agents, and managing a to-do list while its user is asleep. It’s part of a wider shift towards “agentic” AI, referring to systems that take sequences of actions independently over time rather than merely responding to prompts.

As a result, token usage has skyrocketed. Where an individual writing an essay might utilize 10,000 tokens in an afternoon, an engineer managing a fleet of agents can expend millions in a single day — automatically, in the background, without typing anything.

By this weekend, the New York Times had put together an insightful look at the so-called tokenmaxxing phenomenon, finding that engineers at companies like Meta and OpenAI are competing on internal leaderboards that track token usage. Generous token allocations are quietly becoming a standard job perk, akin to dental insurance or the free lunches of an earlier era. One Ericsson engineer in Stockholm told the Times that he likely spends more on Claude than he earns in salary, though his employer covers the costs.

Perhaps tokens really will settle in as the fourth element of engineering compensation. But engineers may want to think twice before chalking this up as an unqualified win. More tokens may mean more leverage at first, but given how quickly things are changing, it doesn’t automatically translate into more job security. For one thing, a large token allocation comes with large expectations. If a company is effectively subsidizing a second engineer’s worth of compute on your behalf, the implicit pressure is to deliver results at double the rate (or more).

There’s a thornier issue, too: once a company’s token spend per employee approaches or exceeds that employee’s salary, the financial case for headcount starts to look different to its finance team. If the compute is doing the work, the question of how many humans need to be orchestrating it becomes harder to avoid.

Jamaal Glenn, a Stanford MBA and former VC turned financial-services CFO based on the East Coast, similarly points out that what looks like a perk can be a way for companies to inflate the perceived value of a compensation package without spending cash or equity, the components that actually compound for an employee over time. Your token budget doesn’t vest. It doesn’t appreciate. It doesn’t factor into your next offer negotiation the way a base salary or equity grant does. If companies manage to normalize tokens as part of pay, they may find it easier to hold cash compensation flat while pointing to a growing compute budget as evidence they’re investing in their people.

That’s good for the company. Whether it’s good for the engineer depends on questions most engineers still don’t have enough information to answer.

It’s been two decades since the inaugural tweet

On March 21, 2006, Jack Dorsey shared a straightforward message: “just setting up my twittr”.

That was, of course, the very first post on the platform still best known as Twitter, even after its rebranding to X by current owner Elon Musk (the acquisition is still being litigated in court). X later became part of Musk’s xAI, which has in turn been folded into SpaceX.

Musk slashed the company’s staff and sparked fresh controversies by integrating xAI’s chatbot Grok, which at one point called itself “MechaHitler” and was abused to generate sexual deepfakes at scale, including of real women and children.

Although X holds a robust grip on particular user demographics, including large segments of the tech sector, it also encounters rivalry from platforms like Bluesky and Meta’s Threads. One report indicates that Threads has recently surpassed X in daily mobile users. (All these primarily text-focused platforms are overshadowed by apps like Instagram and TikTok.)

Regarding Dorsey’s initial tweet, the Twitter co-founder subsequently auctioned it as an NFT for $2.9 million. However, its value has purportedly fallen, with the purchaser unable to resell it.

Publisher withdraws horror novel ‘Shy Girl’ due to AI issues

Hachette Book Group announced that it will not be releasing a novel titled “Shy Girl” due to worries that artificial intelligence may have been used to create the content.

The book was set to be launched in the United States this spring. Hachette also stated it would withdraw the book in the United Kingdom, where it is currently on sale. 

While the publisher said the decision followed a thorough review of the text, reviewers on Goodreads and YouTube had been speculating that the novel might be AI-generated. The New York Times also reported that it had asked Hachette about the “Shy Girl” concerns the day before the announcement.

In a correspondence with the NYT, author Mia Ballard refuted the claim of using AI for her novel, attributing the issue to an acquaintance she had hired to edit the initial, self-published iteration of “Shy Girl.” Ballard mentioned she is seeking legal recourse, asserting that as a consequence of the debacle “my mental health is at an all-time low and my reputation is tarnished for something I didn’t even personally do.”

Writer Lincoln Michel and other analysts in the field have pointed out that U.S. publishers seldom engage in significant editing when they obtain works that have previously been released in other formats.

Why Wall Street remained unimpressed by Nvidia’s major conference

When Nvidia’s CEO Jensen Huang presented his annual GTC keynote on Monday, the stock of the $4-trillion corporation began to decline.

Wall Street investors, it seems, were not won over by the leather-jacketed founder’s upbeat 2.5-hour keynote. Instead, they appeared more worried about AI’s uncertain future and the potential for a market bubble. That anxiety stands in stark contrast to the buoyant mood in Silicon Valley, where confidence abounds.

Huang spent over two hours discussing the company’s recent advancements, including innovations in video game graphics technology, upgraded networking systems, agreements in the autonomous vehicle sector, and a newly designed chip in collaboration with Groq to enhance AI inference in the Vera Rubin system. He also shared staggering figures regarding Nvidia’s operations and the broader industry. Huang labeled the AI agent ecosystem as a $35 trillion market and designated the physical AI and robotics sector as a $50 trillion market.

He further indicated his expectation of seeing $1 trillion in purchase orders for the company’s Blackwell and Vera Rubin chips — just two among Nvidia’s extensive product lineup — by the end of 2027.

Shouldn’t all this excite investors? Their muted reaction isn’t surprising, according to Futurum CEO Daniel Neuman, who spoke with TechCrunch.

A significant new uncertainty

“[AI] is remarkably effective, transformational, and advancing rapidly, to the point where we don’t truly comprehend its implications for our societal constructs,” Neuman stated. “The markets despise uncertainty. The pace of innovation has introduced a considerable new uncertainty that I believe most individuals did not anticipate.”

Some of this uncertainty arises from misleading information circulating in the market, Neuman noted, adding that reports about low enterprise AI adoption do not reflect the complete reality — at least, not according to his discussions.


“Enterprise AI adoption is poised to hit a tipping point and scale up rapidly,” Neuman said. “I genuinely believe it’s already happening. When you argue it isn’t, I think what you’re likely indicating is that the [return on investment] and the outcomes remain somewhat unclear, and companies are referring to surveys and reports that are primarily based on data that is six months old. It simply takes time to compile the data.”

This viewpoint is supported when examining Nvidia’s past quarter figures. While businesses might not be advertising their AI ROI, they are increasingly investing in Nvidia’s technology. The company not only meets but exceeds its ambitious targets and quarterly projections. Nvidia’s revenue soared by 73% year-over-year last quarter.

There’s no indication this will change anytime soon as evidenced by Nvidia confirming this week that Amazon intends to order 1 million GPUs, along with additional AI infrastructure, by the end of 2027 for Amazon Web Services (AWS), as reported by Reuters.

Kevin Cook, a senior equity strategist at Zacks Investment Research, concurred with Neuman and humorously mentioned to TechCrunch that investor discontent does not alter the fact that the overall stock market relies heavily on Nvidia since its technology supports many of these enterprises.

“The economy is somewhat revolving around Nvidia,” Cook remarked. “It’s establishing this essential infrastructure. Various companies in hardware, software, and physical AI — even Caterpillar is now part of physical AI — are developing upon these platforms.”

This doesn’t imply that an AI bubble does not currently exist or couldn’t arise in the future. However, although GTC may not have positively impacted Nvidia’s stock, the overall uncertainty doesn’t appear to be a challenge for Nvidia. The company is clearly advancing relentlessly, propelling much of the global economy along with it.

“Nvidia, as you know, is a platform company,” Huang stated during his GTC keynote. “We possess technology. We have our platforms. We have a vibrant ecosystem, and today there are probably 100% of the $100 trillion of industry represented here.”

How fusion energy operates and the startups aiming for it

For many years, people have aimed to tap into the energy of stars to create electricity on Earth. And for almost as long, accomplishing this goal has always appeared to be about ten years away.

Now, a crop of startups is closer than ever, racing to build fusion reactors capable of supplying power to the grid.

Fusion startups have attracted over $10 billion in funding, with more than a dozen securing upwards of $100 million. Numerous large funding rounds have concluded in the past year, as investors are drawn to the sector due to increasing energy demands from data centers and the progress being made by fusion startups.

At its essence, fusion energy aims to harness the energy emitted from merging atoms to produce electricity. Humans have understood how to combine atoms for decades, ranging from the hydrogen bomb—an instance of uncontrolled nuclear fusion—to various fusion devices constructed in laboratories worldwide. Experimental fusion machines have managed to control nuclear fusion, with one example generating more energy than was needed to initiate the process.

However, none have managed to yield a surplus sufficient to establish a viable power plant.

To address this challenge, fusion startups are exploring multiple methods. Experts hold differing views on which strategies have the best likelihood of success, though the sector remains in its early stages, meaning nothing is certain.

Here’s a concise summary of the primary methods for achieving fusion power.


Magnetic confinement

Magnetic confinement represents one of the most commonly used methods, employing strong magnetic fields to contain plasma, the mixture of superheated particles essential to a fusion device.

The magnets need to be exceedingly powerful. Commonwealth Fusion Systems (CFS), for instance, is building magnets capable of producing 20-tesla magnetic fields, roughly 13 times stronger than a conventional MRI machine. To cope with the enormous electricity requirements, the magnets are made from high-temperature superconductors, which must still be cooled to –253˚C (–423˚F) using liquid helium.
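The comparison is easy to verify. A quick sketch, assuming a typical clinical MRI field of about 1.5 tesla (a common figure, not one CFS cites):

```python
# Rough check of the field-strength comparison and the temperature conversion.
# 1.5 T is a typical clinical MRI field strength, assumed for illustration.
cfs_field_t = 20.0  # CFS magnet field, tesla
mri_field_t = 1.5   # typical MRI field, tesla

print(f"CFS magnet vs. MRI: {cfs_field_t / mri_field_t:.1f}x")  # 13.3x

def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(f"Operating temp: {c_to_f(-253):.0f} F")  # -423 F
```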

CFS is in the process of developing a demonstration device named Sparc on a significantly expedited schedule in Massachusetts. The company expects to power it up sometime in late 2026, and if successful, it plans to commence construction of Arc, its commercial power plant, in Virginia in 2027 or 2028. 

There are two primary categories of fusion devices that utilize magnetic confinement: tokamaks and stellarators.

Tokamaks were initially conceptualized by Soviet scientists in the 1950s, and they have been extensively researched since then. Tokamaks come in two fundamental forms: a doughnut shape with a D-profile and a spherical version featuring a small opening in the center. The Joint European Torus (JET) and ITER are two prominent experimental tokamaks; JET operated in the UK from 1983 to 2023, while ITER is slated to start functioning in France in the late 2030s.

Based in the UK, Tokamak Energy is innovating a spherical tokamak design. Its ST40 experimental apparatus is currently undergoing enhancements.

Stellarators are the other main type of magnetic confinement device. They resemble tokamaks in that they hold plasma in a roughly doughnut-shaped loop. But where tokamaks are symmetric, stellarators twist and turn. The unusual shape comes from modeling the plasma’s dynamics and optimizing the magnetic field around its behavior rather than imposing a regular form.

Wendelstein 7-X, a substantial stellarator equipped with modular superconducting coils operated by the Max Planck Institute for Plasma Physics, has been functional in Germany since 2015. Several startups are also advancing their own stellarators, including Proxima Fusion, Renaissance Fusion, Thea Energy, and Type One Energy. 

Inertial confinement

The other primary method of fusion is referred to as inertial confinement, which compresses fuel pellets until the atoms within them fuse.

Most inertial confinement designs use pulses of laser light to compress the fuel pellets: multiple laser beams fire at once, converging on the pellet from all directions.

So far, inertial confinement is the only method that has achieved a landmark known as scientific breakeven, whereby the reaction yields more energy than it consumes. Such experiments have taken place at the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory in California. It’s important to note that measurements to determine scientific breakeven do not account for the electricity needed to power the experimental facility. 
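Breakeven is usually expressed as a gain factor Q, the ratio of fusion energy out to driver energy in. A sketch using the figures widely reported for NIF’s December 2022 shot (treat them as illustrative, not official): about 2.05 MJ of laser energy delivered, 3.15 MJ of fusion yield, and on the order of 300 MJ of electricity drawn to charge the laser banks.

```python
# Gain factor Q = fusion energy out / driver energy in.
# Figures are the widely reported values for NIF's Dec 2022 shot,
# used here for illustration only.
laser_energy_mj = 2.05   # laser energy delivered to the fuel pellet
fusion_yield_mj = 3.15   # fusion energy released
wall_plug_mj = 300.0     # rough electricity drawn to charge the laser banks

q_scientific = fusion_yield_mj / laser_energy_mj
q_wall_plug = fusion_yield_mj / wall_plug_mj

print(f"Scientific gain Q: {q_scientific:.2f}")  # 1.54 -> breakeven achieved
print(f"Wall-plug gain: {q_wall_plug:.3f}")      # ~0.01 -> far from a power plant
```

The gap between the two ratios is exactly why Q > 1 in the lab is still a long way from a viable power plant.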

Nevertheless, nearly a dozen startups are optimistic about inertial confinement and are creating reactors based on this concept. Notable examples utilizing lasers include Focused Energy, Inertia Enterprises, Marvel Fusion, and Xcimer.

Two firms are not using lasers: First Light Fusion, which proposes using projectiles, and Pacific Fusion, which plans to use electromagnetic pulses instead.

More to come

These are the two main strategies for achieving fusion power, though they are not the only options available. In the near future, we’ll provide additional information about alternative designs, including magnetized target fusion, magnetic-electrostatic confinement, and muon-catalyzed fusion.

New legal document shows Pentagon informed Anthropic that the two parties were almost in agreement — just a week after Trump announced the relationship was over

Anthropic presented two sworn statements to a federal court in California late Friday, countering the Pentagon’s claim that the AI firm poses an “unacceptable threat to national security” and asserting that the government’s argument is based on technical misunderstandings and allegations that were never actually articulated during the lengthy negotiations prior to the conflict.

The statements were submitted alongside Anthropic’s reply brief in its lawsuit against the Department of Defense and come just before a hearing scheduled for this Tuesday, March 24, with Judge Rita Lin in San Francisco.

This conflict originated in late February when President Trump and Defense Secretary Pete Hegseth announced their intention to sever ties with Anthropic after the company declined to permit unrestricted military usage of its AI technology.

The declarations were made by Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the firm’s Head of Public Sector.

Heck, a former National Security Council official who served in the Obama White House before moving to Stripe and then Anthropic, where she runs government relations and policy, attended the February 24 meeting at which CEO Dario Amodei met with Defense Secretary Hegseth and Pentagon Under Secretary Emil Michael.

In her statement, Heck highlights what she labels a fundamental falsehood in the government’s documents: that Anthropic sought some form of approval authority over military operations. She emphatically states that this assertion is false. “At no moment during Anthropic’s discussions with the Department did I or any other employee from Anthropic indicate that the company desired such a role,” she wrote.

Additionally, she asserts that the Pentagon’s concern regarding Anthropic potentially disabling or modifying its technology during operations was never brought up in negotiations. Instead, she indicates, this issue arose for the first time in the government’s court submissions, leaving Anthropic without the chance to respond.


A significant point in Heck’s declaration that is likely to attract attention is that on March 4 — one day after the Pentagon formally established its supply-chain risk determination against Anthropic — Under Secretary Michael sent an email to Amodei indicating that the two parties were “very close” on the issues that the government currently uses to argue Anthropic is a national security threat: its stance on autonomous weapons and mass surveillance of American citizens.

The email, which Heck includes as an exhibit to her declaration, is important to consider alongside Michael’s public statements in the subsequent days. On March 5, Amodei issued a statement declaring that the company had been engaged in “productive discussions” with the Pentagon. The following day, Michael stated on X that “there is no active Department of War discussion with Anthropic.” A week later, he told CNBC that there was “no opportunity” for renewed negotiations.

Heck seems to suggest: If Anthropic’s position on those two matters is what makes it a national security concern, why did the Pentagon’s own official say the two parties were practically aligned on those very issues immediately following the risk designation? (While she refrains from explicitly stating that the government leveraged the designation as a negotiation tactic, the timeline she presents raises the question.)

Ramasamy contributes a different expertise to the matter. Before joining Anthropic in 2025, he worked for six years at Amazon Web Services overseeing AI implementations for government clients, including those in classified settings. At Anthropic, he has been instrumental in assembling the team that integrated its Claude models into national security and defense applications, including the $200 million contract with the Pentagon announced last summer.

His declaration addresses the government’s assertion that Anthropic could potentially interfere with military operations by disabling the technology or changing how it operates, which Ramasamy argues is not technically feasible. According to him, once Claude is deployed within a government-secured, “air-gapped” environment managed by a third-party contractor, Anthropic cannot access it; there is no remote kill switch, no backdoor, and no means to apply unauthorized updates. Any kind of “operational veto” is imaginary, he suggests, clarifying that any modification to the model would require explicit consent and action from the Pentagon.

Ramasamy further contends that Anthropic cannot even see what government users are entering into the system, much less gather that data.

He also challenges the government’s argument that Anthropic’s employment of foreign nationals creates a security risk. He points out that Anthropic employees have gone through U.S. government security clearance assessments — the same vetting process mandated for access to classified information — adding in his declaration that “to my knowledge,” Anthropic is the only AI firm where cleared staff actually developed the AI models intended for classified use.

Anthropic’s lawsuit claims that the supply-chain risk designation — the first ever imposed on an American company — constitutes governmental retaliation for the firm’s publicly voiced opinions on AI safety, in violation of the First Amendment.

The government, in a 40-page document filed earlier this week, rejected that narrative entirely, asserting that Anthropic’s refusal to permit all lawful military applications of its technology was a business choice, not safeguarded speech, and that the designation was simply a national security decision and not punishment for the company’s viewpoints.

Elon Musk deceived Twitter investors in his attempts to back out of the acquisition, according to the jury.

A civil jury in California declared on Friday that Elon Musk deliberately misled Twitter investors when he sought to withdraw from his $44 billion acquisition of the platform in 2022.

At that moment, Musk had tweeted about Twitter having an excessive number of bots, which contributed to his later attempt to back out of the acquisition. (Twitter subsequently filed a lawsuit against Musk to compel him to complete the transaction.)

“Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users,” Musk wrote on the platform he has since rebranded as X.

In the days that followed Musk’s tweet, Twitter’s stock dropped by 8%. Investor Giuseppe Pampena sued Musk on behalf of other former Twitter investors who sold shares between May 13 (the day of the tweet) and October 4, when Musk agreed to proceed with the deal after all.

Pampena’s lawsuit claimed that Musk intentionally expressed his worries about Twitter to create doubt regarding the platform’s stability, which would artificially lower its stock price, leading to losses for those who sold shares during that timeframe. Musk’s attorneys contended that he was voicing legitimate worries about the bot issue. However, the jury found the plaintiff’s argument more persuasive.

It remains uncertain how much Musk will owe these former Twitter shareholders, but Pampena’s lawyer indicated that damages could potentially reach $2.6 billion, as reported by CNBC. For Musk, this isn’t a significant setback, as Bloomberg approximates his net worth at over $660 billion.

This is not Musk’s first time facing legal issues over tweets. In 2018, he tweeted that he had secured funding to take Tesla private at $420 a share, intending to buy out public shareholders and remove the company from stock exchanges. The SEC claimed these tweets were misleading, accusing Musk of securities fraud. Musk later had to testify in court that he was not joking about cannabis (420 being a commonly known reference to marijuana) and insisted he genuinely believed he would take Tesla private at $420 per share, which represented a significant premium on Tesla’s stock price at that time.

Techcrunch event

San Francisco, CA
|
October 13-15, 2026

Musk won a similar lawsuit over the “funding secured” tweet, but this time he will have to pay.

Following the acquisition of Twitter, Musk renamed the company X, subsequently merging it with his new AI venture, xAI. The newly formed entity was valued at $113 billion, as stated by Musk. Then, last month, SpaceX merged with xAI. Musk mentioned that the merger was driven by his aspiration to establish data centers in space.