Would you like to construct a robotic snowman?

Nvidia’s GTC event showcased a plethora of highlights: sales forecasts soaring into the trillions, groundbreaking graphics tech capable of transforming video game aesthetics, bold claims that every business ought to implement an OpenClaw strategy, and even a robotic rendition of the cherished snowman Olaf from Disney’s “Frozen.”

In the latest episode of TechCrunch’s Equity podcast, TechCrunch’s Kirsten Korosec, Sean O’Kane, and I discussed CEO Jensen Huang’s keynote and explored its implications for Nvidia’s prospects. Additionally, a significant portion of our talk centered on poor Olaf, whose microphone had to be muted when he began going off-topic.

Even if the demonstration had gone off without a hitch, Sean might still have had concerns; he pointed out that these presentations typically concentrate on “the engineering challenges” rather than the “truly complex gray areas” on the social side.

“What if a kid kicks Olaf over?” Sean wondered. “And every other child witnessing Olaf being toppled has their entire Disney experience spoiled, damaging the brand?”

Below is an excerpt of our conversation, edited for length and clarity.

Anthony: [CEO Jensen Huang] essentially stated that it’s imperative for every business to have an OpenClaw strategy now. I view that as an exceedingly grand statement meant to capture attention; it’s fascinating considering the current pivotal moment for OpenClaw. 

The founder has transitioned to OpenAI. Thus, it is now an open-source initiative that could potentially thrive and evolve independently of its originator, or it could stagnate. If companies like Nvidia are heavily investing into it, then it’s more likely to continue developing. However, it will be intriguing to see a year from now whether that statement proves to be insightful or if people are saying, “Open what?”

Kirsten: For Nvidia, it costs them virtually nothing in the larger context to initiate what they refer to as NemoClaw, an open-source endeavor they developed with the OpenClaw originator. Yet, if they fail to act, they stand to lose a lot. Hence, my interpretation of Jensen’s remark that “Every enterprise needs to have an OpenClaw strategy” is that “Nvidia must devise a solution or strategy for businesses, as if it succeeds, it opens another avenue for Nvidia to align with numerous other firms.” Therefore, the risk of inaction is far greater than attempting something that may not lead anywhere.

Sean: The fundamental inquiry here is why we haven’t discussed what is evidently the ultimate goal for Nvidia, which could make it the first $100 trillion entity: a robot Olaf.

Anthony: How could I overlook that?

Kirsten: Anthony, just fast forward to the conclusion of the two and a half hours to catch this.

Thus, the Olaf robot makes its appearance, a signature move for Jensen, who revels in these demonstrations, which can vary in success. It also serves to showcase Nvidia’s innovation in robotics; I’m unsure if Olaf was genuinely speaking in real-time or if it was pre-programmed — it felt a touch pre-set, featuring specific trigger words.

However, the most entertaining aspect was that they had to mute its mic because it started rambling at the end. Then it moved toward its exit and was being lowered gradually. You could still see it speak on video, albeit without a mic.

Sean: Now we just need to equip this little robot with a wheelbase. I know the ideal founder who can provide that. 

Honestly, these demonstrations always have a silly touch. I don’t want to get on a soapbox, since we touched on this earlier in the week, but this was an impressive demo until it stumbled a bit at the end.

This serves as yet another excellent example of how robotics presents numerous intriguing engineering dilemmas, fascinating physics challenges, and compelling integration issues; all of it was presented in collaboration with Disney, promising a future for Disney parks where you can interact with Olaf from “Frozen” and take photographs.

Nevertheless, these initiatives often ignore — or at least do not emphasize in events like this — the myriad other factors to consider when deploying such technologies. A notable YouTuber, Defunctland, produced an extensive video on this topic — four hours long, not overly lengthy — discussing Disney’s history with integrating robotic innovations in their parks.

The engineering hurdles are genuinely captivating, and it’s enlightening to learn about that past, but it consistently circles back to the same inquiry: What happens if a child kicks Olaf over? Then every other child witnessing Olaf being knocked down has their entire Disney trip ruined, negatively impacting the brand?

There’s a substantial social aspect associated with all this. Although it may sound trivial, this question is also pivotal in discussions regarding humanoid robots. There’s a lot of excitement surrounding various innovations, yet the conversation about the complex and murky social challenges involved in their integration into people’s lives remains limited. The engineering challenges, while impressive, consistently take center stage.

Kirsten: I have a counter-argument, and then we must transition to our next [topic]. This presents a job creation opportunity, as Olaf will need a human caretaker at Disneyland, likely dressed as Elsa or something similar. One could envision this engineering endeavor creating jobs.

Cursor acknowledges that its latest coding model is built on Moonshot AI’s Kimi

The AI coding firm Cursor introduced a new model this week named Composer 2, which it touted as delivering “frontier-level coding intelligence.” 

Nevertheless, an X user known as Fynn quickly contended that Composer 2 was merely “Kimi 2.5” enhanced with extra reinforcement learning — Kimi 2.5 is an open-source model that was recently unveiled by Moonshot AI, a Chinese enterprise supported by Alibaba and HongShan (previously Sequoia China). 

To support this claim, Fynn referenced code that appeared to reveal Kimi as the underlying model.

“[A]t least change the model ID,” they derided.

This was an unexpected turn, considering Cursor is a well-capitalized U.S. startup that secured a $2.3 billion funding round last autumn at a $29.3 billion valuation, and is said to be generating over $2 billion in annualized revenue. Moreover, the firm did not mention Moonshot AI or Kimi in its announcement.

However, Lee Robinson, Cursor’s vice president of developer education, soon admitted, “Yep, Composer 2 began from an open-source foundation!” But he clarified, “Only about 1/4 of the compute allocated to the final model came from the base, with the remainder sourced from our training.” Consequently, he noted that Composer 2’s performance across various benchmarks is “very different” from that of Kimi.

Robinson also asserted that Cursor’s application of Kimi aligned with the terms of its license, a notion reiterated by the Kimi account on X in a subsequent message congratulating Cursor, stating that Cursor utilized Kimi “as part of an authorized commercial partnership” with Fireworks AI.

“We are delighted to see Kimi-k2.5 serving as the foundation,” the Kimi account stated. “Witnessing our model seamlessly integrated through Cursor’s ongoing pretraining & intensive RL training is the open model ecosystem we cherish supporting.”

So why not recognize Kimi from the beginning? Beyond any potential embarrassment of not developing a model independently, leveraging a Chinese model might feel particularly sensitive at this moment, especially with the so-called AI “arms race” often depicted as a critical struggle between the United States and China. (For instance, observe Silicon Valley’s noticeable anxiety after the Chinese company DeepSeek launched a competing model early last year.)

Cursor co-founder Aman Sanger acknowledged, “Not mentioning the Kimi base in our blog initially was a mistake. We’ll rectify that for the next model.”

Elon Musk reveals plans for chip production at SpaceX and Tesla

Recently, Elon Musk detailed ambitious aspirations for a collaboration in chip production involving his companies, Tesla and SpaceX.

According to Bloomberg, Musk revealed his intentions on Saturday night at an event in Austin, Texas, presenting a photo indicating that the “Terafab” facility he envisions would be built near Tesla’s Austin headquarters and gigafactory.

Musk said he’s pursuing the venture because chipmakers are not producing semiconductors fast enough to meet his companies’ AI and robotics demands: “We either establish the Terafab or we lack the chips, and we require the chips, hence we establish the Terafab.”

The aim is to produce chips capable of supporting 100 to 200 gigawatts of computational power annually on Earth, as well as a terawatt in space, Musk stated. He did not provide a timeline for these initiatives.

As noted by Bloomberg, Musk lacks a background in semiconductor fabrication; however, he is known for frequently overpromising regarding objectives and deadlines. 

TechCrunch Mobility: Uber everywhere, all at once

Welcome back to TechCrunch Mobility, your primary source for news and insights on the future of transportation. To receive it in your inbox for free, subscribe here — just click TechCrunch Mobility!

If you haven’t been paying attention, Uber seems to be everywhere lately, particularly in the realm of autonomous vehicles. The firm offloaded Uber ATG, its internal autonomous vehicle development unit, in 2020. Uber divested several of its ambitious ventures — while keeping an equity stake in all of them — to sharpen its focus on its core delivery and ride-hailing businesses.

Nonetheless, Uber has not entirely abandoned AVs. Over the last two years, it has secured partnerships with numerous companies specializing in autonomous vehicle technology in areas like delivery, drones, trucking, and robotaxis. Its approach has been global, striking deals with Chinese firms to introduce robotaxis in Europe and the Middle East, along with collaborations with startups like the U.K.-based Wayve. 

Now, there’s a new partnership with Rivian. The gist of the agreement is that Uber will make an initial investment of $300 million in Rivian and will acquire 10,000 fully autonomous R2 robotaxis in anticipation of a rollout in San Francisco and Miami by 2028. Uber has the option to purchase an additional 40,000 starting in 2030. This fleet will be utilized exclusively within Uber’s platform. 

Here’s my take on this agreement. Although the total value of the deal could reach $1.25 billion, Uber’s initial capital requirement is relatively minor, and the risk is predominantly on Rivian. The arrangement is also unique in that Rivian is both the developer of the self-driving technology and the manufacturer of the vehicle.

Rivian has yet to commence production of the R2 SUV, nor has it tested or deployed a self-driving system tailored for robotaxis. Raising the bar even higher, this robotaxi is intended to be built in Rivian’s Georgia facility, which remains under construction. 

Moreover, the EV manufacturer has already made at least one concession in pursuit of this goal. Rivian has indicated that it no longer anticipates achieving its profitability target in 2027, mainly due to the considerable financial resources it is investing in its autonomy initiatives.

In our newsletter, we conducted a survey asking if the risks for Rivian are too significant. Subscribe here to receive Mobility in your inbox and make your opinion known in our surveys!

A little bird

Image Credits: Bryce Durbin

On the topic of Uber, a little bird revealed that the ride-hailing giant may have been negotiating with Rivian regarding its robotaxi deal for quite some time. One individual with direct knowledge of both firms indicated that such a deal wouldn’t materialize instantly. When I sought further details, I was met with a question: “Does RJ strike you as someone with such a short strategic viewpoint?” Touché!

Have a tip for us? Reach out to Kirsten Korosec at [email protected] or via Signal at kkorosec.07, or email Sean O’Kane at [email protected]

Deals!

Image Credits: Bryce Durbin

Similar to Uber, Nvidia is omnipresent. Or at least aims to be. The firm has made countless investments — be it direct cash infusions or in-kind chip agreements — in autonomous vehicle technology firms. Additionally, it is establishing partnerships with automotive manufacturers — as demonstrated this week during its GTC conference — to promote its autonomous vehicle development platform known as Nvidia Drive Hyperion. 

Nvidia CEO Jensen Huang unveiled new or expanded partnerships with BYD, Geely, Hyundai, and Nissan for its AV development platform during his GTC keynote. GM, Mercedes-Benz, and Toyota have previously signed agreements with Nvidia to utilize the platform. 

Nvidia has been forming partnerships with auto manufacturers for years, but the urgency and detail surrounding AVs is noteworthy.  

“We have entered the ChatGPT moment for self-driving vehicles. We now realize that independent vehicle operation is achievable,” Huang stated during his GTC address, highlighting that collectively, the four manufacturers produce 18 million cars annually.

Other notable deals …

Advanced Navigation, an Australian startup focused on navigation and autonomous solutions, secured $110 million in a Series C investment round led by Airtree Ventures, with strategic contributions from Quadrant Private Equity and the National Reconstruction Fund Corporation (NRFC).

Arc Boat Company, the electric boat startup from Los Angeles, raised $50 million in a Series C funding round from Eclipse, a16z, Menlo Ventures, Lowercarbon Capital, Necessary Ventures, and Offline Ventures.

BusRight, the school bus routing and technology startup, obtained over $30 million in a round led by Volition Capital.

Jeff Bezos is reportedly seeking to raise $100 billion for a new investment fund focused on acquiring businesses in significant industrial sectors — such as automotive and aerospace. The aim is to modernize these businesses utilizing AI techniques developed by Bezos’ new endeavor, Project Prometheus. 

Rivr, a Zurich-based startup known for its autonomous delivery robot that can climb stairs, has been acquired by Amazon. The specifics of the deal remain undisclosed.

Trevor Milton, the founder of the now-defunct electric truck startup Nikola who received a pardon from President Trump, is attempting to raise $1 billion for AI-enhanced aircraft. 

Zenobē Energy has acquired Revolv, a San Francisco-based fleet charging startup, for an undisclosed sum. 

Notable reads and other tidbits

Image Credits: Bryce Durbin

A cyberattack targeting U.S. vehicle breathalyzer company Intoxalock has left drivers nationwide stranded and unable to start their cars.

Kodiak has expanded its commercial autonomous freight services to include the Dallas-El Paso corridor. This marks the company’s second major route and is fundamental to its network expansion strategy, as stated by COO Michael Wiesinger.

The National Highway Traffic Safety Administration has intensified its investigation into the effectiveness of Tesla’s Full Self-Driving (Supervised) software under low-visibility circumstances. The inquiry has now reached an “engineering analysis,” representing the highest level of scrutiny and a necessary step prior to the agency advising a company to initiate a recall. 

One more thing …

Image Credits: Jay Janner / The Austin American-Statesman / Getty Images

In last week’s edition, I said to keep an eye out for my conversation with Rivian founder and CEO RJ Scaringe. We covered a range of topics, and I found his insights on robotics particularly engaging. In short, Scaringe believes companies are getting industrial robotics wrong. His new venture, Mind Robotics, aims to take a different approach, emphasizing robotic hands and skipping robots that can do backflips.

As Scaringe shared: “What’s often overlooked in industrial [robotics] — and this is one key point we recognize clearly — is that the work is executed by the hands. Thus, hands are incredibly vital. From the perspective of a robotic system, everything else serves the purpose of positioning the hands correctly. Consequently, the robots’ ability to execute highly intricate maneuvers, such as backflips, only indicates that they contain unnecessary complexity for most tasks.” You can read the full interview here.

Delve accused of deceiving clients through ‘false compliance’

A Substack post released this week anonymously accuses the compliance startup Delve of “misleadingly” assuring “hundreds of customers they were compliant” with privacy and security regulations, which could lead to “criminal liability under HIPAA and significant fines under GDPR.”

Delve is a startup backed by Y Combinator that last year declared it had raised a $32 million Series A at a $300 million valuation. (The funding round was led by Insight Partners.) On Friday, the startup made efforts to counter the allegations on its blog, labeling the Substack post as “misleading” and asserting it “contains several inaccurate assertions.”

The Substack post is attributed to “DeepDelver,” who identified themselves as an employee of a (now former) client of Delve. When responding to emailed inquiries from TechCrunch, DeepDelver stated that they and their associates “decided to stay anonymous due to concerns of retaliation from Delve.”

In their narrative, DeepDelver recalled receiving an email in December alleging that the startup had “shared a spreadsheet containing confidential client documents.” While Delve CEO Karun Kaushik reportedly reassured customers in a follow-up email that they were compliant and that no outside party accessed sensitive information, DeepDelver indicated that they and other clients had grown wary.

“Having a common experience of feeling disappointed with the Delve interaction and sensing something suspicious, we decided to collaborate and investigate collectively,” they stated.

Their finding? That Delve “claims to be the quickest platform by fabricating evidence, generating auditor conclusions on behalf of certification companies that rubber stamp reports, and bypassing significant framework prerequisites while assuring clients they’ve attained 100% compliance.”

DeepDelver elaborated on these claims, alleging that the startup provided clients with “fake documentation of board meetings, tests, and processes that never took place,” then compelled those clients to “choose between using fake documentation or conducting mostly manual tasks with minimal genuine automation or AI.”

DeepDelver also asserted that nearly all of Delve’s clients appear to have passed through two auditing firms, Accorp and Gradient, which they referred to as “part of the same operation,” primarily functioning in India, with only a nominal presence in the U.S.

These firms, they claimed, merely rubber-stamp reports produced by Delve. Consequently, DeepDelver stated the startup “reverses” the conventional compliance structure: “By creating auditor conclusions, test processes, and final reports before any independent evaluation takes place, Delve positions itself as both the implementer and examiner. This is not a minor detail. It represents a structural fraud that nullifies the entire attestation.”

Apart from accusing Delve of misleading its clients, DeepDelver indicated that the startup is enabling those clients to “mislead the public by maintaining trust pages that include security measures that were never enacted.” 

DeepDelver mentioned that while their organization was voicing concerns about Delve, the startup “sent us numerous boxes of donuts […] to keep us satisfied.” Nevertheless, DeepDelver’s employer reportedly unpublished its trust page and has ceased relying on the startup for compliance.

In response to the allegations, Delve stated that it does not produce compliance reports at all. Instead, it operates as an “automation platform” that aggregates compliance information and provides auditors with access to that data.

“Final reports and opinions are issued exclusively by independent, licensed auditors, not Delve,” the company asserted.

Delve further indicated that its clients “can select to partner with an auditor of their preference or opt to work with one from Delve’s network of independent, accredited third-party audit firms.” Those auditors, the startup noted, are “established firms widely recognized across the industry, including by other compliance platforms.”

In refuting the allegation of providing clients with “fake evidence,” Delve responded that it is merely offering “templates to assist teams in documenting their processes in line with compliance requirements, as do other compliance providers.”

“Draft templates differ from ‘pre-filled evidence,’” the company stated.

Delve added that it is “actively examining any leaks” and is “continuing to review the Substack.”

When asked about Delve’s rebuttal, DeepDelver expressed to TechCrunch that they were “confounded by the sloppiness, awkwardness, and boldness of it.”

“They are trying to slither out [of] accountability by denying they have ‘pre-filled evidence’ but labeling it as ‘templates’ instead, effectively placing the responsibility on clients for adopting the ‘templates’ as is,” DeepDelver stated. “They’re asserting that they are not responsible for ‘issuing’ the report, which is easy to claim if you interpret issuing a report as providing the final endorsement.”

They added that there are “several very serious allegations” that Delve completely failed to address: “The India claim, the absence of AI (they only reference ‘automations’), and the trust (lol) page featuring controls that were never implemented.”

Evidently, DeepDelver is not finished with the critique, promising, “Part II will follow shortly.”

Additionally, following the initial Substack post, a user named James Zhou on X stated they managed to access sensitive details from Delve, such as employee background checks and equity vesting schedules. Dvuln founder Jamieson O’Reilly shared further insights from what O’Reilly described as a discussion with Zhou about “multiple glaring security vulnerabilities in Delve’s external attack surface.”

TechCrunch reached out via email for additional comments to the media contact provided on Delve’s website. The email was undeliverable, but after this article was released, I received a calendar invitation for a “Delve demonstration” set for later this week.

This article was initially published on March 21, 2026. It has been updated with emailed responses from DeepDelver, additional information regarding alleged security vulnerabilities provided by Jamieson O’Reilly, and further details about Delve’s reaction to TechCrunch.

A private visit to Amazon’s Trainium lab, home of the chip that has impressed Anthropic, OpenAI, and even Apple

Soon after Amazon CEO Andy Jassy unveiled AWS’s historic $50 billion investment agreement with OpenAI, Amazon invited me on a private tour of the chip development lab at the center of the deal, mostly at its own expense.

Industry professionals are closely monitoring Amazon’s Trainium chip, developed at that site, for its potential impact on reducing AI inference costs and possibly challenging Nvidia’s near monopoly.  

Intrigued, I decided to accept the invitation.  

The day’s tour guides included the lab’s director, Kristopher King (shown below right), director of engineering Mark Carroll (below left), along with the team’s PR representative who coordinated the visit, Doron Aronson (featured with me later in the article). 

AWS chip lab leaders Mark Carroll and Kristopher King. Image Credits: TechCrunch/Julie Bort

AWS has been the primary cloud platform for Anthropic since the AI lab’s inception — a partnership strong enough to endure Anthropic’s later addition of Microsoft as a cloud partner, alongside Amazon’s expanding collaboration with OpenAI.

The OpenAI partnership designates AWS as the exclusive source for the model creator’s new AI agent builder, Frontier, which may play a significant role in OpenAI’s operations if agents become as impactful as Silicon Valley anticipates. It remains to be seen if this exclusivity holds true as announced. The Financial Times reported this week that Microsoft could view OpenAI’s agreement with Amazon as conflicting with its own deal with OpenAI, which includes access to all of OpenAI’s models and technology.

What makes AWS so attractive to OpenAI? As part of this agreement, the cloud giant has pledged to provide OpenAI with 2 gigawatts of Trainium computing power. This represents a substantial commitment, considering that Anthropic and Amazon’s Bedrock service are already utilizing Trainium chips faster than Amazon can manufacture them. 

There are currently 1.4 million Trainium chips across all three generations, with Anthropic’s Claude utilizing over 1 million of the Trainium2 chips deployed, according to the company.

It’s important to mention that while Trainium was initially designed for quicker, cost-effective model training (a higher priority a couple of years back), it is now also optimized and utilized for inference. Inference — the act of executing an AI model to generate outputs — is presently the largest performance bottleneck in the sector. 

For instance, Trainium2 manages the bulk of the inference load on Amazon’s Bedrock service, which facilitates the development of AI applications by Amazon’s numerous enterprise clients and permits the applications to leverage multiple models.
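
For a concrete sense of what an inference call against Bedrock looks like from the developer’s side, here is a minimal sketch using the AWS SDK for Python. The model ID and request body follow the Anthropic-on-Bedrock schema, but treat the specifics as illustrative rather than anything drawn from Amazon’s tour; the exact schema varies by model family.

```python
import json

import boto3

# Bedrock exposes hosted models behind a single runtime API; the actual
# inference runs server-side on whatever silicon AWS has deployed
# (Trainium, GPUs, etc.), invisible to the caller.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative model ID and Anthropic-style request body.
response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "In one sentence, what is inference?"}],
    }),
)

# The response body is a stream of JSON bytes; parse it and print the text.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```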

“Our customer base is expanding as quickly as we can deploy capacity,” King remarked. “Bedrock could grow to rival EC2 someday,” he said, referencing AWS’s massive compute cloud service. 

Amazon’s Trainium3 chip. Image Credits: Amazon

Trainium versus Nvidia

In addition to offering a viable alternative to Nvidia’s back-ordered, hard-to-get GPUs, Amazon claims its newly developed chips running on the latest Trn3 UltraServers can cost up to 50% less to operate for similar performance compared to conventional cloud servers.

Alongside the Trainium3, introduced in December, this AWS team has also developed new Neuron switches, and Carroll asserts that this combination is revolutionary.

“What that gives us is something substantial,” Carroll stated. The switches enable every Trainium3 chip to communicate with all other chips in a mesh configuration, lowering latency. “That’s why Trainium3 is setting numerous records,” especially in terms of “price per power,” he explained. 

When dealing with trillions of tokens daily, such advancements accumulate.  

Indeed, Amazon’s chip team received accolades from Apple in 2024. In a rare instance of transparency for the typically secretive company, Apple’s AI director openly highlighted how they utilized another of the team’s chips — Graviton, a low-power, ARM-based server CPU and the first notable chip designed by this group. Apple also commended Inferentia — a chip explicitly crafted for inference — and acknowledged Trainium, which was relatively new at that moment. 

These chips embody Amazon’s traditional strategy: Identify consumer needs, then create an in-house alternative to compete on price. 

Historically, a stumbling block for rival chips has been switching costs: applications designed for Nvidia’s chips must be re-architected to run on others — a lengthy process that dissuades developers from moving.

However, the AWS chip team proudly informed me that Trainium now supports PyTorch, a widely used open-source framework for crafting AI models. This encompasses many models available on Hugging Face, a vast repository where developers share open-source models.

The transition, Carroll mentioned, entails “essentially a one-line alteration, followed by recompiling, and then executing on Trainium.” In other words, Amazon aims to chip away at Nvidia’s market hegemony wherever feasible.
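
Carroll’s “one-line alteration” presumably refers to the device change in the PyTorch/XLA flow that AWS’s Neuron SDK builds on. Here is a minimal sketch under that assumption — everything else stays standard PyTorch:

```python
import torch
import torch_xla.core.xla_model as xm  # torch-xla, which AWS's torch-neuronx builds on

model = torch.nn.Linear(1024, 1024)

# The "one-line change": target the XLA device. On a Trainium instance this
# resolves to a NeuronCore, and the Neuron compiler recompiles the traced
# graph for the hardware behind the scenes.
device = xm.xla_device()
model = model.to(device)

x = torch.randn(8, 1024).to(device)
y = model(x)

xm.mark_step()  # flush the lazily built graph so it actually executes
print(y.shape)
```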

Additionally, AWS recently announced a partnership with Cerebras Systems, incorporating that company’s inference chip on servers equipped with Trainium, promising what Amazon asserts will be enhanced, low-latency AI performance. 

But Amazon’s aspirations extend beyond the chips themselves. It also engineers the servers that house the chips. Beyond the networking elements, this team has developed “Nitro,” a hardware-software combination that delivers virtualization technology (allowing multiple software instances to operate independently on the same server); cutting-edge liquid cooling technology; and the server sleds (depicted below) that accommodate this equipment. 

All of this is to manage cost and performance. 

AWS Austin chip lab tour, sled with components. Image Credits: TechCrunch/Julie Bort

Operating 24/7 on the “bring-up” 

Amazon’s custom chip-designing division originated when the cloud giant acquired Israeli chip designer Annapurna Labs in January 2015 for approximately $350 million. Thus, this group has now been engaged in chip design for AWS for more than a decade. The division has preserved its Annapurna heritage and name — its logo is prominently displayed throughout the office. 

Situated in a sleek, chrome-windowed building in the upscale “The Domain” area of Austin, the chip lab resides in a pedestrian-friendly zone filled with shopping and dining options, often referred to as Austin’s Silicon Valley. 

The office features the quintessential tech corporate ambience: cubicle desks, communal areas, and meeting rooms. However, at the back of a high floor in the building lies the actual lab, providing expansive views of the city.  

The lab itself, packed with shelving and roughly the size of two large conference rooms, is a noisy industrial environment, largely thanks to the whir of cooling fans. It resembles a cross between a high school workshop and a slick Hollywood lab set, albeit with engineers dressed in casual attire rather than white lab coats.

AWS Austin chip lab. Image Credits: TechCrunch/Julie Bort

It’s important to clarify that this is not where the chips are physically manufactured, so no white hazmat suits were required. The Trainium3 is a cutting-edge 3-nanometer chip, fabricated by TSMC, arguably the foremost entity in 3-nanometer production, with additional chips produced by Marvell. 

However, this is the location where the “bring-up” magic occurs.  

“A silicon bring-up is when you receive the chip for the first time, and it’s like a grand overnight celebration. You stick around, almost like a lock-in,” King describes. After 18 months of development, the chip is activated for the initial verification of its functionality as designed. The team even captured some footage of the Trainium3 bring-up and shared it on YouTube.

Spoiler alert: It’s never without issues.  

For Trainium3, the prototype chip was initially designed for air cooling, similar to previous iterations. The current model, however, employs liquid cooling, which provides energy advantages and represents a substantial engineering achievement.

During the bring-up phase, the specifications for how the chip attached to the air-cooling heat sink were misaligned, preventing the chip from being activated. 

Undeterred, the team “immediately retrieved a grinder and began grinding off the metal,” King recounted. To maintain the festive bring-up pizza party atmosphere, they discreetly took the grinding to a conference room.  

Working through the night to resolve technical challenges “is what silicon bring-up is all about,” King stated. 

The lab even features a welding station, where hardware lab engineer and master welder Isaac Guevara showcased welding small integrated circuit components through a microscope. This is such incredibly challenging work that senior leader Carroll candidly admitted he couldn’t do it, to the laughter of Guevara and the other engineers present. 

AWS Austin chip lab tour, welding station. Image Credits: TechCrunch/Julie Bort

The lab is equipped with both custom-designed and commercially available tools for testing and diagnosing chip issues. Here’s signal engineer Arvind Srinivasan showing how the lab tests each minuscule component on the chip:

AWS Austin chip lab tour, testing equipment. Image Credits: TechCrunch/Julie Bort

Sleds are the main highlight of the lab 

However, the standout feature of the lab is an entire row displaying each version of the “sleds” the team has engineered. 

AWS Austin chip lab tour, wall of sleds. Image Credits: TechCrunch/Julie Bort

Sleds are the trays that accommodate the Trainium AI chips, Graviton CPU chips, along with supporting boards and components. When stacked on a rack with the networking component, which is also custom-designed by this team, you get the systems that form the core of Anthropic Claude’s success. 

Here’s the sled that was highlighted during the AWS re:Invent conference in December: 

AWS Austin chip lab tour, Trainium3 sled. Image Credits: TechCrunch/Julie Bort

Validated by Anthropic and OpenAI

I expected my guides to boast about the OpenAI agreement throughout the tour. They didn’t.

Their reluctance may have stemmed from the aforementioned legal uncertainties potentially surrounding the deal. Yet, the impression I gathered was that these hands-on engineers (currently working on the next version, Trainium4) have had limited opportunities to collaborate with OpenAI thus far. Their day-to-day focus has primarily been on Anthropic’s and Amazon’s requirements.

At present, a significant proportion of Trainium2 chips is deployed in Project Rainier — among the largest AI compute clusters worldwide — which became operational in late 2025 with 500,000 chips. It is utilized by Anthropic. 

Nevertheless, there was a wall-mounted monitor in the main office displaying a quote about how OpenAI plans to utilize Trainium. The pride was palpable, albeit subtle.  

Alongside this lab, the team also operates its own private data center for quality assurance and testing purposes. Located a short drive away, it does not host customer workloads and is situated at a co-location facility, not an AWS data center.

Security protocols are stringent: There are specific measures required to enter the building and access Amazon’s designated area within it.

The cooling system in the data center is so loud that earplugs are required, and the air is thick with the acrid scent of heated metal. It’s not a conducive environment for an average person to linger in. 

Here’s me and Aronson at the AWS Austin chip lab data center, protecting our ears next to live servers. Image Credits: TechCrunch/Julie Bort

In this data center, rows upon rows of servers house sleds that integrate all of Amazon’s latest custom chips: Graviton CPU, liquid-cooled Trainium3, and Amazon Nitro, all efficiently computing. The liquid runs in a closed-loop system, meaning it is recycled, which should also mitigate environmental impacts, according to the engineers. 

Here’s what a current Trn3 UltraServer looks like: Multiple sleds are stacked above and below, with the Neuron switches positioned centrally. Hardware development engineer David Martinez-Darrow is depicted here performing maintenance on a sled:

AWS Austin chip lab tour, data center. Image Credits: TechCrunch/Julie Bort

The team has always drawn attention, but the scrutiny has intensified lately.

Amazon CEO Andy Jassy closely monitors this lab and boasts about its output like a proud parent. In December, he said Trainium had already become a multibillion-dollar business for AWS, calling it one of the pieces of AWS technology he is most excited about. He even highlighted the chip during the announcement of the OpenAI partnership.

The team feels the pressure as well. Engineers work around the clock for three to four weeks around each bring-up event to resolve any problems, ensuring the chips can be mass-produced and integrated into data centers.

“It’s crucial that we expedite the process to confirm its operational success,” Carroll noted. “Thus far, we’ve been performing exceptionally well.” 

*Disclosure: Amazon covered the airfare and the cost of one night’s stay at a local hotel. True to its Leadership Principle of Frugality, this was a back-of-the-plane middle seat and a modest room. TechCrunch financed the remaining associated travel expenditures such as Ubers and baggage fees. (Yes, I checked a bag for an overnight trip. I can be a bit high maintenance.) 

Are AI tokens the new signing bonus, or just a cost of doing business?

This week, a topic that has been circulating in Silicon Valley surged into the limelight: AI tokens as remuneration.

The concept is simple enough — instead of offering engineers only salary, equity, and bonuses, companies would also give them a budget of AI tokens, the computational units that power tools such as Claude, ChatGPT, and Gemini. The tokens can be spent to run agents, automate tasks, and churn through code. The argument is that more compute makes engineers more productive, that more productive engineers are more valuable, and that a token budget is therefore an investment in the person who holds it.

Jensen Huang, the leather-jacket-clad CEO of Nvidia, seemed to ignite everyone’s interest when he proposed during the company’s annual GTC event earlier this week that engineers should receive about half of their base salary again — in tokens. According to his calculations, his leading personnel may expend around $250,000 annually in AI compute. He termed it a recruitment strategy and forecasted that it would become customary throughout Silicon Valley.

It’s unclear where the concept originated. Tomasz Tunguz, a notable Bay Area VC who runs Theory Ventures with a focus on AI, data, and SaaS startups — and whose data-driven analysis has earned him a dedicated audience over the years — was discussing it back in mid-February, saying tech startups were already treating inference costs as a “fourth component to engineering remuneration.” Using figures from the compensation-tracking platform Levels.fyi, he pegged a top-quartile software engineer’s salary at $375,000. Add $100,000 in tokens and you arrive at $475,000 in total compensation — meaning roughly one dollar in five now goes to compute.
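
The arithmetic is easy to check. Here is a quick sketch using Tunguz’s figures, with Huang’s half-of-base rule of thumb applied to the same base for comparison; both numbers come from the reporting above, and nothing else here is sourced.

```python
# Tunguz's figures: top-quartile base pay plus an annual inference budget.
base_salary = 375_000
token_budget = 100_000
total_comp = base_salary + token_budget

# 100,000 / 475,000 ~= 21%, i.e. roughly one dollar in five goes to compute.
print(f"Compute share of total comp: {token_budget / total_comp:.0%}")

# Huang's rule of thumb: tokens worth about half of base salary again.
print(f"Huang-style token budget on the same base: ${0.5 * base_salary:,.0f}")
```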

This is no accident. Agentic AI has gained significant traction, and the launch of OpenClaw in late January hastened the dialogue considerably. OpenClaw is an open-source AI assistant intended to function continuously — processing tasks, generating sub-agents, and managing a to-do list while its user is asleep. It’s part of a wider shift towards “agentic” AI, referring to systems that take sequences of actions independently over time rather than merely responding to prompts.

As a result, token usage has skyrocketed. Where an individual writing an essay might utilize 10,000 tokens in an afternoon, an engineer managing a fleet of agents can expend millions in a single day — automatically, in the background, without typing anything.

By this weekend, the New York Times had compiled an insightful examination of the so-called tokenmaxxing phenomenon, finding that engineers at firms like Meta and OpenAI are competing on internal leaderboards that track token usage. Generous token allocations are quietly emerging as a standard job perk, akin to dental insurance or the free lunches of an earlier era. One Ericsson engineer in Stockholm told the Times that he likely spends more on Claude than he earns in salary, although his employer covers the costs.

Perhaps tokens will indeed establish themselves as the fourth element of engineering remuneration. However, engineers may wish to proceed with caution before accepting this as an unequivocal victory. Increased tokens could imply greater power initially, but with the rapid pace of change, it doesn’t automatically ensure enhanced job stability. One factor is that a substantial token allocation brings with it large expectations. If a company is effectively subsidizing a second engineer’s compute resources on your behalf, the underlying pressure is to achieve results at double the rate (or more).

Moreover, a more complex issue arises: when a company’s token expenditure per employee approaches or surpasses that employee’s salary, the financial rationale for headcount begins to shift in perspective for its finance team. If the compute performs the tasks, the question of how many humans need to be coordinating it becomes increasingly unavoidable.

Jamaal Glenn, a Stanford MBA and former VC turned CFO in financial services based on the East Coast, similarly highlights that what may appear as a benefit can be a strategic approach for companies to exaggerate the perceived value of a compensation package without raising cash or equity — the elements that genuinely accumulate for an employee over time. Your token budget doesn’t vest. It doesn’t increase in value. It doesn’t factor into your forthcoming offer negotiations the way a base salary or equity award does. If companies succeed in normalizing tokens as part of pay, they might find it simpler to maintain flat cash compensation while pointing to an expanding compute budget as proof of investment in their staff.

That’s advantageous for the company. Whether it benefits the engineer hinges on queries most engineers still lack sufficient information to tackle.

It’s been 20 years since the first tweet

On March 21, 2006, Jack Dorsey shared a straightforward message: “just setting up my twittr”.

That was, of course, the very first message on the platform that continues to be most recognized as Twitter, even after its rebranding to X by its current owner Elon Musk (the acquisition is still being contested in court). X subsequently became part of Musk’s xAI, which in turn has integrated into SpaceX.

Musk significantly reduced the company’s staff and ignited new controversies by integrating xAI’s chatbot Grok, which called itself “MechaHitler” and was exploited to generate extensive sexual deepfakes, including of actual women and children.

Although X holds a robust grip on particular user demographics, including large segments of the tech sector, it also encounters rivalry from platforms like Bluesky and Meta’s Threads. One report indicates that Threads has recently surpassed X in daily mobile users. (All these primarily text-focused platforms are overshadowed by apps like Instagram and TikTok.)

Regarding Dorsey’s initial tweet, the Twitter co-founder subsequently auctioned it as an NFT for $2.9 million. However, its value has purportedly fallen, with the purchaser unable to resell it.

Publisher pulls horror novel ‘Shy Girl’ over AI concerns

Hachette Book Group announced that it will not be releasing a novel titled “Shy Girl” due to worries that artificial intelligence may have been used to create the content.

The book was set to be launched in the United States this spring. Hachette also stated it would withdraw the book in the United Kingdom, where it is currently on sale. 

While the publisher asserted that the choice followed a comprehensive evaluation of the text, reviewers on GoodReads and YouTube had been conjecturing that the novel was possibly AI-generated. Furthermore, The New York Times reported that it had inquired with Hachette regarding the “Shy Girl” issues the day prior to the announcement.

In a correspondence with the NYT, author Mia Ballard refuted the claim of using AI for her novel, attributing the issue to an acquaintance she had hired to edit the initial, self-published iteration of “Shy Girl.” Ballard mentioned she is seeking legal recourse, asserting that as a consequence of the debacle “my mental health is at an all-time low and my reputation is tarnished for something I didn’t even personally do.”

Writer Lincoln Michel and other analysts in the field have pointed out that U.S. publishers seldom engage in significant editing when they obtain works that have previously been released in other formats.

Why Wall Street was unimpressed by Nvidia’s big conference

When Nvidia’s CEO Jensen Huang presented his annual GTC keynote on Monday, the stock of the $4-trillion corporation began to decline.

Wall Street investors, it seems, were not swayed by the leather-jacketed founder’s optimistic 2.5-hour presentation. Instead, they appeared more concerned about the ambiguous future of AI and the potential for a market bubble. Wall Street’s anxiety stands in stark contrast to the vibrant atmosphere in Silicon Valley, where confidence thrives.

Huang spent over two hours discussing the company’s recent advancements, including innovations in video game graphics technology, upgraded networking systems, agreements in the autonomous vehicle sector, and a newly designed chip in collaboration with Groq to enhance AI inference in the Vera Rubin system. He also shared staggering figures regarding Nvidia’s operations and the broader industry. Huang labeled the AI agent ecosystem as a $35 trillion market and designated the physical AI and robotics sector as a $50 trillion market.

He further indicated his expectation of seeing $1 trillion in purchase orders for the company’s Blackwell and Vera Rubin chips — just two among Nvidia’s extensive product lineup — by the end of 2027.

Shouldn’t investors be excited by all this? It’s not surprising that they aren’t, according to Futurum CEO Daniel Neuman, who spoke with TechCrunch.

A significant new uncertainty

“[AI] is remarkably effective, transformational, and advancing rapidly, to the point where we don’t truly comprehend its implications for our societal constructs,” Neuman stated. “The markets despise uncertainty. The pace of innovation has introduced a considerable new uncertainty that I believe most individuals did not anticipate.”

Some of this uncertainty arises from misleading information circulating in the market, Neuman noted, adding that reports about low enterprise AI adoption do not reflect the complete reality — at least, not according to his discussions.

“Enterprise AI adoption is poised to hit a tipping point and scale up rapidly,” Neuman said. “I genuinely believe it’s already happening. When you argue it isn’t, I think what you’re likely indicating is that the [return on investment] and the outcomes remain somewhat unclear, and companies are referring to surveys and reports that are primarily based on data that is six months old. It simply takes time to compile the data.”

This viewpoint is supported when examining Nvidia’s past quarter figures. While businesses might not be advertising their AI ROI, they are increasingly investing in Nvidia’s technology. The company not only meets but exceeds its ambitious targets and quarterly projections. Nvidia’s revenue soared by 73% year-over-year last quarter.

There’s no indication this will change anytime soon: Nvidia confirmed this week that Amazon intends to order 1 million GPUs, along with additional AI infrastructure, by the end of 2027 for Amazon Web Services (AWS), as reported by Reuters.

Kevin Cook, a senior equity strategist at Zacks Investment Research, concurred with Neuman and humorously mentioned to TechCrunch that investor discontent does not alter the fact that the overall stock market relies heavily on Nvidia since its technology supports many of these enterprises.

“The economy is somewhat revolving around Nvidia,” Cook remarked. “It’s establishing this essential infrastructure. Various companies in hardware, software, and physical AI — even Caterpillar is now part of physical AI — are developing upon these platforms.”

This doesn’t imply that an AI bubble does not currently exist or couldn’t arise in the future. However, although GTC may not have positively impacted Nvidia’s stock, the overall uncertainty doesn’t appear to be a challenge for Nvidia. The company is clearly advancing relentlessly, propelling much of the global economy along with it.

“Nvidia, as you know, is a platform company,” Huang stated during his GTC keynote. “We possess technology. We have our platforms. We have a vibrant ecosystem, and today there are probably 100% of the $100 trillion dollars of industry represented here.”