OpenAI terminates employee for utilizing confidential information in prediction markets.

OpenAI has fired an employee over their activity on prediction markets such as Polymarket, the company confirmed to Wired. According to OpenAI, the employee used confidential company information to inform their trades.

OpenAI did not identify the employee, but a spokesperson said the behavior violated company policy, which prohibits employees from using insider information for personal gain, including on prediction markets.

Platforms like Polymarket and Kalshi let users bet on the outcomes of real-world events. On Polymarket, for example, users are currently wagering on which products OpenAI will unveil in 2026 and on the timing of the company’s public offering. Markets can cover nearly any event, and the payouts can be substantial: as previously reported, an accountant won a $470,300 prize on Kalshi by betting against DOGE enthusiasts.

Prediction markets insist they are not gambling sites, preferring to describe themselves as financial platforms. Kalshi, a regulated exchange, fined and banned a MrBeast editor earlier this week over similar alleged insider trading. OpenAI did not immediately respond to a request for further comment.

Pentagon takes steps to classify Anthropic as a supply-chain risk

In a post on Truth Social, President Trump ordered federal agencies to stop using all Anthropic products following the company’s public clash with the Department of Defense. The president allowed a six-month phase-out period for departments currently using the products, but stressed that Anthropic would no longer be treated as a federal contractor.

“We don’t require it, we don’t desire it, and will not engage in business with them again,” the president stated in the message.

Notably, the president’s post made no mention of designating Anthropic a supply chain risk, which had previously been floated as a possible outcome. A subsequent post from Secretary of Defense Pete Hegseth, however, did exactly that.

“As part of the President’s order for the Federal Government to halt all utilization of Anthropic’s technology, I am instructing the Department of War to classify Anthropic as a Supply-Chain Risk to National Security,” Secretary Hegseth stated. “Effective immediately, no contractor, supplier, or partner involved with the United States military may engage in any commercial interactions with Anthropic.”

The standoff with the Pentagon centered on Anthropic’s refusal to allow its AI models to be used for either mass domestic surveillance or fully autonomous weapons, restrictions that Secretary Hegseth deemed excessively limiting.

CEO Dario Amodei reaffirmed his position in a public statement on Thursday, refusing to concede on either of the two key issues.

“Our strong preference remains to assist the Department and our servicemen and women — with our two requested safeguards intact,” Amodei expressed at that time. “If the Department opts to discontinue using Anthropic, we will strive to facilitate a seamless transition to another provider, ensuring there is no disruption to ongoing military planning, operations, or other essential missions.”

OpenAI has reportedly backed Anthropic’s decision. According to the BBC, CEO Sam Altman sent a memo to employees on Thursday saying he shared the same “red lines” and that any OpenAI defense contracts would likewise reject uses that were “illegal or inappropriate for cloud deployments, such as domestic surveillance and autonomous offensive weapons.”

OpenAI co-founder Ilya Sutskever, who publicly clashed with Altman in November 2023 and has since launched his own AI firm, also weighed in on Friday, posting on X: “It’s extremely positive that Anthropic has not yielded, and it’s noteworthy that OpenAI has adopted a similar position.”

However, within hours of the Trump administration ordering federal agencies to sever ties with Anthropic, OpenAI moved to fill the gap, announcing a partnership with the Pentagon that Altman said upheld the same core principles Anthropic had defended: bans on domestic surveillance and autonomous weapons.

According to the New York Times, OpenAI and the government began discussing a potential partnership on Wednesday of this week.

More developments are sure to follow.

Anthropic, OpenAI, and Google were all awarded contracts by the U.S. Defense Department last July. While some Google employees have voiced support for Anthropic, Google and its parent company have yet to respond.

Update: This story has been updated with additional reporting.

Musk criticizes OpenAI during deposition, stating ‘no one took their life due to Grok’

In a newly released deposition from Elon Musk’s lawsuit against OpenAI, the tech mogul criticized OpenAI’s safety practices, asserting that his own venture, xAI, places a higher priority on safety. “Nobody has taken their own life due to Grok, but evidently, some have because of ChatGPT,” he remarked.

The statement came during questioning about a public letter Musk signed in March 2023, in which he urged AI companies to pause development of AI systems more capable than GPT-4, OpenAI’s flagship model at the time, for at least six months. The letter, signed by more than 1,100 people, including many AI experts, warned of a lack of planning and oversight at AI companies locked in an “out-of-control race to create and implement increasingly powerful digital minds that no one — not even their developers — can comprehend, forecast, or reliably manage.”

Those concerns have since gained weight. OpenAI now faces multiple lawsuits alleging that ChatGPT’s manipulative engagement tactics harmed several users’ mental health, in some cases ending in suicide. Musk’s remark suggests these incidents could be used as evidence in his suit against OpenAI.

The transcript of Musk’s video testimony, conducted in September, was made public this week, ahead of the anticipated jury trial next month.

The lawsuit centers on OpenAI’s transition from a nonprofit AI research lab to a for-profit entity, which Musk claims breached its founding agreements. Musk argues that OpenAI’s commercial ties put AI safety at risk by prioritizing speed, scale, and profit over safety.

Since that recording, however, xAI has run into safety problems of its own. Last month, Musk’s social platform X was flooded with nonconsensual explicit images generated by xAI’s Grok, some reportedly involving minors, prompting an investigation by the California Attorney General’s office. The EU has opened its own inquiry, and other countries have responded with measures including restrictions and bans.

In the newly filed deposition, Musk said he signed the AI safety letter because “it seemed like a good idea,” not because he had recently founded an AI company intent on rivaling OpenAI.

“I signed it, as many individuals did, to advocate for caution in AI development,” Musk expressed. “I simply wanted … AI safety to be prioritized.”

Musk fielded other questions in the deposition as well, including on artificial general intelligence, or AGI (the idea of AI that can match or exceed human reasoning across a wide range of tasks), saying “it poses a risk.” He also acknowledged that he “was mistaken” about his alleged $100 million contribution to OpenAI; the amended complaint in the case puts the actual amount closer to $44.8 million.

He also recounted the rationale behind OpenAI’s founding, which in his telling grew out of his “growing concern about the threat of Google monopolizing AI.” His conversations with Google co-founder Larry Page were “worrisome, as he did not appear to take AI safety seriously,” Musk said, asserting that OpenAI was created to counter that danger.

Anthropic versus the Pentagon: What’s truly at risk?

The past two weeks have been dominated by a standoff between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth over the military’s use of AI.

Anthropic holds firm that its AI models should not be used for mass surveillance of American citizens or for fully autonomous weapons that carry out strikes without human oversight. Secretary Hegseth, meanwhile, has argued that the Department of Defense should not be bound by a vendor’s rules and that any “lawful use” of the technology should be acceptable.

On Thursday, Amodei made clear that Anthropic will not yield, even under threats that the company could be designated a supply chain risk as a result. With the news moving so fast, it’s worth stepping back to consider what is actually at stake in this fight.

At its core, this dispute is about who controls powerful AI systems: the companies that build them or the government that wants to use them.

What concerns Anthropic?

As noted above, Anthropic wants to keep its AI models from being used for mass surveillance of U.S. citizens or for autonomous weapon systems that leave humans out of targeting and firing decisions. Traditional defense contractors typically have little say over how their products are used, but Anthropic has maintained since its founding that AI technology poses unique dangers requiring unique safeguards. From the company’s perspective, the challenge is preserving those safeguards even as the military adopts the technology.

The U.S. military already relies on highly automated systems, some of them lethal. Historically, the decision to use lethal force has rested with human operators, yet few legal restrictions govern the military’s use of autonomous weapons. The DoD does not outright ban fully autonomous weapon systems: under a 2023 DoD directive, AI systems may identify and engage targets without human involvement, provided they meet certain criteria and are approved by senior defense officials.

That scenario is exactly what worries Anthropic. Military technology is inherently secretive, so if the U.S. military moved to automate lethal decision-making, the public might not learn of it until the systems were already in operation. And if Anthropic’s models were used, that could count as “lawful use.”

Anthropic’s argument is not that such applications should be ruled out forever, but that its models are not yet capable enough to support them safely. Imagine an autonomous system misidentifying a target, escalating a conflict without human sign-off, or making a split-second lethal decision that cannot be undone. Put a less-capable AI in control of weapons and you get a fast, confident machine that is poor at high-stakes judgment.

AI could also expand lawful surveillance of American citizens to a troubling degree. U.S. law already permits surveillance of citizens, including the collection of messages, emails, and other communications. AI changes the picture by enabling automated large-scale pattern analysis, entity resolution across databases, predictive risk scoring, and continuous behavioral monitoring.

What are the Pentagon’s objectives?

The Pentagon contends it should be able to use Anthropic’s technology for any lawful purpose it deems necessary, rather than being bound by Anthropic’s internal policies on matters such as autonomous weapons or surveillance.

More specifically, Secretary Hegseth has said the Department of Defense should not be confined by a vendor’s stipulations and that it intends to make “lawful use” of the technology.

Sean Parnell, the Pentagon’s chief spokesperson, stated in a Thursday X post that the department has no intention of conducting mass domestic surveillance or deploying autonomous weapons. 

“Here’s what we’re asking: Permit the Pentagon to utilize Anthropic’s model for all lawful purposes,” Parnell wrote. “This is a straightforward, practical request that will prevent Anthropic from jeopardizing vital military operations and potentially endangering our warfighters. We will not allow ANY corporation to dictate the terms of our operational decisions.”

He added that Anthropic has until 5:01 p.m. ET on Friday to decide. “If not, we will rescind our partnership with Anthropic and classify them as a supply chain risk for DOW,” he said.

While the DoD’s stated position is that it should not be limited by a company’s usage policies, Secretary Hegseth’s grievances with Anthropic have at times seemed rooted in cultural resentment. In a January speech at SpaceX and xAI offices, Hegseth railed against “woke AI,” which some saw as a prelude to his clash with Anthropic.

“Department of War AI will not embody woke ideals,” Hegseth stated. “We are developing combat-ready weapons and systems, not chatbots suited for an Ivy League faculty lounge.”

What comes next?

The Pentagon has threatened either to label Anthropic a “supply chain risk,” essentially blacklisting the company from government contracts, or to invoke the Defense Production Act (DPA) to compel it to adapt its models to military demands. Hegseth has given Anthropic until 5:01 p.m. on Friday to respond. With the deadline looming, it remains unclear whether the Pentagon will follow through on its threats.

This is a fight neither side can easily walk away from. Sachin Seth, a venture capitalist at Trousdale Ventures who specializes in defense technology, suggests that a supply chain risk designation could mean “lights out” for Anthropic.

But he noted that cutting Anthropic out of the DoD could itself create a national security problem.

“[The Department] would have to wait six to 12 months for either OpenAI or xAI to close the gap,” Seth told TechCrunch. “That leaves a period of up to a year during which they might be operating from an inferior model, namely the second or third best.”

xAI is gearing up to become classified-ready and replace Anthropic, and given owner Elon Musk’s statements on the issue, it’s fair to assume the company would happily give the DoD full control over its technology. Recent reporting suggests OpenAI may hold to the same limits as Anthropic.

ChatGPT reaches 900M weekly active users

OpenAI announced on Friday that ChatGPT has reached 900 million weekly active users, bringing the AI chatbot closer to the 1 billion milestone. The company also said it now has 50 million paying subscribers.

“The growth in subscribers has significantly accelerated as the year commenced, with January and February poised to become our largest months ever for new subscribers,” the company wrote in a blog post. “Users turn to ChatGPT for learning, writing, planning, and creation. As usage expands, the product enhances in ways that users notice instantly: quicker responses, improved reliability, enhanced safety, and more consistent performance.”

The new figure represents an increase of 100 million over the 800 million weekly active users OpenAI reported in October 2025.

OpenAI shared the updated numbers as part of its announcement of a $110 billion private funding raise, one of the largest private funding rounds ever. The new funding includes a $50 billion investment from Amazon and $30 billion each from Nvidia and SoftBank, at a $730 billion pre-money valuation. The round remains open, with the company expecting additional investors to join.

AI music creator Suno reaches 2M paying subscribers and $300M in annual recurring revenue

Mikey Shulman, co-founder and CEO of Suno, announced on LinkedIn that the AI music generator has reached 2 million paying subscribers and $300 million in annual recurring revenue.

Just a few months ago, Suno announced a $250 million funding round that valued the company at $2.45 billion. At the time, Suno told The Wall Street Journal that annual revenue had reached $200 million, which points to substantial growth in a short period.

Suno lets users compose music from natural language prompts, making audio generation easy even for people with little musical background. That has alarmed musicians and record labels, who have sued Suno for copyright infringement on the grounds that its AI was trained on existing recorded music. Warner Music Group, however, recently settled its lawsuit and struck a deal that allows Suno to build models incorporating licensed music from its catalog.

Suno has produced synthetic music realistic enough to climb the Spotify and Billboard charts. Telisha Jones, a 31-year-old from Mississippi, used Suno to turn her poetry into the hit R&B song “How Was I Supposed to Know” and went on to sign a record deal with Hallwood Media reportedly worth $3 million.

Still, many musicians have spoken out against AI’s role in music, including Billie Eilish, Chappell Roan, and Katy Perry.

Apple and Netflix collaborate to broadcast Formula 1 Canadian Grand Prix 

Apple and Netflix will co-broadcast the Formula 1 Canadian Grand Prix, Eddy Cue, Apple’s senior vice president of services, announced on Thursday. For the first time, F1 fans in the U.S. will be able to watch the race live on both Apple TV and Netflix simultaneously.

Netflix subscribers will be able to stream the entire race weekend live, including practice, qualifying, and the Grand Prix itself on May 24.

In addition to live race coverage, the partnership includes cross-promotion of Netflix’s popular series “Drive to Survive.” For the first time, the eighth season, comprising eight episodes covering the 2025 Formula One World Championship, will be available both to Apple TV subscribers in the U.S. and to Netflix users worldwide, greatly expanding its audience.

Season 8 premieres today, February 27.

F1’s growth in American culture now extends beyond television: Brad Pitt’s “F1” is nominated for Best Picture at this year’s Academy Awards. “Drive to Survive” has drawn in a broad audience with its behind-the-scenes perspective, evolving from a conventional sports docuseries into a compelling narrative that has won the sport millions of new fans.

The series has been a key element of Apple’s broader ambitions in F1: The company intends to promote the sport across Apple News, Apple Maps (featuring F1 tracks worldwide), Apple Music, and Apple Fitness+, as well as through its physical retail locations.

The partnership also marks Netflix’s continued expansion into live sports broadcasting, moving from a “no-sports” policy to acquiring major rights including NFL Christmas games, WWE Raw, and MLB.

The co-broadcast is also part of Apple’s new multi-year agreement with Formula 1, under which Apple TV has taken over from ESPN as the exclusive U.S. broadcaster for all 24 races starting this season. The deal is reportedly worth about $150 million per season, a considerable jump from the estimated $85 million ESPN previously paid. All races are available to Apple TV subscribers at no additional cost. The earlier partnership with ESPN averaged 1.3 million viewers in its final year.

Notably, Netflix was reported to be pursuing U.S. media rights for Formula 1 as far back as 2022.

Perplexity’s new Computer is another bet that users need many AI models

Starting this week, Perplexity subscribers are getting access to a new autonomous tool.

According to Perplexity, Computer “integrates all existing AI capabilities into a singular system.” More concretely, the company describes it as a user agent that can carry out complex workflows autonomously using 19 different AI models, even spawning subagents for specific tasks.

The tool is available now, but only on the company’s top subscription tier, the $200/month Perplexity Max. It runs entirely in the cloud, which may ease some of the security concerns raised by other autonomous tools such as OpenClaw.

TechCrunch hasn’t gone hands-on with the new tool, but example workflows on Perplexity’s website show it handling tasks that involve gathering statistical, financial, or legal information; performing analysis; and presenting the results as finished websites or visualizations.

Last week, Perplexity invited media to a background briefing with its executives to discuss the product and outline plans for the year. The event was supposed to include a demonstration of the tool, but the company canceled the demo just hours beforehand after finding issues in the product.

The tool is the next step for Perplexity, which first drew attention early in the AI boom by wrapping advanced models in user-friendly interfaces, most notably its search-engine-like answer service, and later launched its Comet web browser last summer. One executive remarked that competitors like Google have since reshaped their offerings to resemble Perplexity’s, calling it both a compliment and a potential threat.

The company is adapting to a changing market: one of the first AI firms to offer advertising, it exited that business late last year, saying last week that ads damaged users’ trust in the accuracy of its responses. Perplexity’s total user base, numbering in the tens of millions, remains far smaller than that of OpenAI, which counts 800 million weekly users and began testing advertisements in ChatGPT this year.

For now, Perplexity’s executives say they are targeting a more niche set of users, with products aimed at people making “GDP-impacting decisions.” At the briefing, executives speaking on condition of anonymity described a focus on enterprise subscriptions, particularly for in-depth research.

“We don’t often discuss MAUs because we’re not fundamentally pursuing a strategy to acquire as many users as possible,” one of the executives said.

Perplexity recently introduced a new benchmark for complex research tasks, called Draco, on which, unsurprisingly, its own deep research offering outperforms rivals like Gemini.

Perplexity says it no longer depends on other companies’ APIs for its web index, having built its own AI-optimized search API. But the company remains committed to packaging cutting-edge models in a consumer-friendly user experience, arguing there is real value in orchestrating multiple third-party LLMs to find the most cost-effective and accurate answers to queries.

“Multi-model is the future,” one Perplexity executive asserted, arguing that models are becoming more specialized rather than commoditized. The company has seen its users switch frequently between models to get the results they want: in December 2025, queries for visual outputs went predominantly to Gemini Flash, software engineering tasks to Claude Sonnet 4.5, and medical research to GPT-5.1.

Chart: model utilization by Perplexity users over time. Image Credits: Perplexity

If one LLM excels at coding tasks while another is better at writing marketing copy, Perplexity’s software can automatically pick the right one. Executives pointed to another example: using Perplexity’s own modified versions of open-source LLMs developed in China to answer queries at lower cost, a tactic the company was criticized for last year when it wasn’t transparent with customers about it. Done openly, though, the approach could be an efficient way to serve LLM queries.
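
To make the routing idea concrete, here is a minimal sketch of what task-based model selection could look like. The task-to-model pairings mirror the usage patterns Perplexity described above (visual output to Gemini Flash, software engineering to Claude Sonnet 4.5, medical research to GPT-5.1), but the keyword classifier and everything else here is a hypothetical illustration, not Perplexity’s actual system:

```python
# Hypothetical sketch of task-based LLM routing; Perplexity's real router
# is not public. The keyword classifier is a toy stand-in for whatever
# learned classifier a production system would use.

ROUTES = {
    "visual": "gemini-flash",        # image and chart generation
    "coding": "claude-sonnet-4.5",   # software engineering tasks
    "medical": "gpt-5.1",            # medical research queries
}
DEFAULT_MODEL = "general-purpose-model"  # fallback for everything else

def classify_task(query: str) -> str:
    """Map a query to a coarse task category via keyword matching."""
    q = query.lower()
    if any(k in q for k in ("image", "chart", "diagram", "logo")):
        return "visual"
    if any(k in q for k in ("bug", "function", "refactor", "compile")):
        return "coding"
    if any(k in q for k in ("clinical", "dosage", "symptom")):
        return "medical"
    return "other"

def route(query: str) -> str:
    """Return the model best suited to the query's task category."""
    return ROUTES.get(classify_task(query), DEFAULT_MODEL)

print(route("Refactor this function to fix the bug"))  # claude-sonnet-4.5
print(route("Make a chart of Q4 revenue"))             # gemini-flash
```

A production router would presumably also weigh price per token, in line with the cost-effectiveness goal the executives described.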

The company also offers a feature called Model Council, which lets users query several models simultaneously. However, the economics of serving multiple model queries per prompt at fixed subscription prices remain uncertain.
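
As a rough sketch of how a council-style feature might work, the snippet below fans the same prompt out to several models concurrently and collects their answers. The `ask_model` stub is a hypothetical placeholder for real provider API calls, not Perplexity’s implementation:

```python
# Minimal fan-out sketch for a multi-model "council." ask_model is a
# hypothetical stub; a real system would make async HTTP calls to each
# provider's API instead of sleeping.
import asyncio

async def ask_model(model: str, prompt: str) -> tuple[str, str]:
    await asyncio.sleep(0.1)  # simulate network latency
    return model, f"[{model}] answer to: {prompt!r}"

async def council(prompt: str, models: list[str]) -> dict[str, str]:
    # Query every model concurrently: latency tracks the slowest model,
    # but inference cost scales with the number of models queried.
    results = await asyncio.gather(*(ask_model(m, prompt) for m in models))
    return dict(results)

answers = asyncio.run(council("Summarize this earnings report",
                              ["model-a", "model-b", "model-c"]))
for model, answer in answers.items():
    print(f"{model}: {answer}")
```

The cost comment in `council` is the economic rub: each additional model multiplies inference spend on a prompt sold at a flat subscription price.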

Still, with no costly infrastructure commitments and, as the executives claimed, high margins on user fees, Perplexity is confident it can stay competitive by routing tokens to the model best suited to each task.

There is more on the way: the Perplexity Comet browser launches on iOS next month, and the company is hosting a developer conference, Ask, on March 11 in San Francisco to encourage third-party use of its API.

One executive said that where he used to start each morning by checking the previous day’s query counts, he now checks the latest revenue figures. Some customers have noticed the shift toward profitability: the Perplexity subreddit regularly features complaints about new rate limits on both free and paid tiers.

Nevertheless, the executives at the briefing dismissed these concerns: “Any claims regarding the free tier becoming worse or rate-limited are entirely unfounded,” one remarked.

Pokémon Breezes and Pokémon Tides are set to arrive on the Nintendo Switch 2 in 2027

A new pair of mainline Pokémon titles is slated for release on the Nintendo Switch 2 in 2027, marking the franchise’s 10th generation.

During a livestream celebrating the launch of the original Pokémon games exactly 30 years ago, the company revealed Pokémon Breezes and Pokémon Tides, expansive open-world adventures set across a sprawling, island-dotted ocean.

The three new starter Pokémon were also introduced: Browt, a Grass-type “bean chick” reminiscent of an Angry Bird; Pombon, a Fire type inspired by Pomeranians that had better not evolve into a humanoid creature; and Gecqua, a Water-type gecko with big pink eyes.

Insiders speculate that the upcoming titles are set in a region inspired by Indonesia and Southeast Asia. The trailer doesn’t confirm this outright, but it strongly hints at it, showing lush rainforests, coastal mountains, tropical villages, and even coral reefs. Fans have also zeroed in on a peculiar cloud in the trailer shaped like a Gyarados or Lapras soaring overhead. Could it hint at a new legendary Pokémon?

The franchise has explored a tropical island setting before, with the Hawaii-inspired “Sun and Moon” titles released nearly a decade ago, but there is still plenty left to mine in that environment, especially now that the series has embraced open-world gameplay. Not every tropical archipelago is the same!

Main series Pokémon games typically launch in November, which suggests we have nearly two years before we can dive into the new region … but maybe we’ll be pleasantly surprised. It feels like ages since Pokémon Scarlet and Violet arrived in 2022, though at launch players criticized those games as rushed and glitchy. If a later release means an outstanding final product, I’m more than willing to wait.

In the meantime, Pokémon fans can buy updated versions of Pokémon FireRed and LeafGreen for the Switch now … though they come with a $20 price tag. It’s a shame there isn’t a super simple and legally questionable way to get those games on your phone for free.

Workers at Google and OpenAI back Anthropic’s Pentagon stance in public letter

Anthropic is at an impasse with the United States Department of War over the military’s demand for unrestricted use of the AI firm’s technology. With the Pentagon’s compliance deadline approaching on Friday afternoon, more than 300 Google employees and more than 60 OpenAI employees have signed an open letter urging their leaders to back Anthropic and reject such unrestricted use.

Specifically, Anthropic has opposed the deployment of its AI for domestic mass surveillance and for autonomous weapon systems. The letter’s signatories call on their employers to “set aside their disagreements and unite” to maintain the limits Anthropic has drawn.

“They’re attempting to divide each company through fear that the other will yield,” the letter states. “That strategy works only if none of us know the positions of the others.”

The letter explicitly urges leaders at Google and OpenAI to uphold Anthropic’s boundaries against mass surveillance and fully autonomous weapons: “We hope our leaders will set aside their differences and join forces to continue to reject the Department of War’s present demands.”

Company leaders have yet to respond formally to the letter. TechCrunch has reached out to Google and OpenAI for comment.

Unofficial remarks, however, suggest both companies are sympathetic to Anthropic’s position. In a Friday morning interview with CNBC, OpenAI CEO Sam Altman said he doesn’t “personally believe the Pentagon should be threatening DPA against these companies.” A CNN reporter noted that an OpenAI spokesperson confirmed the company agrees with Anthropic’s boundaries against autonomous weapons and mass surveillance.

Google DeepMind has not officially weighed in, but Chief Scientist Jeff Dean, apparently speaking for himself, voiced opposition to government mass surveillance.

“Mass surveillance undermines the Fourth Amendment and has a chilling effect on freedom of expression,” Dean posted on X. “Surveillance systems can be misused for political or discriminatory purposes.”

An Axios report indicates that the military is currently allowed to use X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT for unclassified purposes, and has been in discussions with Google and OpenAI to use their technology for classified operations.

While Anthropic has an existing partnership with the Pentagon, the AI company has stood firm on its position: its AI should not be used for either mass domestic surveillance or fully autonomous weapons.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that if the company does not concede, the Pentagon will either classify Anthropic as a “supply chain risk” or invoke the Defense Production Act (DPA) to compel compliance with military requirements.

In a statement released on Thursday, Amodei reiterated his company’s position. “These latter two threats are fundamentally contradictory: one brands us a security risk; the other brands Claude as vital to national security,” the statement notes. “Regardless, these threats do not alter our stance: we cannot, in good conscience, comply with their request.”