ChatGPT uninstallation rates spiked by 295% following the DoD agreement

On Saturday, February 28, uninstalls of ChatGPT’s mobile application in the U.S. soared by 295% compared to the previous day, as users reacted to the news of OpenAI’s agreement with the Department of Defense (DoD), which has been newly branded under the Trump administration as the Department of War.

This information, sourced from market intelligence firm Sensor Tower, indicates a significant rise in contrast to ChatGPT’s average day-over-day uninstall rate of 9% over the last month.

Conversely, downloads for Anthropic's Claude, a rival of OpenAI, surged by 37% day-over-day on Friday, February 27, and by 51% on Saturday, February 28, following the company's announcement that it would not partner with the U.S. defense department. Anthropic said it could not finalize the deal over concerns that its AI could be used to surveil Americans and to control fully autonomous weapons, which it does not yet consider safe.

Data suggests that a segment of consumers appeared to support Anthropic’s stance on the issue.

ChatGPT's downloads were also hit by the news of its DoD partnership, with U.S. downloads declining 13% day-over-day on Saturday, shortly after the deal was made public. The slide continued into Sunday, when downloads fell another 5% day-over-day. (Before the announcement, the app had recorded a 14% increase in downloads on Friday.)

These swift changes were also evident in Claude’s App Store ranking, as it reached No. 1 on the U.S. App Store on Saturday, maintaining that position as of Monday, March 2. This marks an increase of over 20 ranks compared to about a week prior (February 22, 2026).

Users are also voicing their feelings about OpenAI's agreement through the app's ratings: 1-star reviews for ChatGPT jumped 775% on Saturday, then rose another 100% day-over-day on Sunday, according to Sensor Tower. Over the same period, 5-star reviews fell 50%.

Other third-party data providers corroborate Sensor Tower’s results.

For example, Appfigures noted that Claude's total daily U.S. downloads on Saturday exceeded ChatGPT's for the first time. Its estimates put Claude's U.S. downloads up 88% day-over-day on Saturday.

Image Credits: Appfigures

Appfigures also mentioned that Claude is currently the No. 1 free iPhone application in six countries beyond the United States, specifically Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland.

A third market intelligence firm, Similarweb, stated that Claude’s U.S. downloads during the previous week were approximately 20 times higher than in January, although this may be attributed to factors beyond just the political context, it cautioned.

Stripe aims to convert your AI expenses into a profit hub

On Monday, Stripe unveiled a preview of a new capability aimed at assisting AI startups (and other businesses) in addressing the challenge of transmitting the foundational costs of AI model utilization to their clients.

Nevertheless, Stripe’s functionality extends beyond merely passing on token costs. It enables startups to impose a markup percentage on token utilization. For example, a company could automatically add a 30% charge above the token costs that the startup pays to the model provider.

According to Stripe, “Imagine you’re developing an AI application: you require a steady 30% margin on raw LLM token expenses across various providers. Billing streamlines this process.”

The billing feature lets startups choose which AI models they use. It tracks those models' API costs, then meters each customer's token usage and applies the profit-margin markup automatically.

As previously reported, AI startups monetize their offerings in various ways. Many use tiered monthly subscriptions with usage limits; once those thresholds are reached, subscribers may pay extra for exceeding the cap.

For example, Cursor modified the pricing on some tiers last year from unlimited access to rate-limited access, adding fees for extra usage.

In the absence of a usage threshold, users could incur substantial bills with the model providers, potentially pushing the startup into deficit. This scenario is particularly pressing for agentic startups. The more their clients engage with their agents, the more tokens they deplete from the foundational model provider, whether that’s OpenAI, Google Gemini, Anthropic, or others—rendering pricing and business model choices exceptionally pivotal.

TechCrunch event: San Francisco, CA | October 13-15, 2026

Stripe has also launched its own AI gateway, a tool that provides users with access to various models, allowing them to select the most suitable one for the task. However, the billing tool is also compatible with third-party gateways that are already well-known, such as those from Vercel and OpenRouter, as mentioned in a tweet by a Stripe product manager.

Naturally, there are other startups providing AI model cost management functionalities through their own gateways. For instance, OpenRouter, which offers entry to over 300 models, applies a flat 5.5% markup on token fees for its initial tier, and it also includes budget controls.

According to Stripe’s product manager on X, Stripe is not currently imposing its own markup on the gateway. This feature is still in waitlist mode. Regardless, if Stripe can assist startups in effectively transforming the tracking and billing of this expense into a profit-generating opportunity, it might prove to be transformative. Stripe did not immediately respond to a request for clarification on the general availability of this feature.

Geopolitical tensions are said to hinder the IPO of PayPay, which is backed by SoftBank.

Japan’s foremost mobile payment platform, PayPay, has seemingly delayed its U.S. initial public offering amid market instability and ongoing conflicts in the Middle East.

The firm intended to announce its IPO price range on Monday, March 2. PayPay was targeting a valuation of no less than ¥1.5 trillion ($10 billion), according to Bloomberg.

Established in 2018 as a collaboration between SoftBank and Yahoo Japan, with technical input from India’s Paytm, PayPay saw Paytm divest its remaining shares to SoftBank for around $279 million in late 2024.

Although 2026 began with optimistic aspirations for technology IPOs, numerous firms have either retracted or postponed their listing ambitions following a downturn in software stocks, spurred by concerns that AI may eventually make conventional software redundant. The markets have also been unsettled by U.S. military actions against Iran and the ensuing turmoil in adjacent nations.

In January, Motive Technologies, supported by Kleiner Perkins and known for its dashboard cameras designed for long-distance trucks, deferred its IPO, as reported by The Information. Furthermore, Clear Street, a tech brokerage, withdrew its IPO intentions last month.

While the landscape for smaller public listings is currently stagnant, investors are still looking forward to three potential “mega-IPOs” in 2026: SpaceX, OpenAI, and Anthropic.

Nobody possesses an effective strategy regarding how AI firms ought to collaborate with the government.

On Saturday evening, Sam Altman found out just how challenging it is to work with the U.S. government. Around 7 p.m., the CEO of OpenAI stated he would be answering questions publicly on X, aiming to clarify his company’s choice to take on the Pentagon contract that Anthropic had just abandoned. 

Most inquiries centered on OpenAI’s readiness to engage in mass surveillance and automated killing — precisely the activities that Anthropic had dismissed during its discussions with the Pentagon. Altman generally deferred to the public sector, asserting it wasn’t his responsibility to dictate national policy.

“I genuinely believe in the democratic process,” he remarked in one reply, “and that our elected officials hold the power, and that we must all uphold the constitution.”

An hour later, he expressed surprise at the number of individuals who appeared to disagree. “There is more open debate than I expected,” Altman stated, “regarding whether we should favor a democratically elected government or unelected private companies having more influence. I guess this is a point of contention for many.”

This moment reflects significant implications for both OpenAI and the broader tech industry. In his Q&A, Altman adopted a position typically seen in the defense sector, where military leaders and industry associates are expected to submit to civilian authority. 

Yet, what stands out more is that as OpenAI shifts from being a highly successful consumer startup to becoming a component of national security infrastructure, the company seems ill-prepared to handle its emerging responsibilities.

Altman’s public town hall occurred at a critical juncture for his organization. The Pentagon had just barred OpenAI competitor Anthropic for insisting on contractual restrictions regarding surveillance and automated weapon systems. Just hours later, OpenAI revealed it had secured the same contract that Anthropic had relinquished. Altman framed the agreement as a swift resolve to the conflict — and it was undoubtedly a lucrative opportunity. However, he appeared caught off guard by the backlash it incited from both the company’s users and staff.


OpenAI has been collaborating with the U.S. government for a significant time — but not like this. For example, when Altman was advocating to Congressional committees in 2023, he primarily adhered to the social media strategy. He was exuberant about the company’s world-altering potential while acknowledging the dangers and actively engaging with lawmakers — a winning mix for attracting investors while preempting regulation.

Almost three years later, that strategy is no longer viable. The power of AI is apparent, and the capital requirements are so substantial that deeper engagement with the government is unavoidable. The astonishing part is how unprepared both parties appear to be for it.

The most pressing conflict involves Anthropic itself and U.S. Defense Secretary Pete Hegseth's plan, announced Friday, to designate the lab a supply-chain risk. That warning hangs over the entire discussion like an unfired cannon. As former Trump official Dean Ball noted over the weekend, the designation would cut Anthropic off from hardware and hosting partners, essentially crippling the company. It would be an unprecedented action against an American firm, and although it may eventually be overturned in court, it will inflict damage in the meantime and send tremors through the industry.

As Ball explains the situation, Anthropic was fulfilling an existing contract under terms that had been set years ago — only for the administration to demand a revision of the terms. This far exceeds anything that would be acceptable between private enterprises and delivers a chilling signal to other suppliers.

“Even if Secretary Hegseth retracts and limits his extremely broad threat against Anthropic, significant harm has already been inflicted,” Ball wrote. “Most corporations, political figures, and others will have to proceed under the assumption that tribal logic will now prevail.”

This poses a direct threat to Anthropic, but it also represents a major challenge for OpenAI. The company is already feeling intense pressure from its workforce to maintain some form of a boundary. Simultaneously, right-wing media will be vigilant for any indication that OpenAI is not a steadfast political ally. Amid all of this is the Trump administration, doing everything in its power to complicate the situation.

One could argue that OpenAI did not intend to become a defense contractor, yet due to its ambitious goals, it has been forced to engage in the same arena as Palantir and Anduril. Gaining ground during the Trump administration means making choices. There are no neutral players here, and gaining some supporters will mean alienating others. It remains uncertain how significant a cost OpenAI will encounter, whether in lost business or employee attrition, but it is improbable that it will come through unscathed.

It might seem unusual that this crackdown is occurring at a time when more significant tech investors hold prominent positions in Washington than ever before, yet most appear entirely content with tribal reasoning. Among Trump-aligned venture capitalists, Anthropic has long been viewed as courting the Biden administration in ways that could harm the broader industry — a viewpoint intensified by Trump advisor David Sacks’ response to the unfolding conflict. With the roles now reversed, few appear inclined to advocate for the broader principle of free enterprise.

This is a tough position for any organization to occupy — and while politically aligned players may reap short-term benefits, they will equally be exposed when political dynamics inevitably change. There’s a reason why, for decades, the defense sector was dominated by slow-moving, heavily regulated conglomerates like Raytheon and Lockheed Martin. Operating as an industrial wing of the Pentagon afforded them the political protection necessary to sidestep politics, concentrating on technology without needing to reset with each new administration.

Today’s startup rivals may be quicker than their predecessors — but they are far less ready for the long haul.

A new application notifies you when someone in proximity is using smart glasses.

One of the primary concerns surrounding “luxury surveillance” gadgets, such as smart glasses equipped with embedded video recording cameras, is that they frequently resemble ordinary eyewear, which means you could be filmed without your awareness.

However, there is now an application that can identify and notify you when someone in your vicinity is utilizing smart glasses, or potentially other always-on recording devices.

The Android application, aptly titled Nearby Glasses, continuously scans for signals emitted from Bluetooth-capable technologies, including wearable gadgets manufactured by Meta (and Oakley) and Snap.

The app emerges at a time when there is growing opposition to devices that record or listen continuously, which critics argue collect data about individuals in proximity without their consent. 

Yves Jeanrenaud, the creator of the app, initially shared details about the project with 404 Media and mentioned he was partially motivated to develop Nearby Glasses after examining the independent publication’s investigations into wearable surveillance devices, including instances where Meta’s Ray-Ban smart glasses have been utilized in immigration enforcement actions and to document and intimidate sex workers.

On the project’s page, Jeanrenaud characterized smart glasses as an “unacceptable invasion, neglectful of consent, dreadful piece of technology.”

Jeanrenaud communicated to TechCrunch through email that his motivation stemmed from “observing the vast scale and inhumane aspects of the exploitation tied to these smart glasses.” He also referenced Meta’s choice to make face recognition a standard feature in its smart glasses, “which I believe opens the floodgates to all sorts of privacy-invasive actions.”

The app functions by listening for nearby Bluetooth signals that carry a publicly assigned identifier unique to the manufacturer of the Bluetooth device. If the app identifies a Bluetooth signal from a nearby hardware product made by Meta or Snap, it will send a notification to the user. (The app also permits users to input their own specific Bluetooth identifiers, enabling the detection of a wider array of wearable surveillance technologies.)
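The core of that detection logic can be sketched as a watchlist lookup. In Bluetooth Low Energy, manufacturer-specific advertisement data is keyed by a 16-bit company identifier assigned by the Bluetooth SIG; 0x004C is Apple's well-known ID (used in the test later in this article), while the Meta and Snap entries below are placeholders, not their real assigned numbers.

```python
# Watchlist of Bluetooth SIG company identifiers to flag.
# 0x004C (Apple) is a real assigned ID; the others are placeholders.
WATCHLIST = {
    0x004C: "Apple",
    0xAAAA: "Meta (placeholder ID)",
    0xBBBB: "Snap (placeholder ID)",
}

def check_advertisement(manufacturer_data: dict) -> list:
    """Return the watchlist vendor names present in one BLE advertisement's
    manufacturer-specific data, which is keyed by 16-bit company ID."""
    return [name for cid, name in WATCHLIST.items()
            if cid in manufacturer_data]
```

With a BLE scanning library such as bleak, each received `AdvertisementData` exposes `.manufacturer_data` as exactly this dict-of-company-ID-to-bytes shape, so the function above can be called from a scan callback.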

Side-by-side screenshots show the Nearby Glasses app in action, with a phone notification alerting the user to a nearby glasses wearer. Image Credits: Yves Jeanrenaud

Jeanrenaud noted that the app can occasionally produce false alarms. For instance, it might detect a nearby Meta virtual reality headset and notify the user on the assumption that it is a pair of the company's smart glasses. Virtual reality headsets, however, are typically larger and easier to spot.

To test this, I downloaded the app onto an Android device and walked around my city's neighborhood; to my surprise, I encountered no smart glasses users and received no alerts.

Nevertheless, since the app supports it, I entered a specific Bluetooth identifier (0x004C), which enabled me to seek out nearby devices made by Apple — and my testing device was instantly inundated with alerts (as expected), likely registering every Apple-produced device within my vicinity. 

This confirmed that the app operates as intended.

Jeanrenaud is in the process of adding new functionalities and mentioned that there is interest in an iPhone version of the app, but this hinges on his available time and resources.

Regarding the app, Jeanrenaud stated: “Undoubtedly, it serves as a technical solution to a societal issue (which is exacerbated by technology), and it won’t dissipate in the near future,” describing the app as a “desperate measure of resistance, with hopes that it would assist at least someone.”

Representatives for Meta and Snap did not respond to TechCrunch’s inquiries for comments.

Instagram monitored increasing usage while focusing on teenagers, lawyers contend

Instagram monitored the amount of time users engaged with its application, with company leaders highlighting “milestones” that the app achieved over the years. Daily usage of the app increased from 40 minutes per day in 2023 to 46 minutes per day in 2026, according to documents that surfaced during CEO Mark Zuckerberg’s testimony in a state court case that occurred in Los Angeles County Superior Court in February.

The emphasis on time-spent statistics is a crucial element of the lawsuit, representing one of Zuckerberg’s rare moments in front of a jury.

In K.G.M. v. Meta Platforms et al., currently in progress in L.A. County's Superior Court, a jury is tasked with determining whether social media companies bear responsibility for mental health issues among youth linked to their platforms or their addictive features. Snap and TikTok reached settlements before the trial commenced, while executives from the other defendants, Meta and YouTube, are testifying as part of the proceedings.

The 19-year-old plaintiff, identified by the initials K.G.M. or “Kaley,” asserts that engaging with social media at an early age had a detrimental effect on her mental health, leading to an addiction to technology and the development of depression, including suicidal thoughts.

Meta disputes that its app is to blame for Kaley's issues.

“The jury in Los Angeles must consider whether Instagram was a significant factor in the plaintiff’s mental health challenges. The evidence will indicate that she encountered numerous serious, challenging obstacles long before she ever used social media,” stated Meta spokesperson Stephanie Otway in an emailed comment regarding the case.

Legal representatives for the plaintiffs aim to demonstrate that Meta established internal goals to enhance the amount of time users spent on Instagram, despite being aware of underage individuals on the platform. During Zuckerberg’s testimony, he was questioned on why he informed Congress in 2024 that children under 13 were prohibited from using Instagram, when internal records revealed that the company was aware of around 4 million children under 13 on the app as early as 2015. The document also indicated that this number represented 30% of all 10- to 12-year-olds in the U.S.

Zuckerberg defended himself against this line of questioning, asserting that he responded to Congress truthfully regarding the company’s policy, and mentioned that Instagram eliminated underage users it discovered. He also sought to clarify that the “milestones” the company tracked were not equivalent to specific “goals” set for Instagram’s team to accomplish.

However, additional documents cited by the plaintiff’s legal team during his testimony illustrated Instagram’s increasing focus on the tween and teenage demographics, with emails from a former product manager stating, “Our overall company goal is total teen time spent,” and that “Mark has determined that the company’s top priority for the first half of 2017 is teens.” Another market analysis in December 2018 revealed that tweens constituted the “highest retention age group” in the U.S., implying the company’s interest in this demographic.

Another email from Zuckerberg adviser Nick Clegg, who departed the company last year, highlighted that Instagram’s age restrictions were essentially “unenforceable.”

Despite awareness of underage users on its platform, Instagram did not take measures to address its existing underage users until August 2021, when it started requiring users to input their birth dates, the plaintiff’s lawyers contended. (Meta responded that it began requesting ages at the time of sign-up in 2019 for new users, however.)

While Instagram has recently implemented a series of protections for teens and parental controls, its targeting of the younger demographic persists. Other internal documents referenced in this testimony indicated that Meta’s current ambition is for Instagram to be the largest platform for teens by monthly active users both in the U.S. and globally this year.

If you or someone you care about is contemplating suicide or needs to talk, there are people ready to help. Call or text 988 to reach the National Suicide Prevention Lifeline.

This article was amended after publication to clarify that this is not Zuckerberg’s first instance before a jury, as he previously participated in a trial focused on Meta’s VR technology.

Users are abandoning ChatGPT for Claude — here's how to transition.

Numerous users are migrating to Claude amidst a series of controversies connected to ChatGPT and its parent organization, OpenAI. 

The pivotal moment arose when Anthropic, the firm behind Claude, declined to permit the Department of Defense to utilize its AI models for extensive domestic surveillance or entirely autonomous weaponry. In retaliation, President Trump instructed all federal agencies to cease using Anthropic’s products, and Defense Secretary Pete Hegseth disclosed intentions to label the company a supply-chain risk. 

Shortly thereafter, OpenAI revealed its own arrangement with the Pentagon, asserting to incorporate safeguards, yet the agreement has ignited extensive discussions regarding privacy and the ethical application of AI.

Consequently, Claude has ascended to the pinnacle of the free app rankings in Apple’s U.S. App Store, surpassing ChatGPT. According to Anthropic, daily registrations have reached unprecedented levels, free users have surged by over 60% since January, and the number of paid subscribers has more than doubled this year. 

For a significant number of users, the recent issues have rendered Claude an attractive substitute for ChatGPT. If you’re contemplating a switch, this guide will assist you in transferring your data and closing your ChatGPT account.

How to export your data from ChatGPT

Ending your relationship with ChatGPT shouldn’t entail losing years of digital memories. Rather than beginning anew with a different AI assistant, you can transfer your data to Claude so it can immediately understand your preferences.

There are several ways to do this. One starting point is Settings: navigate to Personalization and find the Memory section. Select "Manage," review your stored data, and update anything that no longer accurately reflects your preferences. Once everything is current, copy the information you wish to retain.


Alternatively, you may export your complete chat history. Go to Settings, select Data Controls, and click on “Export Data.” ChatGPT will compile your chat records into text or JSON files and send them to you via email. (Note that this could take some time if you possess a large history.)

Image Credits: Screenshot of ChatGPT Settings by TechCrunch

You may also opt for a manual approach by copying important conversations from your history or requesting ChatGPT to summarize your primary preferences, frequently mentioned topics, and any custom instructions you utilize.
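If you go the JSON-export route, a small script can condense the file before you paste anything into Claude. The snippet below is a hedged sketch: it assumes a simplified export shape, a top-level list of conversations each carrying a "title" key, and ChatGPT's actual export format may differ, so adapt the key names to whatever your downloaded file contains.

```python
import json

def summarize_export(path: str, limit: int = 20) -> list:
    """Return up to `limit` conversation titles from an exported JSON file.
    Assumes the file is a top-level list of conversation objects, each with
    a "title" field -- adjust the keys if your export is structured differently."""
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    return [c.get("title", "(untitled)") for c in conversations][:limit]
```

A title list like this is usually enough context to hand Claude alongside a request such as "summarize my key preferences from these topics."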

Import your data into Claude


After you’ve compiled your data, moving it to Claude is simple. 

Open Claude, navigate to Settings, then Capabilities, and ensure Memory is activated. Next, start a new conversation and prompt something like, “Here’s some vital context I’d like you to remember. Update your memory about me with this.” Then paste your details or summaries directly into the chat. 

For exported chat files, avoid pasting raw logs. Instead, instruct Claude with something along the lines of: “Review this and summarize my key preferences.”

We also advise confirming with Claude that your data has been accurately stored. You can always adjust your preferences as they evolve.

Permanently delete your ChatGPT account

To fully sever ties with ChatGPT, merely canceling your subscription isn’t sufficient to erase your data. 

Here’s what you need to do:

  • Access Settings, then Personalization, and select Memory.
  • Eliminate any stored memories or personalization settings.
  • For added peace of mind, type “Delete all my memory and personalized data” as a final command in chat.
  • Then head to your account management settings to delete your account completely.

This story has been updated to clarify the claim that Claude’s memory feature is exclusively for paid subscribers. It is available to free users as well.

Paramount+ and HBO Max will combine into a single streaming service following the completion of the WBD agreement.

In light of the unexpected announcement that Netflix had pulled its offer to buy Warner Bros. Discovery (WBD), Paramount Skydance has stepped in to acquire the company. On Monday, CEO David Ellison shared during an investor call that the plan is to integrate Paramount+ and HBO Max into one cohesive platform. 

“The new company will be the home for many of the most iconic and cherished franchises globally, ranging from ‘Harry Potter’ to ‘Top Gun,’ ‘Star Trek’ to ‘Looney Tunes,’ ‘Game of Thrones’ to ‘Yellowstone.’ This provides a significant opportunity, and we are determined to fuel the creative capabilities of both studios, positioning them as the prime destination for the industry’s top creative talent,” Ellison stated during the call.

Ellison further assured investors that HBO’s brand and creative direction as a studio would remain intact, declaring, “Our belief is HBO should continue to be HBO.” He also promised to sustain a strong theatrical lineup, with a commitment to producing 15 films each year for each studio, resulting in at least 30 theatrical releases annually.

This news follows Paramount’s recent deal to acquire WBD in an arrangement valued at $110 billion. The merger will unite a wide range of film, TV, and news assets under a single corporate roof and is anticipated to significantly alter the Hollywood landscape as it currently exists. It also continues the trend of consolidation among other leading streaming services, similar to the merger of Disney+ and Hulu.

With a forecasted subscriber count exceeding 200 million, the new streaming service will be positioned as a major competitor amongst the leading streaming platforms.

Nonetheless, the merger faces intense scrutiny from the U.S. Department of Justice due to concerns about media consolidation and competition in the market. Recently, California Attorney General Rob Bonta pledged to thoroughly review the acquisition. 

Moreover, industry experts caution that the merger could lead to considerable job reductions, raising employee worries regarding layoffs and salary cuts. There are also apprehensions about editorial independence, particularly considering the Ellison family’s political ties to Donald Trump and heightened scrutiny over the newsrooms at CBS and CNN.


Ellison expressed optimism that the merger would proceed smoothly. He characterized the acquisition as "pro-competition, pro-consumer, and pro-creative community," saying it will "forge a more robust Hollywood and global production landscape, one that enhances consumer options and opens doors for creative talent."

Tech employees call on the DOD and Congress to remove the Anthropic designation as a supply-chain threat.

Numerous technology professionals have endorsed an open letter requesting the Department of Defense to retract its classification of Anthropic as a “supply-chain risk.” The letter further urges Congress to intervene and “assess if employing these extraordinary powers against an American tech firm is suitable.”

This letter features signatures from prominent technology and venture capital entities, including OpenAI, Slack, IBM, Cursor, Salesforce Ventures, among others. It comes in response to a conflict between the DOD and Anthropic after the AI lab declined to provide the military unrestricted access to its AI systems last week. 

Anthropic’s two non-negotiable terms in its discussions with the Pentagon were that it did not wish for its technology to be utilized for mass surveillance of U.S. citizens or to fuel autonomous weapons able to target and fire without human involvement. The DOD asserted it had no intentions of undertaking either action but claimed it should not be bound by vendor regulations. 

After Anthropic CEO Dario Amodei opted not to reach a consensus with Defense Secretary Pete Hegseth, President Donald Trump instructed federal agencies on Friday to cease using Anthropic’s technology following a six-month transition window. Subsequently, Hegseth aimed to classify Anthropic as a supply-chain risk — a label typically reserved for foreign threats that would prohibit the AI company from collaborating with any agency or firm that engages with the Pentagon. 

In a statement on Friday, Hegseth declared: “Effective immediately, no contractor, supplier, or partner conducting business with the United States military may engage in any commercial operations with Anthropic.” 

However, a statement on X does not inherently categorize Anthropic as a supply-chain risk. The government must finalize a risk evaluation and inform Congress before military affiliates can sever relationships with Anthropic or its products. Anthropic expressed in a blog post that the classification is “legally unsound” and that it plans to “contest any supply chain risk designation in court.”

Many industry insiders view the administration’s actions towards Anthropic as severe and indicative of retribution. 


“When two parties cannot find common ground, the usual approach is to part ways and collaborate with a competitor,” the open letter states. “This predicament creates a troubling precedent. Penalizing an American corporation for refusing to accept modifications to a contract sends a distinct message to every tech company in America: comply with whatever terms the government insists upon, or risk retaliation.” 

In addition to worries about the government’s severe handling of Anthropic, many in the sector remain apprehensive regarding potential overreach by the government and the misuse of AI for malicious ends. 

Boaz Barak, a researcher at OpenAI, stated in a social media update on Monday that preventing governments from utilizing AI for mass domestic surveillance is also his “personal boundary” and “it should be everyone’s.”

Immediately following Trump’s public criticism of Anthropic, OpenAI revealed it had secured an agreement for its models to be used in the DOD’s classified settings. OpenAI CEO Sam Altman mentioned last week that the company shares the same red lines as Anthropic.

“If there is a silver lining to the events of last week, it would be that we in the AI field begin to regard the issue of leveraging AI for government misconduct and surveilling its citizens as a significant risk in its own right,” Barak wrote. “We have done a commendable job assessing, mitigating, and establishing processes for risks such as bioweapons and cybersecurity. Let’s apply similar methods here.”