Tech employees call on the DOD and Congress to remove the Anthropic designation as a supply-chain threat.

Numerous technology professionals have signed an open letter urging the Department of Defense to retract its classification of Anthropic as a “supply-chain risk.” The letter also calls on Congress to intervene and “assess if employing these extraordinary powers against an American tech firm is suitable.”

The letter carries signatures from employees of prominent technology and venture capital firms, including OpenAI, Slack, IBM, Cursor, and Salesforce Ventures. It comes in response to a conflict between the DOD and Anthropic that erupted after the AI lab declined last week to give the military unrestricted access to its AI systems.

Anthropic’s two non-negotiable terms in its discussions with the Pentagon were that its technology not be used for mass surveillance of U.S. citizens or to power autonomous weapons capable of targeting and firing without human involvement. The DOD asserted it had no intention of doing either but argued that it should not be bound by a vendor’s restrictions.

After Anthropic CEO Dario Amodei failed to reach an agreement with Defense Secretary Pete Hegseth, President Donald Trump on Friday instructed federal agencies to stop using Anthropic’s technology after a six-month transition window. Hegseth subsequently moved to designate Anthropic a supply-chain risk — a label typically reserved for foreign threats that would bar the AI company from working with any agency or firm that does business with the Pentagon.

In a statement on Friday, Hegseth declared: “Effective immediately, no contractor, supplier, or partner conducting business with the United States military may engage in any commercial operations with Anthropic.” 

However, a statement on X does not inherently categorize Anthropic as a supply-chain risk. The government must finalize a risk evaluation and inform Congress before military affiliates can sever relationships with Anthropic or its products. Anthropic expressed in a blog post that the classification is “legally unsound” and that it plans to “contest any supply chain risk designation in court.”

Many industry insiders view the administration’s actions towards Anthropic as severe and indicative of retribution. 


“When two parties cannot find common ground, the usual approach is to part ways and collaborate with a competitor,” the open letter states. “This predicament creates a troubling precedent. Penalizing an American corporation for refusing to accept modifications to a contract sends a distinct message to every tech company in America: comply with whatever terms the government insists upon, or risk retaliation.” 

Beyond concerns about the administration’s harsh treatment of Anthropic, many in the sector remain apprehensive about government overreach and the misuse of AI for malicious ends.

Boaz Barak, a researcher at OpenAI, stated in a social media update on Monday that preventing governments from utilizing AI for mass domestic surveillance is also his “personal boundary” and “it should be everyone’s.”

Immediately following Trump’s public criticism of Anthropic, OpenAI revealed it had secured an agreement for its models to be used in the DOD’s classified settings. OpenAI CEO Sam Altman mentioned last week that the company shares the same red lines as Anthropic.

“If there is a silver lining to the events of last week, it would be that we in the AI field begin to regard the issue of leveraging AI for government misconduct and surveilling its citizens as a significant risk in its own right,” Barak wrote. “We have done a commendable job assessing, mitigating, and establishing processes for risks such as bioweapons and cybersecurity. Let’s apply similar methods here.”