
CEO Sam Altman acknowledged that OpenAI’s agreement with the Department of Defense was “undeniably hurried,” and that “the perception isn’t favorable.”
Following the collapse of discussions between Anthropic and the Pentagon on Friday, President Donald Trump instructed federal agencies to cease utilizing Anthropic’s technology after a six-month adjustment window, while Secretary of Defense Pete Hegseth labeled the AI firm a supply-chain risk.
In response, OpenAI swiftly announced it had finalized its own agreement for its models to be deployed in classified settings. With Anthropic asserting that it would not allow its technology to be used in fully autonomous weapons or large-scale domestic surveillance, and Altman confirming that OpenAI maintained similar boundaries, two pertinent questions arose: Was OpenAI truthful about its protections? And how could it finalize a deal when Anthropic could not?
While OpenAI executives defended the deal on social media, the company also released a blog post detailing its stance.
The post highlighted three domains where OpenAI claimed its models are prohibited — mass domestic surveillance, autonomous weapon systems, and “high-stakes automated decisions (e.g., systems like ‘social credit’).”
The firm declared that, unlike other AI entities that have “diminished or eliminated their safety guardrails and relied mainly on usage policies as their principal safeguards in national security applications,” OpenAI’s agreement shields its boundaries “through a more comprehensive, multilayered strategy.”
“We retain complete discretion over our safety framework, we operate through the cloud, authorized OpenAI personnel are involved, and we have robust contractual protections,” the blog stated. “This complements the strong safeguards already present in U.S. law.”
The company added, “We are unsure why Anthropic couldn’t secure this agreement, and we hope they and other laboratories will consider it.”
After the blog was released, Techdirt’s Mike Masnick asserted that the agreement “certainly permits domestic surveillance,” as it states that the collection of private information will adhere to Executive Order 12333 (in addition to various other laws). Masnick characterized that order as “how the NSA obscures its domestic surveillance by intercepting communications by accessing lines *outside the US*, even if involving information from/on US citizens.”
In a LinkedIn post, OpenAI’s head of national security partnerships Katrina Mulligan contended that much of the conversation surrounding the contract language presumes “the only barrier between Americans and the utilization of AI for mass domestic surveillance and autonomous weaponry is a singular usage policy clause in a solitary contract with the Department of War.”
“That’s not how any of this functions,” Mulligan stated, adding, “Deployment architecture is more crucial than contract language […] By confining our deployment to cloud APIs, we can guarantee that our models cannot be directly embedded into weapons systems, sensors, or other operational devices.”
Altman also addressed questions regarding the agreement on X, where he acknowledged that it had been rushed and had provoked notable backlash against OpenAI (to the point that Anthropic’s Claude surpassed OpenAI’s ChatGPT in Apple’s App Store on Saturday). So why proceed?
“We truly aimed to de-escalate matters, and we believed the deal being offered was favorable,” Altman remarked. “If we are correct and this indeed results in a de-escalation between the DoW and the industry, we will be seen as geniuses, and a company that endured considerable pain to assist the sector. If not, we will continue to be perceived as […] hasty and careless.”

