
Anthropic presented two sworn statements to a federal court in California late Friday, countering the Pentagon’s claim that the AI firm poses an “unacceptable threat to national security” and asserting that the government’s argument rests on technical misunderstandings and on allegations that were never raised during the lengthy negotiations that preceded the dispute.
The statements were submitted alongside Anthropic’s reply brief in its lawsuit against the Department of Defense and come just ahead of a hearing scheduled for this Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute began in late February, when President Trump and Defense Secretary Pete Hegseth announced their intention to sever ties with Anthropic after the company declined to permit unrestricted military use of its AI technology.
The declarations were made by Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the firm’s Head of Public Sector.
Heck, a former National Security Council official who served in the White House during the Obama administration before moving to Stripe and then to Anthropic, where she oversees government relations and policy, attended the February 24 meeting at which CEO Dario Amodei met with Hegseth and Pentagon Under Secretary Emil Michael.
In her statement, Heck singles out what she calls a fundamental falsehood in the government’s filings: that Anthropic sought some form of approval authority over military operations. She flatly denies the claim. “At no moment during Anthropic’s discussions with the Department did I or any other employee from Anthropic indicate that the company desired such a role,” she wrote.
She also asserts that the Pentagon’s concern that Anthropic could disable or modify its technology during operations was never raised in negotiations. Instead, she says, the issue surfaced for the first time in the government’s court filings, leaving Anthropic no chance to respond.
A significant point in Heck’s declaration that is likely to attract attention is that on March 4 — one day after the Pentagon formally established its supply-chain risk determination against Anthropic — Under Secretary Michael sent an email to Amodei indicating that the two parties were “very close” on the issues that the government currently uses to argue Anthropic is a national security threat: its stance on autonomous weapons and mass surveillance of American citizens.
The email, which Heck includes as an exhibit to her declaration, is important to consider alongside Michael’s public statements in the subsequent days. On March 5, Amodei issued a statement declaring that the company had been engaged in “productive discussions” with the Pentagon. The following day, Michael stated on X that “there is no active Department of War discussion with Anthropic.” A week later, he told CNBC that there was “no opportunity” for renewed negotiations.
Heck seems to suggest: If Anthropic’s position on those two matters is what makes it a national security concern, why did the Pentagon’s own official say the two parties were practically aligned on those very issues immediately following the risk designation? (While she refrains from explicitly stating that the government leveraged the designation as a negotiation tactic, the timeline she presents raises the question.)
Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services overseeing AI deployments for government clients, including in classified settings. At Anthropic, he has been instrumental in assembling the team that integrated its Claude models into national security and defense applications, including the $200 million contract with the Pentagon announced last summer.
His declaration addresses the government’s assertion that Anthropic could interfere with military operations by disabling the technology or changing how it works, a claim Ramasamy argues is not technically feasible. According to him, once Claude is deployed inside a government-secured, “air-gapped” environment managed by a third-party contractor, Anthropic cannot access it: there is no remote kill switch, no backdoor, and no way to push unauthorized updates. Any kind of “operational veto” is illusory, he suggests, explaining that any modification to the model would require explicit consent and action from the Pentagon.
Ramasamy further contends that Anthropic cannot even see what government users are entering into the system, let alone collect that data.
He also challenges the government’s argument that Anthropic’s employment of foreign nationals creates a security risk. He points out that Anthropic employees have gone through U.S. government security clearance assessments — the same vetting process mandated for access to classified information — adding in his declaration that “to my knowledge,” Anthropic is the only AI firm where cleared staff actually developed the AI models intended for classified use.
Anthropic’s lawsuit claims that the supply-chain risk designation — the first ever imposed on an American company — constitutes governmental retaliation for the firm’s publicly voiced opinions on AI safety, in violation of the First Amendment.
The government, in a 40-page document filed earlier this week, rejected that narrative entirely, asserting that Anthropic’s refusal to permit all lawful military applications of its technology was a business choice, not safeguarded speech, and that the designation was simply a national security decision and not punishment for the company’s viewpoints.

