The past two weeks have been marked by a confrontation between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth over the military's deployment of AI.
Anthropic maintains that its AI models should not be used for mass surveillance of American citizens or for fully autonomous weapons that execute strikes without human oversight. Hegseth, meanwhile, has argued that the Department of Defense should not be constrained by a vendor's rules, asserting that any "lawful use" of the technology ought to be acceptable.
On Thursday, Amodei made clear that Anthropic will not yield, even under threat that the company could be designated a supply chain risk as a result. With the news moving quickly, it's worth stepping back to consider what is actually at stake in this fight.
At its essence, this dispute revolves around the question of who has authority over powerful AI systems — the firms that create them or the government that seeks to employ them.
What concerns Anthropic?
As noted above, Anthropic wants to prevent its AI models from being used for mass surveillance of U.S. citizens or for autonomous weapon systems that lack human involvement in targeting and firing decisions. Traditional defense contractors typically have little say over how their products are used; Anthropic, however, has argued since its founding that AI poses unique dangers that demand distinctive safeguards. From the company's perspective, the challenge is preserving those safeguards while the military uses the technology.
The U.S. military already depends on highly automated systems, some of them lethal. Historically, the decision to use lethal force has rested with human operators, yet there are few legal restrictions on the military's use of autonomous weaponry. The DoD does not impose an outright ban on fully autonomous weapon systems: under a 2023 DoD directive, AI systems are permitted to identify and engage targets without human involvement, so long as they meet specific criteria and receive approval from senior defense officials.
This is exactly the scenario that worries Anthropic. Military technology is inherently secretive, so if the U.S. military moved to automate lethal decision-making, the public might not learn of it until it was already in operation. And if Anthropic's models were involved, that could count as "lawful use."
Anthropic's argument is not that such applications should be ruled out forever; rather, it holds that its models are not yet capable enough to support them safely. Imagine an autonomous system misidentifying a target, escalating a conflict without human consent, or making an instantaneous lethal decision that cannot be undone. Put a less-capable AI in control of weaponry and the result is a fast, confident machine that is poor at exactly the high-stakes judgments that matter most.
AI also has the potential to expand lawful surveillance of American citizens to a troubling degree. U.S. law already permits certain surveillance of citizens, including the collection of messages, emails, and other communications. AI changes the picture by enabling automated large-scale pattern analysis, entity resolution across databases, predictive risk scoring, and continuous behavioral monitoring.
What are the Pentagon’s objectives?
The Pentagon contends that it should be able to use Anthropic's technology for any lawful application it deems necessary, rather than being restricted by Anthropic's internal policies on matters such as autonomous weapons or surveillance. Hegseth has said explicitly that the Department of Defense will not be bound by a vendor's terms and intends to exercise "lawful use" of the technology.
Sean Parnell, the Pentagon’s chief spokesperson, stated in a Thursday X post that the department has no intention of conducting mass domestic surveillance or deploying autonomous weapons.
“Here’s what we’re asking: Permit the Pentagon to utilize Anthropic’s model for all lawful purposes,” Parnell wrote. “This is a straightforward, practical request that will prevent Anthropic from jeopardizing vital military operations and potentially endangering our warfighters. We will not allow ANY corporation to dictate the terms of our operational decisions.”
He added that Anthropic has until 5:01 p.m. ET on Friday to decide. “If not, we will rescind our partnership with Anthropic and classify them as a supply chain risk for DoW,” he wrote.
While the DoD's stated position is that it should not be limited by a corporation's usage policies, Hegseth's objections to Anthropic have at times seemed rooted in cultural grievance. In a January speech at SpaceX and xAI offices, Hegseth railed against "woke AI," which some saw as a precursor to his conflict with Anthropic.
“Department of War AI will not embody woke ideals,” Hegseth stated. “We are developing combat-ready weapons and systems, not chatbots suited for an Ivy League faculty lounge.”
What comes next?
The Pentagon has threatened either to label Anthropic a "supply chain risk," effectively blacklisting the company from government contracts, or to invoke the Defense Production Act (DPA) to compel it to adapt its model to military demands. Hegseth has given Anthropic until 5:01 p.m. ET on Friday to respond, and with the deadline looming, it remains unclear whether the Pentagon will follow through.
This is a conflict neither side can easily abandon. Sachin Seth, a venture capitalist at Trousdale Ventures specializing in defense technology, suggests that a supply chain risk designation for Anthropic could mean “lights out” for the company.
However, he noted that dismissing Anthropic from the DoD could itself create a national security problem.
“[The Department] would have to wait six to 12 months for either OpenAI or xAI to close the gap,” Seth informed TechCrunch. “That leaves a period of up to a year during which they might be operating from an inferior model, namely the second or third best.”
xAI is preparing to become classified-ready and could supplant Anthropic, and given owner Elon Musk's statements on the issue, it is reasonable to think the company would willingly give the DoD full control over its technology. Recent reporting suggests OpenAI may adhere to the same limitations as Anthropic.