
On Friday afternoon, right as this interview began, a news notification popped up on my screen: the Trump administration was cutting ties with Anthropic, the San Francisco AI firm founded in 2021 by Dario Amodei. Shortly thereafter, Defense Secretary Pete Hegseth invoked national security legislation to blacklist the company from working with the Pentagon after Amodei declined to permit Anthropic’s technology to be used for widespread surveillance of American citizens or for autonomous armed drones capable of selecting and eliminating targets without human oversight.
It was an astonishing turn of events. Anthropic risks forfeiting a contract valued at up to $200 million and may be prohibited from collaborating with other defense contractors after President Trump shared a post on Truth Social instructing every federal agency to “immediately halt all usage of Anthropic technology.” (Anthropic has since stated it will contest the Pentagon’s actions in court.)
Max Tegmark has dedicated much of the past decade to warning that the competition to develop increasingly potent AI systems is advancing quicker than the world’s capacity to manage them. The MIT physicist founded the Future of Life Institute in 2014 and in 2023 played a key role in drafting an open letter — ultimately endorsed by over 33,000 individuals, including Elon Musk — calling for a halt in the development of advanced AI.
His perspective on the Anthropic situation is unflinching: the company, like its competitors, has contributed to its own dilemma. Tegmark argues that the issue starts not with the Pentagon but with a decision made years ago — a collective choice within the industry to resist regulation. Anthropic, OpenAI, Google DeepMind, and others have long asserted their commitment to self-regulation. This week, Anthropic even abandoned the cornerstone of its own safety assurance — the promise not to release increasingly powerful AI systems until it was assured they wouldn’t pose a risk.
Now, without established regulations, there isn’t much safeguarding these entities, Tegmark notes. Here’s more from that discussion, condensed for brevity and clarity. You can listen to the entire dialogue next week on TechCrunch’s StrictlyVC Download podcast.
When you learned of the news regarding Anthropic, what was your initial response?
The path to disaster is lined with good intentions. It’s fascinating to reflect back a decade, when people were filled with optimism about how we would harness artificial intelligence to cure cancer, enhance prosperity in America, and strengthen the nation. And now we find ourselves with the U.S. government angered at this company for not wanting AI to be employed for domestic mass surveillance of Americans, and also opposing the creation of killer robots that can independently — without any human intervention — determine whom to kill.
Anthropic has built its entire identity around being a safety-first AI company, yet it was engaged with defense and intelligence organizations [tracing back to at least 2024]. Do you find that at all contradictory?
It is indeed contradictory. If I may offer a somewhat cynical perspective — yes, Anthropic has been quite adept at marketing itself as safety-oriented. However, when one scrutinizes the reality rather than the assertions, it becomes apparent that Anthropic, OpenAI, Google DeepMind, and xAI have all spoken extensively about their dedication to safety. None have advocated for binding safety regulations like those found in other industries. All four of these companies have now failed to uphold their own promises. First, we witnessed Google with its grand slogan, ‘Don’t be evil.’ Then they abandoned that. Following that, they discarded another broader commitment that essentially stated they would not inflict harm with AI. They moved away from this in order to sell AI for surveillance and military applications. OpenAI recently removed the term safety from its mission statement. xAI disbanded their entire safety team. And now Anthropic, earlier this week, renounced its most critical safety commitment — the pledge not to deploy advanced AI systems until it was assured they wouldn’t cause damage.
How did entities that made such significant safety pledges end up in this predicament?
All of these companies, particularly OpenAI and Google DeepMind, but to a degree Anthropic as well, have consistently lobbied against AI regulation, asserting, ‘Just trust us, we’ll self-regulate.’ And they’ve been effective in their lobbying efforts. Consequently, we currently have less oversight over AI systems in America than we do over sandwiches. For instance, if you wish to open a sandwich shop and a health inspector discovers 15 rats in the kitchen, they won’t permit you to sell any sandwiches until the problem is fixed. Conversely, if you claim, ‘Don’t worry, I won’t sell sandwiches; I’m going to offer AI girlfriends for 11-year-olds, which have been associated with suicides previously, and then I’m going to launch something called superintelligence that could potentially overthrow the U.S. government, but I feel good about mine’ — the inspector has to respond, ‘Alright, proceed, just avoid selling sandwiches.’
There are food safety regulations but no AI regulations.
And I hold all of these companies collectively accountable for this. Because if they had taken seriously all the promises they made previously about being safe and virtuous, and united to approach the government, asking, ‘Please transform our voluntary commitments into U.S. law that mandates even our most negligent competitors’ — this situation might have been avoided. Instead, we’re facing a comprehensive regulatory void. History has shown us what results from complete corporate immunity: thalidomide, tobacco firms marketing cigarettes to children, asbestos leading to lung cancer. It’s somewhat ironic that their own steadfastness against establishing laws delineating acceptable and unacceptable practices regarding AI is now rebounding against them.
There currently exists no law prohibiting the creation of AI to kill Americans, so the government can simply demand it. Had the companies previously advocated for, ‘We desire this law,’ they wouldn’t find themselves in this predicament. They effectively shot themselves in the foot.
The counter-argument from the companies is always centered around the competition with China — if American firms don’t pursue certain paths, Beijing will. Does that argument hold true?
Let’s dissect that. The most prevalent talking point from the lobbyists representing AI companies — who are now better funded and more numerous than the combined lobbyists from the fossil fuel, pharmaceutical, and military-industrial sectors — is that whenever any regulatory proposals are made, they say, ‘But China.’ So let’s examine that. China is currently working on banning AI girlfriends outright. Not merely age limits — they’re contemplating a ban on all anthropomorphic AI. Why? Not to appease America, but because they believe this is detrimental to Chinese youth and makes China vulnerable. Clearly, it’s also weakening American youth.
When advocates state we must compete to develop superintelligence to gain the upper hand against China — while we lack clarity on how to control superintelligence, leading to the potential of humanity losing command of the Earth to alien machines — it’s apparent that the Chinese Communist Party favors control. Who in their right mind believes Xi Jinping will allow a Chinese AI entity to develop something that could topple the Chinese regime? Absolutely no chance. This scenario also poses a substantial threat to the American government — if it were to be overthrown by the first American enterprise to achieve superintelligence. This constitutes a national security risk.
That’s a compelling perspective — framing superintelligence as a national security risk instead of an asset. Do you perceive this viewpoint gaining traction in Washington?
I believe that if individuals within the national security sector hear Dario Amodei portray his vision — he delivered a notable speech where he mentioned that we’ll soon have a nation of geniuses housed in a data center — they might start pondering: ‘Wait, did Dario just mention the word nation? Perhaps I should include that nation of geniuses in a data center in the same threat assessment I’m tracking, as that sounds perilous for the U.S. government.’ And I expect that in the near future, a sufficient number of individuals in the U.S. national security sector will recognize that uncontrollable superintelligence is a threat, not merely a tool. This situation is entirely analogous to the Cold War. There was a quest for supremacy — economically and militarily — against the Soviet Union. We, as Americans, won that round without engaging in the subsequent conflict, which would determine who could create the most nuclear craters in the opposing superpower. People realized that such an approach was suicidal. No one prevails. The same reasoning is relevant in this context.
What implications does this hold for the overall pace of AI advancement? And how near do you think we are to the systems you’re discussing?
Six years ago, nearly every AI expert I knew predicted we were decades away from developing AI capable of mastering language and knowledge at human levels — perhaps 2040, perhaps 2050. They were all mistaken, as we now possess that capability. AI has advanced swiftly from high school proficiency to college, PhD level, and even to university professor standards in certain domains. Last year, AI achieved gold-medal performance at the International Mathematical Olympiad, one of the most demanding tests of human reasoning. I co-authored a paper with Yoshua Bengio, Dan Hendrycks, and other leading AI researchers just a few months back, providing a rigorous definition of AGI. By this definition, GPT-4 was 27% of the way there, and GPT-5 was 57% of the way. So we aren’t there yet, but jumping from 27% to 57% so rapidly suggests that it may not be long before we reach that point.
When I lectured to my students yesterday at MIT, I informed them that even if it takes four years, when they graduate they might find job opportunities severely limited. It’s certainly time to begin preparations.
With Anthropic now on the blacklist, I’m intrigued to observe what unfolds — will the other AI giants support it and declare, ‘We won’t pursue this either?’ Or will a company like xAI step forward and state, ‘Anthropic didn’t want that contract; we’re on board’? [Editor’s note: Hours after the interview, OpenAI announced its own agreement with the Pentagon.]
Last night, Sam Altman stated that he stands with Anthropic and shares the same ethical boundaries. I respect him for his courage in voicing that stance. Google, as of the start of this interview, had yet to issue any response. If they remain silent, it would be profoundly embarrassing for them as a corporation, and much of their workforce is likely to feel the same way. We also haven’t heard anything from xAI yet. It will be fascinating to see how this unfolds. Essentially, this is a moment that requires everyone to show their true colors.
Is there a scenario in which the outcome could be favorable?
Yes, and this is why I find myself oddly optimistic. There’s such a clear alternative available. If we begin to treat AI companies like any other enterprise — eliminating corporate immunity — they would need to undertake something akin to a clinical trial before releasing such potent products, demonstrating to independent experts that they can manage them. In turn, we could usher in a golden age marked by the benefits of AI, free of existential dread. That’s not the trajectory we’re currently on, but it remains a possibility.

