Silicon Valley is rattling AI safety advocates

This week, prominent Silicon Valley figures, including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, stirred up controversy online over AI safety advocacy groups. In separate remarks, they claimed that some AI safety advocates are not as altruistic as they appear and may be acting in their own interests or those of wealthy backers.

AI safety organizations that spoke with TechCrunch described the allegations from Sacks and OpenAI as Silicon Valley's latest attempt to intimidate its critics, and far from the first. In 2024, some venture capital firms spread false claims that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution called that claim one of many "misrepresentations" about the bill, which Governor Gavin Newsom ultimately vetoed.

Whether or not Sacks and OpenAI intended to intimidate their critics, their actions have unsettled many AI safety advocates. Several nonprofit leaders who spoke with TechCrunch last week asked to remain anonymous to shield their organizations from possible retaliation.

The situation highlights the increasing friction in Silicon Valley between the responsible development of AI and the push for it to become a dominant consumer product — a subject that my colleagues Kirsten Korosec, Anthony Ha, and I delve into on this week’s Equity podcast. Additionally, we discuss a new AI safety law passed in California aimed at regulating chatbots, along with OpenAI’s stance on erotica in ChatGPT.

On Tuesday, Sacks posted on X claiming that Anthropic, which has raised concerns about AI's role in unemployment, cyberattacks, and catastrophic societal harms, is simply fearmongering to push through legislation that benefits itself and buries smaller startups in paperwork. Anthropic was the only major AI lab to back California's Senate Bill 53 (SB 53), which imposes safety reporting requirements on large AI companies and was signed into law last month.

Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his fears of AI. Clark had delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. To those in the audience, it read as a sincere account of a technologist's reservations about his own creations, but Sacks saw it differently.

Sacks contended that Anthropic is executing a “sophisticated regulatory capture strategy,” although it’s notable that a genuinely sophisticated approach likely wouldn’t involve antagonizing the federal government. In a subsequent post on X, Sacks pointed out that Anthropic has consistently positioned “itself as an adversary to the Trump administration.”


This week, OpenAI's Kwon posted on X explaining why the company is sending subpoenas to AI safety nonprofits such as Encode, which advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI, alleging that the ChatGPT maker has strayed from its nonprofit mission, OpenAI found it suspicious that several organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk's lawsuit, and other nonprofits publicly criticized OpenAI's restructuring.

“This raised transparency questions about who was providing them support and if there was any collaboration,” Kwon remarked.

NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that have criticized the company, asking for their communications related to two of OpenAI's biggest adversaries, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

A prominent AI safety leader told TechCrunch there is a growing divide between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers regularly publish reports on the risks of AI systems, its policy team lobbied against SB 53, saying it would prefer uniform rules at the federal level.

Joshua Achiam, OpenAI’s head of mission alignment, commented on the company’s decision to send subpoenas to nonprofits in a post on X this week.

"At what is possibly a risk to my whole career, I will say: this doesn't seem great," Achiam said.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. But he argues this is not the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.

“On OpenAI’s part, this is meant to silence critics, intimidate them, and deter other nonprofits from following suit,” said Steinhauser. “For Sacks, I believe he’s worried that [the AI safety] movement is expanding and people want to demand accountability from these firms.”

Sriram Krishnan, the senior policy advisor for AI at the White House and a former a16z general partner, contributed to the dialogue this week with a social media post of his own, asserting that AI safety advocates are disconnected from reality. He encouraged AI safety groups to engage with “individuals in the real world utilizing, selling, and adopting AI in their households and organizations.”

A recent Pew study found that about half of Americans are more concerned than excited about AI, but it's unclear exactly what worries them. Another recent survey went into more detail and found that American voters care more about job losses and deepfakes than about the catastrophic risks from AI that the AI safety movement largely focuses on.

Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a trade-off that worries many in Silicon Valley. With AI investment propping up much of America's economy, the fear of overregulation is understandable.

But after years of largely unchecked AI development, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to push back against safety-focused groups may be a sign that those groups are starting to have an effect.
