Silicon Valley spooks the AI safety advocates

This week, Silicon Valley figures, notably White House AI & Crypto Czar David Sacks and OpenAI chief strategy officer Jason Kwon, sparked debate online over AI safety advocacy groups. In separate remarks, they claimed that some AI safety advocates are less altruistic than they appear and may be serving their own interests or those of wealthy backers.

AI safety organizations that spoke with TechCrunch described the allegations from Sacks and OpenAI as the latest in a series of Silicon Valley attempts to intimidate its critics. In 2024, some venture capital firms spread false claims that SB 1047, a California AI safety bill, would send startup founders to jail. The Brookings Institution called that falsehood one of many “misrepresentations” of the bill, which Governor Gavin Newsom ultimately vetoed.

Whether or not Sacks and OpenAI intended to intimidate their critics, their actions have unsettled many AI safety advocates. Several nonprofit leaders contacted by TechCrunch last week requested anonymity to shield their organizations from potential retaliation.

The situation highlights the increasing friction in Silicon Valley between the responsible development of AI and the push for it to become a dominant consumer product — a subject that my colleagues Kirsten Korosec, Anthony Ha, and I delve into on this week’s Equity podcast. Additionally, we discuss a new AI safety law passed in California aimed at regulating chatbots, along with OpenAI’s stance on erotica in ChatGPT.

On Tuesday, Sacks posted on X claiming that Anthropic, which has raised alarms about AI’s potential to cause unemployment, cyberattacks, and catastrophic societal harms, is simply fearmongering to get legislation passed that benefits itself and buries smaller startups in paperwork. Anthropic was the only major AI lab to endorse California’s Senate Bill 53 (SB 53), which imposes safety reporting requirements on large AI companies and was signed into law last month.

Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his fears around AI, which Clark had delivered as a speech at the Curve AI safety conference in Berkeley a few weeks earlier. To those in the audience, it came across as a sincere account of a technologist’s reservations about his own creations, but Sacks saw it differently.

Sacks argued that Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve antagonizing the federal government. In a follow-up post on X, Sacks noted that Anthropic has consistently positioned itself as an adversary of the Trump administration.

This week, OpenAI chief strategy officer Jason Kwon posted on X to explain why the company has been sending subpoenas to AI safety nonprofits such as Encode, which advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI, alleging that the ChatGPT maker has strayed from its nonprofit mission, OpenAI found it suspicious that several organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits publicly criticized the restructuring.

“This raised transparency questions about who was providing them support and if there was any collaboration,” Kwon remarked.

NBC News recently reported that OpenAI sent broad subpoenas to Encode and six other nonprofits that have criticized the company, demanding their communications related to two of OpenAI’s biggest rivals, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

One prominent AI safety leader told TechCrunch that a growing split has emerged between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers regularly publish reports on the risks of AI systems, its policy team lobbied against SB 53, saying it would prefer uniform rules at the federal level.

Joshua Achiam, OpenAI’s head of mission alignment, commented on the company’s decision to send subpoenas to nonprofits in a post on X this week.

“At what is possibly a risk to my whole career I will say: this doesn’t seem great,” Achiam said.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. He argues that isn’t the case, noting that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.

“On OpenAI’s part, this is meant to silence critics, intimidate them, and deter other nonprofits from following suit,” said Steinhauser. “For Sacks, I believe he’s worried that [the AI safety] movement is expanding and people want to demand accountability from these firms.”

Sriram Krishnan, the White House senior policy advisor for AI and a former a16z general partner, weighed in this week with a social media post of his own, arguing that AI safety advocates are out of touch. He urged AI safety organizations to talk to “individuals in the real world utilizing, selling, and adopting AI in their households and organizations.”

A recent Pew study found that about half of Americans are more concerned than excited about AI, though it’s unclear what worries them most. Another recent survey went into more detail, finding that American voters care more about job losses and deepfakes than about the catastrophic risks from AI that the AI safety movement largely focuses on.

Addressing these safety concerns could come at the expense of the AI industry’s rapid growth, a tradeoff that alarms many in Silicon Valley. With AI investment propping up much of America’s economy, the fear of overregulation is understandable.

But after years of largely unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to push back against safety-focused groups may be a sign that the movement’s influence is starting to register.

Senate Republicans created a deepfake of Chuck Schumer, and X hasn’t removed it.

Republican senators shared a deepfake video of Senate Minority Leader Chuck Schumer, designed to make it look like Democrats are celebrating the ongoing government shutdown, now in its 16th day.

In the deepfake, an AI-generated Schumer says, “every day gets better for us,” a real quote taken out of context from a Punchbowl News article. In the original story, Schumer was discussing the Democrats’ healthcare-focused strategy during the shutdown, insisting they would not back down in the face of Republicans’ threats and “bambooz[ling].”

The shutdown stems from the inability of Democrats and Republicans to reach consensus on a bill to fund the government through October and beyond. Democrats aim to preserve tax credits that would reduce health insurance costs for millions of Americans, reverse Trump’s cuts to Medicaid, and prevent reductions to government health organizations.

The video was shared on Friday via the Senate Republicans’ X account. X’s own rules ban users from “deceptively shar[ing] synthetic or manipulated media that are likely to cause harm,” including media that may “mislead people” or “create significant confusion on public issues.”

Enforcement options include removing the content, adding a warning label, or reducing its visibility. As of this writing, X has neither removed the deepfake nor added a warning label, though the video does carry a watermark indicating it was made with AI.

The Schumer video isn’t the first time X has let deepfakes of politicians stay up on the platform. In 2024, X owner Elon Musk shared a manipulated clip of then vice president Kamala Harris ahead of the election, sparking debate about misleading voters.

TechCrunch has reached out to X for a statement.

As many as 28 states have passed laws restricting deepfakes of political figures, particularly around campaigns and elections, though most stop short of banning them outright when proper disclosures are included. California, Minnesota, and Texas have banned deepfakes intended to influence elections, mislead voters, or harm candidates.

The post comes just weeks after President Donald Trump shared deepfakes on Truth Social of Schumer and House Minority Leader Hakeem Jeffries making false claims about immigration and voter fraud.

Responding to criticism that the video was dishonest and unethical, Joanna Rodriguez, communications director for the National Republican Senatorial Committee, said: “AI is here and not going anywhere. Adapt & win or pearl clutch & lose.”

Your AI tools run on fracked gas and bulldozed Texas land

The AI revolution is breathing new life into fracking, a surprising turn for an industry that climate activists pilloried during its boom in the early 2010s for contaminating water supplies, triggering earthquakes, and prolonging dependence on fossil fuels.

AI companies are building enormous data centers near major gas-producing regions, often generating their own power by burning fossil fuels on site. The trend, frequently overshadowed by talk of AI’s promise for healthcare and climate solutions, could transform the communities that host these operations and leave them facing hard tradeoffs.

Consider the latest example. This week, the Wall Street Journal reported that Poolside, an AI coding assistant startup, is building a data center complex on more than 500 acres in West Texas, about 300 miles from Dallas and roughly two-thirds the size of Central Park. The complex will generate its own power by pulling natural gas from the Permian Basin, the most productive oil and gas field in the US, where hydraulic fracturing isn’t just common but essentially the only game in town.

The project, called Horizon, is set to deliver two gigawatts of power for computing, roughly the total electrical capacity of the Hoover Dam, except that instead of harnessing the Colorado River, it will burn fracked gas. Poolside is partnering with CoreWeave, a cloud computing firm that rents out access to Nvidia AI chips and is supplying more than 40,000 of them. The Journal calls it an “energy Wild West,” which seems apt.

Poolside is hardly alone. Nearly every major player in the AI industry is pursuing a similar strategy. Last month, OpenAI CEO Sam Altman visited his company’s flagship Stargate data center in Abilene, Texas, roughly 200 miles from the Permian Basin, where he was candid: “We’re burning gas to run this data center.”

The complex requires about 900 megawatts of power across eight buildings and includes a new gas-fired power plant using turbines similar to those found on naval vessels, according to the Associated Press. The companies say the plant provides only backup power, with most electricity drawn from the local grid, which blends natural gas generation with West Texas’s extensive wind and solar farms.

Residents living near these developments are not exactly reassured. Arlene Mendler lives directly across from Stargate. In an interview with the AP, she lamented that no one consulted her before bulldozers cleared a large swath of mesquite shrubland for the construction now underway.

“It has completely altered our way of life,” Mendler told the AP. She moved to the area 33 years ago in search of “peace, quiet, tranquility”; now construction noise fills the background and bright lights have spoiled her night skies.

Then there’s the water. In drought-prone West Texas, locals are especially worried about what new data centers will do to the water supply. At the time of Altman’s visit, the city’s reservoirs were at about half capacity and residents were limited to outdoor watering twice a week. Oracle says each of the eight buildings will consume only 12,000 gallons per year once its closed-loop cooling system is filled with an initial one million gallons. But Shaolei Ren, a professor at the University of California, Riverside, who studies the environmental impact of AI, told the AP that the claim is misleading: closed-loop systems draw more electricity, which means more indirect water use at the power plants supplying it.

Meta is on a similar path. In Richland Parish, the poorest region of Louisiana, the company plans to build a $10 billion data center the size of 1,700 football fields that will demand two gigawatts of power for computing alone. To supply it, the utility Entergy will spend $3.2 billion on three large natural-gas plants with 2.3 gigawatts of capacity, burning gas fracked from the nearby Haynesville Shale. Louisiana residents, like those in Abilene, are not thrilled to be surrounded by constant construction.

(Meta is also expanding in Texas, but in a different part of the state. This week, the company announced a $1.5 billion data center in El Paso, close to the New Mexico border, with one gigawatt of capacity anticipated to be operational by 2028. El Paso is not adjacent to the Permian Basin, and Meta claims the facility will be powered by 100% clean and renewable energy. One point for Meta.)

Even Elon Musk’s xAI, whose facility in Memphis has stirred significant controversy this year, has links to fracking. Memphis Light, Gas and Water — which currently supplies power to xAI but will ultimately own the substations xAI is establishing — procures natural gas on the spot market and transports it to Memphis through two companies: Texas Gas Transmission Corp. and Trunkline Gas Company.

Texas Gas Transmission operates a bidirectional pipeline that carries natural gas from Gulf Coast supply areas and several major fracked shale formations through Arkansas, Mississippi, Kentucky, and Tennessee. Trunkline Gas Company, Memphis’s other supplier, also moves natural gas from fracked sources.

If you’re questioning why AI companies are taking this approach, they’ll tell you it’s not solely about electricity; it’s also about outpacing China.

Chris Lehane, a veteran political operative who joined OpenAI as vice president of global affairs in 2024, made that case last week in an onstage interview with TechCrunch.

“We believe that in the near future, at least in the U.S., and indeed globally, we will need to generate around a gigawatt of energy each week,” Lehane said. He pointed to China’s massive energy buildout: 450 gigawatts of new capacity and 33 nuclear facilities built in the past year alone.

When TechCrunch inquired about Stargate’s choice to build in economically challenged locations like Abilene or Lordstown, Ohio, where further gas-powered plants are slated, Lehane returned to the issue of geopolitics. “If we [as a nation] manage this properly, we have an opportunity to re-industrialize nations, bring manufacturing back, and also transition our energy systems to ensure the necessary modernization occurs.”

The Trump administration is firmly behind this agenda. An executive order issued in July 2025 fast-tracks gas-powered AI data centers by streamlining environmental permitting, offering financial incentives, and opening federal lands to projects that run on natural gas, coal, or nuclear power, while explicitly excluding renewables from support.

For now, most AI users remain largely oblivious to the carbon footprint of their impressive new tools. They care more about features like Sora 2, OpenAI’s hyperrealistic video generator that consumes far more energy than a simple chatbot, than about where the electricity comes from.

The companies are banking on that inattention. They’ve framed natural gas as the pragmatic, necessary answer to AI’s soaring energy demands. But a fossil-fuel buildout this fast and this large deserves far more scrutiny than it’s getting.

If this is a bubble, it won’t be pretty. The AI industry has become a circular web of dependencies: OpenAI relies on Microsoft, which depends on Nvidia, which needs Broadcom, which requires Oracle, which depends on data center operators, who in turn need OpenAI. They are all buying from and selling to one another in a self-reinforcing loop. The Financial Times noted this week that if that foundation falters, a great deal of expensive infrastructure, digital and gas-burning alike, will be left stranded.

OpenAI’s ability to fulfill its obligations is “increasingly a concern for the broader economy,” according to the outlet.

A crucial and largely overlooked question is whether all this new capacity is even needed. A Duke University study found that utilities typically use only 53% of their available capacity averaged over the year, which suggests substantial headroom to absorb new demand without building more power plants, as MIT Technology Review noted earlier this year.

The Duke researchers estimate that if data centers cut their electricity use by roughly half for just a few hours during peak demand periods each year, utilities could accommodate an additional 76 gigawatts of new load. This would effectively satisfy the projected 65 gigawatts of demand from data centers by 2029.
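To see why that arithmetic is compelling, here is a minimal back-of-the-envelope sketch in Python. The 53% utilization figure and the 76- and 65-gigawatt estimates come from the reporting above; the 50% curtailment depth and the assumed 100 flex hours per year are illustrative placeholders, not the Duke study’s actual inputs. The takeaway: the estimated headroom exceeds projected demand, and the energy a data center would give up by flexing amounts to well under 1% of its annual consumption.

```python
# Back-of-the-envelope check on the Duke flexibility argument.
# The 53% utilization figure and the 76 GW / 65 GW estimates come
# from the reporting above; the curtailment depth and the number of
# flex hours per year are illustrative assumptions, not Duke's inputs.

AVG_UTILIZATION = 0.53      # average year-round use of utility capacity (Duke study)
HEADROOM_GW = 76            # new load utilities could absorb with flexible data centers
PROJECTED_DEMAND_GW = 65    # projected data-center demand by 2029 (cited above)
CURTAILMENT_DEPTH = 0.5     # data centers cut power use "roughly in half" at peak
FLEX_HOURS_PER_YEAR = 100   # assumed total of "a few hours" of peak curtailment

# On average, nearly half of utility capacity sits idle.
print(f"Average idle utility capacity: {1 - AVG_UTILIZATION:.0%}")

# The estimated headroom more than covers projected data-center demand.
print(f"Headroom vs. projected demand: {HEADROOM_GW / PROJECTED_DEMAND_GW:.0%}")

# What curtailment costs a hypothetical 1 GW data center in energy terms:
forgone_gwh = 1 * CURTAILMENT_DEPTH * FLEX_HOURS_PER_YEAR  # energy given up (GWh)
annual_gwh = 1 * 8760                                      # energy if run flat out (GWh)
print(f"Energy forgone by flexing: {forgone_gwh / annual_gwh:.1%} of annual use")
```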

Such flexibility would let companies bring AI data centers online faster. More importantly, it could ease the rush to build natural gas plants, buying utilities time to develop cleaner alternatives.

But again, that would mean ceding ground to an authoritarian regime, in the view of Lehane and many others in the field. So the current fracking boom looks likely to saddle regions with more fossil-fuel plants and leave residents paying rising electricity bills to cover today’s spending, long after the tech companies’ contracts have lapsed.

For example, Meta has committed to covering Entergy’s expenses for the new Louisiana power sources for 15 years. Similarly, Poolside’s agreement with CoreWeave spans 15 years. The implications for customers once these contracts expire remain uncertain.

That could eventually change. A great deal of private money is flowing into small modular reactors and solar projects on the assumption that these cleaner sources will one day power a bigger share of these data centers. Fusion startups like Helion and Commonwealth Fusion Systems have also raised substantial funding from AI leaders, including Nvidia and Altman.

The optimism isn’t confined to private investors. It has spilled into the public markets, where several energy firms that have gone public without generating revenue carry what look like very forward-looking valuations, premised on the belief that they will one day power these data centers.

In the meantime, which could still mean decades, the most pressing concern is that the people who will ultimately bear the financial and environmental costs never asked for any of this in the first place.