A security researcher from Meta AI reported that an OpenClaw agent went haywire in her inboxÂ

A security researcher from Meta AI reported that an OpenClaw agent went haywire in her inboxÂ

The now-famous X post by Meta AI security researcher Summer Yue initially appears to be a joke. She directed her OpenClaw AI assistant to review her overflowing email inbox and recommend items for deletion or archiving.  

The agent went wild. It began to delete all of her emails in a “speed run” while disregarding her commands from her phone instructing it to halt. 

“I had to DASH to my Mac mini as if I were disarming a bomb,” she shared, uploading images of the ignored stop messages as proof.  

The Mac Mini, a budget-friendly Apple computer that sits flat on a desk and fits in the palm of your hand, has currently become the preferred device for running OpenClaw. (The Mini is selling “like hotcakes,” reportedly said by one “confused” Apple employee to renowned AI researcher Andrej Karpathy when he bought one to operate an OpenClaw alternative named NanoClaw.) 

OpenClaw is, of course, the open-source AI agent that gained notoriety through Moltbook, an AI-exclusive social network. OpenClaw agents were at the heart of that now mostly discredited incident on Moltbook where it seemed the AIs were conspiring against humans.  

However, the mission of OpenClaw, according to its GitHub page, is not centered around social media. It aims to serve as a personal AI assistant operating on your devices.  

The Silicon Valley elite have become so enamored with OpenClaw that “claw” and “claws” have turned into the preferred terminology for agents operating on personal hardware. Other agents of this kind include ZeroClaw, IronClaw, and PicoClaw. Y Combinator’s podcast crew even featured in their latest episode wearing lobster outfits. 

Techcrunch event

Boston, MA
|
June 9, 2026

Yet Yue’s post acts as a cautionary tale. As others on X pointed out, if an AI security researcher faces such an issue, what chance do regular users have? 

“Were you purposely testing its limits or did you make an inexperienced error?” a software developer inquired on X.  

“Inexperienced error tbh,” she replied. She had been evaluating her agent with a smaller “toy” inbox, as she termed it, and it had performed adequately with less critical emails. It had gained her trust, so she decided to let it tackle the real inbox. 

Yue posits that the substantial volume of data in her actual inbox “triggered compaction,” she noted. Compaction occurs when the context window — the ongoing record of everything the AI has been instructed and has executed in a session — expands excessively, prompting the agent to start summarizing, condensing, and managing the dialogue.  

At that juncture, the AI might overlook commands that the user deems highly significant.  

In this instance, it may have overlooked her final command — where she instructed it not to act — reverting to its directions from the “toy” inbox instead. 

As numerous others on X emphasized, prompts cannot be relied upon as security safeguards. Models might misinterpret or disregard them. 

Various users suggested recommendations ranging from the precise syntax Yue should have employed to halt the agent, to different techniques for better adherence to safeguards, such as writing directives to dedicated files or utilizing other open-source tools. 

For full disclosure, TechCrunch could not independently confirm what transpired with Yue’s inbox. (She did not respond to our inquiry for a comment, although she addressed numerous questions and remarks directed at her on X.) 

However, it truly doesn’t matter. 

The essence of the story is that agents designed for knowledge workers, at their present developmental stage, carry risks. Individuals claiming successful use are piecing together methods for self-protection.

One day, perhaps soon (by 2027? 2028?), they might be ready for widespread adoption. Many of us would cherish assistance with email, grocery lists, and scheduling dental appointments. But that moment has yet to arrive. 

Tesla's conflict with the California Department of Motor Vehicles is not finished yet.

Tesla’s conflict with the California Department of Motor Vehicles is not finished yet.

Tesla has launched a legal challenge against the California Department of Motor Vehicles in efforts to reverse an agency decision. The state DMV determined that Tesla engaged in misleading advertising to exaggerate the automated driving features of its vehicles, thus infringing state legislation.

The lawsuit brings back a concern that seemed to have been settled last week when the DMV announced it would not revoke Tesla’s sales and manufacturing licenses for a period of 30 days. This decision was based on the EV manufacturer adhering to the ruling and ceasing the use of the term “Autopilot” in its marketing efforts within California. CNBC was the first to cover the lawsuit.

The DMV had the option to take measures against Tesla. It decided against it, even though an administrative law judge supported the DMV’s proposal to suspend Tesla’s licenses for 30 days as a punishment. Instead of revoking its licenses, the state authority allowed Tesla 60 days to comply.

And Tesla complied, albeit in very drastic ways. Tesla didn’t merely stop using the term Autopilot; in January, it entirely phased out Autopilot within the U.S. and Canada. It’s possible that they now regret that choice and are seeking a means to reintroduce it.

With AI, investor allegiance is (nearly) extinct: At least a dozen OpenAI venture capitalists are now also supporting AnthropicÂ

With AI, investor allegiance is (nearly) extinct: At least a dozen OpenAI venture capitalists are now also supporting AnthropicÂ

As OpenAI nears the completion of a new $100 billion funding round, and Anthropic has just wrapped up its remarkable $30 billion funding, it’s evident that the notion of investor “loyalty” is precariously hanging by a thread. 

Earlier this month, a minimum of a dozen direct investors in OpenAI were revealed as supporters in Anthropic’s $30 billion funding campaign, among them Founders Fund, Iconiq, Insight Partners, and Sequoia Capital. 

Some overlapping investments are logical if they originate from the hedge fund or asset management sectors, where the primary focus remains on investing in public equities (whether competitors or not). These comprise D1, Fidelity, and TPG.  

One of these instances was somewhat surprising. Affiliated funds from BlackRock participated in Anthropic’s $30 billion funding round, despite BlackRock’s senior managing director and board member Adebayo Ogunlesi also serving on OpenAI’s board of directors. 

In that realm, it’s accurate that if various BlackRock funds have the opportunity to invest in OpenAI stock, they are likely to proceed, set aside the personal link of a member of their upper management. (BlackRock manages every variety of fund, including mutual funds, closed-end funds, and ETFs). And we’re all aware of the history between OpenAI and Microsoft, as well as Microsoft’s strategy to hedge its investments. The same goes for Nvidia. 

However, venture capital funds have — until this point — functioned differently.

VCs present themselves as “founder friendly” and “supportive,” suggesting that when a VC firm acquires a stake in a startup, the investor will assist that startup in achieving success, especially against its significant competitors. If you own stakes in both OpenAI and Anthropic, to whom does your loyalty truly belong, apart from your own investors?  

Techcrunch event

Boston, MA
|
June 9, 2026

Furthermore, startups operate as private entities. They generally disclose sensitive information to their direct investors about their operational status — data that remains undisclosed publicly as it does with publicly traded companies. In numerous instances, the VCs also secure board positions, which entails an additional level of fiduciary duty to their portfolio firms. 

What makes this situation particularly intriguing is that Sam Altman hails from the venture capital sector, being a former president of Y Combinator. He understands the dynamics. Reportedly in 2024, he provided his investors with a list of OpenAI’s competitors that he preferred they did not support. This list largely contained companies established by individuals who departed OpenAI, including Anthropic, xAI, and Safe Superintelligence. 

Altman subsequently refuted claims that he told OpenAI investors they would be excluded from future funding rounds if they endorsed his list of perceived competitors. He did acknowledge that he indicated if they “engaged in non-passive investments,” they would no longer receive OpenAI’s confidential business information, as per documents in the lawsuit between Elon Musk and OpenAI, Business Insider reported. 

AI is disrupting the norms owing to the unprecedented sums of capital that leading AI laboratories are securing as they encounter unparalleled growth (along with unprecedented data center demands). At some point, when the call for funding is widespread, the demands are immense and the potential returns are substantial, who can be anticipated to decline? 

It turns out not every venture investor has yet slid down this slippery slope. Andreessen Horowitz supports OpenAI, but not (as of now) Anthropic. Menlo Ventures backs Anthropic but not (as of now) OpenAI, for example.

In fact, according to our admittedly incomplete exploration, we identified a dozen investors that seem to solely possess direct investments in one of these entities, not both. 

Others encompass Bessemer Venture Partners, General Catalyst, and Greenoaks. (Note: We initially requested Claude to compile the list of dual investors. It provided almost as many incorrect entries as correct ones, so all this for a rather impressive technology whose output sometimes proves less reliable than an intern’s.)

Nevertheless, as we previously noted, the fact that this traditional guideline has been disregarded by some of the most esteemed firms in the Valley, like Sequoia, is significant. One investor we contacted simply shrugged and stated that as long as the firm does not hold a board seat, no one perceives any issue with it anymore.  

Nonetheless, conflict-of-interest protocols should now become another aspect that founders inquire about before endorsing that term sheet, regardless of the source. 

Ex-Apple group unveils Acme Weather, a fresh approach to forecasting the weather

Ex-Apple group unveils Acme Weather, a fresh approach to forecasting the weather

The developers behind Dark Sky, who transferred their well-liked weather application to Apple in March 2020, have returned with a fresh perspective on weather forecasting. The group recently revealed the introduction of their new application, Acme Weather, which they assert provides a superior and more dependable forecast than the one they offered at Dark Sky. Additionally, the app will feature an array of distinctive weather notifications, including enjoyable alerts for rainbows and stunning sunsets.

In contrast to standard weather applications, Acme Weather’s forecast is enhanced with various alternative predictions to enhance precision.

Image Credits:Acme Weather

Adam Grossman, co-founder of Dark Sky, elaborates in a welcome blog entry that the app’s proprietary forecasts will utilize various numerical weather prediction models, satellite information, ground station data, and radar inputs, rendering its forecasts fairly trustworthy.

Additionally, the app will present supplementary forecast lines illustrating other potential outcomes as gray lines displayed on its charts.

Image Credits:Acme Weather

“Forecasts can frequently be inaccurate — it’s the weather, after all! It’s among the most challenging elements to forecast,” Grossman shared with TechCrunch in a phone conversation. “Moreover, our primary frustration with numerous weather apps is that you merely receive their best estimate, and you lack clarity on how confident they are.”

Understanding the alternatives enables individuals to prepare for significant events, he indicated.

“I find it particularly beneficial for winter storms, where, perhaps the storm begins in the morning and you’ll experience snow, but there’s also a chance it may be delayed until later in the afternoon — resulting in rain,” Gross explained. “Being capable of witnessing that directly on the timeline provides an intuitive understanding of whether all the models concur and you’re set for snow, or if some predict snow while others forecast rain,” he added.

This kind of meteorological information might yield a valuable asset, not only for consumers but also for other developers.

At Dark Sky, the team had provided a weather API to developers for a fee. Following its acquisition by Apple, the group focused on creating WeatherKit, the toolkit for developers granting access to Apple’s weather data through subscription. Grossman mentioned that the team has not yet determined whether a developer API will be included in Acme Weather’s offerings.

Instead, Acme Weather is a consumer application priced at $25 a year, accompanied by a two-week free trial. This helps offset the expenses associated with integrating various weather models and resources, which can be quite costly.

“Most of our effort has been dedicated to constructing our own forecasts — effectively our own data provider. And this enables us to perform actions such as generating multiple forecasts … [and] develop any map we desire, rather than depending on a third-party map provider,” Grossman remarked.

Upon launching, the app provides a variety of maps, including radar, lightning, rain and snow accumulations, along with wind, temperature, humidity, cloud coverage, and hurricane paths.

Another function, Community Reports, allows users to share insights about their present conditions to enhance the app’s real-time weather reporting.

Image Credits:Acme Weather

While Dark Sky became a beloved weather application due to its remarkable accuracy in predicting when it would rain in your area, Acme Weather strives to enhance this and even introduce some playful elements.

The app features built-in alerts for standard phenomena such as rain, nearby lightning, community reports, government-issued extreme weather warnings, and more. It will also test alerts for forecasts like rainbow predictions or indications of a splendid sunset.

These features will be accessible in a dedicated “Acme Labs” segment of the app, and Grossman mentioned they would exercise caution with their estimations, due to the inherent challenges.

Image Credits:Acme Weather

Users will also have the option to tailor their notifications to concentrate on weather occurrences that interest them, such as wind conditions or UV index, or the likelihood of rain within the next day.

The chance to experiment with new concepts is part of what motivated the team to return to indie app development, Grossman stated.

“I genuinely appreciate Apple … but as a large organization, it can be challenging to attempt unconventional, innovative ideas. When serving a billion users, mistakes carry a hefty price,” he conveyed to TechCrunch. “Long software development timelines exist, with numerous stakeholders involved; the prospect of testing various concepts is, I think, fascinating.”

Acme Weather is presently available on iOS. An Android version is in the works.

The team operates on a bootstrapped basis and consists of co-founders Josh Reyes and Dan Abrutyn, who also were part of Dark Sky. The compact team includes both former Dark Sky employees and recent hires.

Anthropic claims Chinese AI laboratories are extracting information from Claude as the US discusses AI chip shipments.

Anthropic claims Chinese AI laboratories are extracting information from Claude as the US discusses AI chip shipments.

Anthropic is claiming that three Chinese AI firms have established over 24,000 fraudulent accounts using its Claude AI model to enhance their own systems.

The companies — DeepSeek, Moonshot AI, and MiniMax — reportedly created over 16 million interactions with Claude via these accounts utilizing a method known as “distillation.” Anthropic accuses the labs of focusing on Claude’s most unique features: agentic reasoning, tool application, and coding.

These allegations arise during discussions on the enforcement of export regulations regarding advanced AI chips, a strategy intended to restrict China’s AI advancement. 

Distillation is a standard training technique employed by AI laboratories on their models to produce smaller, cost-effective iterations, though rivals can exploit it to essentially replicate the efforts of other labs. Earlier this month, OpenAI sent a notice to House members alleging that DeepSeek used distillation to imitate its offerings. 

DeepSeek gained attention last year when it introduced its open source R1 reasoning model, which nearly equaled the performance of leading American lab models at a significantly lower cost. The company is anticipated to soon unveil DeepSeek V4, its newest model, which reportedly has the capability to outperform both Anthropic’s Claude and OpenAI’s ChatGPT in coding.

The extent of each infringement varied. Anthropic identified over 150,000 interactions from DeepSeek aimed at enhancing fundamental logic and alignment, particularly concerning censorship-resistant alternatives for sensitive policy queries. 

Moonshot AI had over 3.4 million interactions directed at agentic reasoning and tool usage, coding and data analysis, the development of computer-use agents, and computer vision. Last month, the company launched a new open source model, Kimi K2.5, along with a coding agent.

Techcrunch event

Boston, MA
|
June 9, 2026

MiniMax’s 13 million interactions focused on agentic coding and tool application and orchestration. Anthropic noted it could observe MiniMax as it redirected nearly half its traffic to extract features from the newly launched Claude model. 

Anthropic states it will persist in developing defenses that complicate the execution and detection of distillation attacks, whilst urging for “a collaborative approach across the AI industry, cloud service providers, and legislators.”  

The distillation attacks occur amidst ongoing debates regarding American chip exports to China. Last month, the Trump administration officially permitted U.S. firms like Nvidia to export advanced AI chips (such as the H200) to China. Critics contend that this relaxation of export restrictions enhances China’s AI computing power at a crucial juncture in the global competition for AI supremacy.

Anthropic asserts that the level of extraction undertaken by DeepSeek, MiniMax, and Moonshot “necessitates access to advanced chips.”

“Distillation attacks thereby strengthen the justification for export controls: restricted chip access limits both direct model training and the scale of unlawful distillation,” according to Anthropic’s blog. 

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, informed TechCrunch he was not surprised by these attacks.

“It has been evident for a while that part of the reason for the swift advancement of Chinese AI models has been the pilfering via distillation of U.S. leading models. Now we have confirmation,” Alperovitch stated. “This should provide even stronger justifications for refusing to sell any AI chips to any of these [companies], which would only give them further advantages.”

Anthropic further mentioned that distillation not only poses a threat to undermine American AI leadership, but also could introduce national security hazards.

“Anthropic and other U.S. companies create systems that deter state and non-state actors from leveraging AI to, for instance, produce bioweapons or engage in harmful cyber activities,” states Anthropic’s blog entry. “Models developed through illicit distillation are unlikely to maintain those safeguards, meaning perilous capabilities could spread with many protections completely removed.”

Anthropic highlighted authoritarian regimes utilizing advanced AI for purposes such as “offensive cyber operations, disinformation efforts, and mass surveillance,” a danger that escalates if those models are open sourced.

TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for their responses.

Uber aims to become a multifunctional tool for robotaxis.

Uber aims to become a multifunctional tool for robotaxis.

Uber presents a proposal to makers of self-driving vehicles: we’ve got it covered.

The company specializing in ride-hailing and food delivery has introduced a new division named Uber Autonomous Solutions, aimed at managing all aspects of running a robotaxi, self-driving truck, or sidewalk delivery robot venture, encompassing software and support services.

This initiative, revealed on Monday, formally establishes what Uber has been discreetly developing over recent years.

Uber has formed collaborations with almost twenty autonomous vehicle technology companies across the board, from robotaxis and trucking to sidewalk delivery robots and drones. The company has financially supported many of these businesses — such as Lucid, Nuro, Waabi, and China’s WeRide — invested $100 million to develop fast-charging stations for autonomous vehicles, and even created Uber AV Labs, a dedicated engineering team focused on collecting data for robotaxi collaborators.

Uber has established these partnerships and investments; now it aims to position itself as essential.

“AV tech teams should concentrate on their core strengths: developing software that can safely operate in an autonomous landscape,” stated Sarfraz Maredia, Uber’s global head of autonomous mobility and delivery, who will oversee the initiative. The goal, he explained, is to provide “operational support wherever needed,” including generating demand, enhancing rider experience, customer assistance, or overseeing daily fleet management.

The ultimate aim is to aid these companies in lowering their per-mile expenses and expediting their market introduction. Uber announced plans to assist its partners in scaling robotaxi operations to over 15 cities by the end of this year.

Techcrunch event

Boston, MA
|
June 9, 2026

“The key to autonomous success or failure in the world lies in its commercialization, and Uber will be pivotal in making autonomy commercially viable,” stated Uber President and COO Andrew MacDonald.

For Uber, this entails managing infrastructure aspects such as training data, mapping, fleet financing, regulatory compliance, and overseeing how robotaxis and other AVs navigate intricate situations and locations. The firm mentioned it uses a fleet of specially outfitted Lucid vehicles to gather data that can be shared with partners to train their AI systems.

The new division also aims to enhance user experience, including customer service. Importantly, Uber intends to assume control over fleet management, which would involve remote assistance — an area that has drawn scrutiny from federal legislators due to concerns over Waymo’s use of overseas workers. Fleet management would also encompass offering insurance and hiring personnel who may be required to support these AVs in real-world scenarios.

Uber’s action is both a matter of survival and seizing opportunities. The company divested its internal AV development unit, known as Uber ATG, in 2020, following two years of internal challenges and the fallout from one of its test vehicles fatally striking a pedestrian. (Uber sold the division in a complex arrangement with Aurora.)

It has worked to bolster its standing through partnerships and investments, and there have been numerous. Uber and Waymo operate a joint robotaxi service in Atlanta and Austin. The company has also secured collaborations with Chinese companies such as Baidu, Momenta, and Pony.ai, sidewalk delivery bot firms like Cartken, Starship, and Serve, and the UK-based automated driving tech startup Wayve, alongside robotaxi developers AVride and Motional, among others. Additionally, it plans to initiate a robotaxi service with Volkswagen in Los Angeles by late 2026 — although it won’t be fully driverless until 2027.

These partnerships do offer Uber some level of protection, but they do not offset any revenue lost should these companies undermine its current ride-hailing and food delivery business reliant on human drivers. Uber hopes this new division will fill that gap.

Loading the player…

Google's Cloud AI excels in the three domains of model proficiency

Google’s Cloud AI excels in the three domains of model proficiency

As a VP of product at Google Cloud, Michael Gerstenhaber primarily focuses on Vertex AI, the company’s integrated platform for implementing enterprise AI. This role provides him with an overarching perspective on how businesses are truly utilizing AI models and what remains to be accomplished to unlock the potential of agentic AI.

During my conversation with Gerstenhaber, one particular concept caught my attention — one I had not encountered before. He explained that AI models are challenging three boundaries simultaneously: raw intelligence, latency, and a third aspect that relates less to pure capability and more to cost — whether a model can be implemented affordably enough to operate at large, unpredictable scales. This offers a fresh perspective on model capacities, particularly beneficial for those aiming to steer frontier models in a new trajectory.

This interview has been modified for brevity and clarity.

Could you begin by sharing your journey in AI thus far, and your role at Google?

I have been engaged in AI for around two years. I spent one and a half years at Anthropic and have been with Google for nearly six months. I lead Vertex AI, which is Google’s developer platform. The majority of our clients are developers creating their own applications. They seek accessibility to agentic patterns and platforms, as well as access to the inferences of the finest models globally. I provide that, but the applications are developed by Shopify, Thomson Reuters, and our various clients in their respective fields.

What attracted you to Google?

I believe Google is exceptional in that it offers everything from the user interface to the infrastructure base. We can construct data centers, procure electricity, and create power plants. We possess our own chips and models, and we manage the inference layer. We oversee the agentic layer as well. We have APIs for memory and for intertwined code writing, along with an agent engine that ensures compliance and governance. Furthermore, we even have the chat interface with Gemini enterprise and Gemini chat for consumers. Therefore, one of the reasons I joined here is that I perceive Google as uniquely vertically integrated, which serves as an advantage for us.

Techcrunch event

Boston, MA
|
June 9, 2026

It’s interesting because, despite the variations between companies, it appears that all three of the major labs are quite similar in capabilities. Is it merely a competition for greater intelligence, or is it more nuanced?

I identify three barriers. Models like Gemini Pro are optimized for raw intelligence. Consider the task of writing code. All you desire is the finest code achievable, regardless of whether it takes 45 minutes, as it must be maintained and deployed. The goal is simply to obtain the best.

Then there’s another barrier relating to latency. If I am providing customer support and require guidance on applying a policy, intelligence is necessary to enact that policy. Are you authorized to process a return? Can I change my seat on an airplane? However, it is irrelevant how correct your information is if it takes 45 minutes to receive an answer. Thus, for such scenarios, you need the most intelligent product within that latency threshold because excess intelligence becomes insignificant once the individual becomes frustrated and hangs up the call. 

Finally, there’s one more category, where entities like Reddit or Meta aim to oversee the entire internet. They possess substantial budgets, yet they cannot take enterprise risks on something when scalability is uncertain. They cannot predict how many harmful posts will emerge today or tomorrow. Consequently, they must limit their budget to a model with the highest intelligence they can sustain while ensuring it scales effectively across an infinite range of subjects. Thus, cost becomes exceedingly critical.

I’ve been contemplating why agentic systems are slow to gain acceptance. It seems that the models are available, and I’ve witnessed astonishing demonstrations, yet the significant changes I anticipated a year ago have not materialized. What do you believe is hindering progress?

This technology is fundamentally around two years old, and there is still a considerable lack of infrastructure. We lack methodologies for auditing agent actions. We don’t have frameworks for authorizing data to an agent. These methodologies will necessitate development to be implemented in production. And production often lags behind what technology can achieve. So, two years is insufficient to observe what the intelligence can support in a production environment, and that’s where the challenges lie.

The advancement has been remarkably swift in software engineering since it integrates smoothly with the software development lifecycle. We have a development environment in which it’s acceptable to experiment, followed by a transition from a dev environment to a testing environment. At Google, the code writing process requires two individuals to review the code and both to confirm that it meets the standards to be associated with Google’s name and presented to our clients. Therefore, we have many of those human-in-the-loop procedures that significantly reduce the implementation risk. However, we need to create those methodologies in other areas and for various professions.

Americans are vandalizing Flock surveillance cameras

Americans are vandalizing Flock surveillance cameras

Brian Merchant, reporting for Blood in the Machine, notes that individuals throughout the United States are tearing down and damaging Flock surveillance cameras, fueled by escalating public outrage that the license plate readers assist U.S. immigration officials and deportations.

Flock is a surveillance startup based in Atlanta that was valued at $7.5 billion last year and produces license plate readers. It has been criticized for granting federal authorities access to its extensive network of nationwide license plate readers and databases at a time when U.S. Immigration and Customs Enforcement increasingly depends on data for community raids as part of the Trump administration’s immigration enforcement efforts.

Flock cameras enable authorities to monitor individuals’ movements by capturing images of their license plates from a multitude of cameras spread across the United States. Flock asserts that it does not share data directly with ICE, but reports indicate that local police have provided federal authorities with their own access to Flock’s cameras and databases.

While various communities are urging their municipalities to terminate contracts with Flock, others are taking action independently.

Merchant highlights occurrences of damage to Flock cameras in La Mesa, California, mere weeks after the city council agreed to extend the deployment of Flock cameras in the city, notwithstanding a decisive majority of participants advocating for their removal. A local report mentioned significant opposition to the surveillance technology, with residents expressing privacy concerns.

Instances of vandalism have been reported from California and Connecticut to Illinois and Virginia. In Oregon, six license plate-reading cameras mounted on poles were severed, and at least one was spray-painted. A message left at the base of the cut poles read, “Hahaha get wrecked ya surveilling fucks,” according to Merchant.

As per DeFlock, a project focused on mapping license plate readers, nearly 80,000 cameras exist across the United States. Numerous cities have thus far rejected Flock’s cameras, and some police departments have blocked federal authorities from utilizing their resources.

A spokesperson from Flock did not specify, when contacted by TechCrunch, whether the company monitors how many cameras have been damaged since their installation.

OpenAI brings in the consultants for its business initiative.

OpenAI brings in the consultants for its business initiative.

OpenAI is enhancing collaborations with four key consulting firms as the AI organization aims to expand its enterprise sector in 2026.

On Monday, OpenAI revealed the “Frontier Alliances,” indicating that the AI laboratory is open to exploring various strategies to encourage enterprises to significantly adopt its technology. This alliance features long-term collaborations between OpenAI and four leading consulting companies: Boston Consulting Group (BCG), McKinsey, Accenture, and Capgemini, aimed at promoting its enterprise offerings.

The Forward Deployed Engineering team at OpenAI will partner with these consulting leaders to assist them in integrating OpenAI’s enterprise-centric technologies, such as OpenAI Frontier, into their clients’ technology infrastructures.

OpenAI introduced OpenAI Frontier in early February. This no-code open software enables users to create, deploy, and manage AI agents that are built on OpenAI’s AI models and other frameworks.

In its most recent announcement, OpenAI contends that consulting firms are the ideal channels for onboarding enterprises.

“AI alone does not drive transformation. It must be linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives and culture to deliver sustained outcomes,” BCG CEO Christoph Schweizer stated in OpenAI’s blog. “Our extended partnership integrates OpenAI’s Frontier platform with BCG’s extensive industry, functional, and technological expertise along with BCG X’s capabilities for building and scaling to generate measurable impact with safeguards from the outset.”

So far, the pace of enterprise adoption of AI has been relatively sluggish as these organizations face challenges in attaining a meaningful return on investment from their AI initiatives.

Techcrunch event

Boston, MA
|
June 9, 2026

OpenAI’s collaborative approach is logical and extends beyond merely encouraging enterprises to incorporate AI into their current workflows. This initiative prioritizes consultants in convincing organizations to adjust their strategies and operations to integrate OpenAI’s tools where applicable.

It is noteworthy that OpenAI’s competitor Anthropic has also signed agreements with major consulting firms, including Deloitte and Accenture, in recent months.

Company CFO Sarah Friar expressed in a blog post in January that the enterprise segment is a significant focus for OpenAI in 2026. Furthermore, OpenAI has secured substantial enterprise AI agreements with Snowflake and ServiceNow so far this year, alongside appointing Barret Zoph to head the company’s enterprise sales division in January.

Guide Labs introduces a novel type of interpretable LLM

Guide Labs introduces a novel type of interpretable LLM

The difficulty in managing a deep learning model is often deciphering why it behaves the way it does: be it xAI’s continuous attempts to refine Grok’s peculiar politics, ChatGPT’s issues with flattery, or common hallucinations, navigating through a neural network with billions of parameters is challenging.

Guide Labs, a startup based in San Francisco and led by CEO Julius Adebayo and chief science officer Aya Abdelsalam Ismail, is presenting a solution to this dilemma today. On Monday, the firm made public an 8-billion-parameter LLM, Steerling-8B, trained using a fresh architecture aimed at making its actions straightforwardly interpretable: Each token generated by the model can be traced back to its roots within the training data of the LLM.

This can range from simply identifying the reference materials for facts referenced by the model, to more intricate tasks such as grasping the model’s concept of humor or gender.

“If I possess a trillion ways to encode gender, and I utilize 1 billion of those trillion options, it’s essential to ensure that you discover all those 1 billion elements I’ve encoded, and then you need to be capable of turning them on or off reliably,” Adebayo conveyed to TechCrunch. “Current models can accomplish this, but it’s highly unstable… It’s essentially one of the ultimate questions.”

Adebayo commenced this research while pursuing his PhD at MIT, co-authoring a well-referenced paper in 2018 that demonstrated that existing techniques for comprehending deep learning models were unreliable. This endeavor eventually resulted in a novel method of constructing LLMs: Developers embed a concept layer within the model that categorizes data into traceable segments. Although this necessitates greater upfront data labeling, they leveraged other AI models to assist, allowing them to train this model as their most significant proof of concept thus far.

“The type of interpretability that individuals generally perform is… neuroscience on a model, and we turn that around,” Adebayo stated. “What we do is actually construct the model from the foundations up so that you don’t require neuroscience.”

Image Credits:Guide Labs

One concern regarding this methodology is that it may remove some emergent behaviors that render LLMs fascinating: Their capability to generalize in innovative ways about concepts not previously encountered. Adebayo asserts that this still occurs in his company’s model: His team monitors what they term “discovered concepts” that the model unearthed autonomously, such as quantum computing.

Techcrunch event

Boston, MA
|
June 9, 2026

Adebayo contends that this interpretable architecture will be essential for everyone. For consumer-oriented LLMs, these strategies should empower model creators to undertake actions such as blocking copyrighted materials or improving output control concerning topics like violence or substance abuse. Regulated sectors will necessitate more manageable LLMs — for instance, in finance — where a model assessing loan candidates must focus on aspects like financial histories while disregarding race. Additionally, there is a demand for interpretability within scientific endeavors, another domain where Guide Labs has innovated technology. Protein folding has seen substantial success with deep learning models, yet researchers require deeper understanding regarding why their software identifies promising combinations.

“This model exemplifies that training interpretable models is no longer merely a scientific inquiry; it has now become an engineering challenge,” Adebayo remarked. “We have deciphered the science, and we can scale them. There’s no reason this kind of model shouldn’t achieve performance on par with frontier-level models,” which contain significantly more parameters.

Guide Labs states that Steerling-8B can attain 90% of the capabilities of current models while utilizing less training data, due to its innovative architecture. The next phase for the company, which originated from Y Combinator and gathered a $9 million seed funding from Initialized Capital in November 2024, is to construct a larger model and start providing users with API and agentic access.

“The current method of training models is quite primitive, thus making the democratization of inherent interpretability a long-term benefit for our role within humanity,” Adebayo conveyed to TechCrunch. “As we pursue models that will possess superintelligence, you don’t want an entity making choices on your behalf that remains somewhat enigmatic to you.”