Ex-Apple group unveils Acme Weather, a fresh approach to forecasting the weather

The developers behind Dark Sky, who sold their popular weather app to Apple in March 2020, are back with a fresh take on weather forecasting. The team recently announced its new app, Acme Weather, which it says delivers a better and more dependable forecast than the one it offered at Dark Sky. The app will also include an array of distinctive weather notifications, including fun alerts for rainbows and stunning sunsets.

Unlike standard weather apps, Acme Weather supplements its forecast with a range of alternative predictions to improve accuracy.

Adam Grossman, co-founder of Dark Sky, elaborates in a welcome blog entry that the app’s proprietary forecasts will utilize various numerical weather prediction models, satellite information, ground station data, and radar inputs, rendering its forecasts fairly trustworthy.

Additionally, the app will present supplementary forecast lines illustrating other potential outcomes as gray lines displayed on its charts.
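
The chart described here is, in effect, an ensemble view: one primary line with the alternatives kept visible. As a rough illustration of the idea (the model names and numbers below are invented, not Acme Weather's actual data or method):

```python
from statistics import median

def ensemble_forecast(model_runs):
    """Blend hourly temperature forecasts from several weather models.

    Returns the consensus (median) per hour plus the min/max spread,
    which a chart could draw as gray alternative lines around the
    primary forecast line.
    """
    hours = len(next(iter(model_runs.values())))
    consensus = [median(run[h] for run in model_runs.values()) for h in range(hours)]
    spread = [(min(run[h] for run in model_runs.values()),
               max(run[h] for run in model_runs.values())) for h in range(hours)]
    return consensus, spread

# Hypothetical 3-hour forecasts from three weather models (degrees F)
runs = {"model_a": [31, 33, 35], "model_b": [30, 34, 37], "model_c": [32, 32, 36]}
consensus, spread = ensemble_forecast(runs)
print(consensus)   # [31, 33, 36]
print(spread[0])   # (30, 32)
```

A charting layer could then draw the consensus as the bold line and each run inside the spread as a gray alternative.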

“Forecasts can frequently be inaccurate — it’s the weather, after all! It’s among the most challenging elements to forecast,” Grossman shared with TechCrunch in a phone conversation. “Moreover, our primary frustration with numerous weather apps is that you merely receive their best estimate, and you lack clarity on how confident they are.”

Understanding the alternatives enables individuals to prepare for significant events, he indicated.

“I find it particularly beneficial for winter storms, where, perhaps the storm begins in the morning and you’ll experience snow, but there’s also a chance it may be delayed until later in the afternoon — resulting in rain,” Grossman explained. “Being capable of witnessing that directly on the timeline provides an intuitive understanding of whether all the models concur and you’re set for snow, or if some predict snow while others forecast rain,” he added.

This kind of meteorological information might yield a valuable asset, not only for consumers but also for other developers.

At Dark Sky, the team had offered a paid weather API for developers. After the Apple acquisition, the group focused on building WeatherKit, Apple’s subscription-based toolkit that gives developers access to its weather data. Grossman said the team has not yet decided whether Acme Weather will offer a developer API.

Instead, Acme Weather is a consumer application priced at $25 a year, with a two-week free trial. The subscription helps offset the cost of integrating the various weather models and data sources, which is substantial.

“Most of our effort has been dedicated to constructing our own forecasts — effectively our own data provider. And this enables us to perform actions such as generating multiple forecasts … [and] develop any map we desire, rather than depending on a third-party map provider,” Grossman remarked.

Upon launching, the app provides a variety of maps, including radar, lightning, rain and snow accumulations, along with wind, temperature, humidity, cloud coverage, and hurricane paths.

Another function, Community Reports, allows users to share insights about their present conditions to enhance the app’s real-time weather reporting.

While Dark Sky became a beloved weather application due to its remarkable accuracy in predicting when it would rain in your area, Acme Weather strives to enhance this and even introduce some playful elements.

The app features built-in alerts for standard phenomena such as rain, nearby lightning, community reports, government-issued extreme weather warnings, and more. It will also test alerts for forecasts like rainbow predictions or indications of a splendid sunset.

These features will live in a dedicated “Acme Labs” section of the app, and Grossman said the team would be conservative with these predictions, given how difficult they are to get right.

Users will also have the option to tailor their notifications to concentrate on weather occurrences that interest them, such as wind conditions or UV index, or the likelihood of rain within the next day.
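
One way to picture this kind of customization is as a set of user-defined threshold rules evaluated against each forecast. A minimal sketch, with entirely hypothetical metric names and thresholds rather than Acme Weather's real schema:

```python
def matching_alerts(forecast, rules):
    """Return the names of user-configured rules the forecast triggers.

    `forecast` maps a metric to its predicted value; each rule names a
    metric and the threshold it must meet or exceed. All names here are
    illustrative, not any app's actual notification schema.
    """
    return [name for name, (metric, threshold) in rules.items()
            if forecast.get(metric, 0) >= threshold]

rules = {
    "High wind": ("wind_mph", 25),
    "Strong UV": ("uv_index", 8),
    "Likely rain": ("rain_probability", 0.6),
}
forecast = {"wind_mph": 31, "uv_index": 3, "rain_probability": 0.7}
print(matching_alerts(forecast, rules))  # ['High wind', 'Likely rain']
```

Each user's rule set could then be checked against the latest forecast whenever it updates, firing only the notifications that person asked for.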

The chance to experiment with new concepts is part of what motivated the team to return to indie app development, Grossman stated.

“I genuinely appreciate Apple … but as a large organization, it can be challenging to attempt unconventional, innovative ideas. When serving a billion users, mistakes carry a hefty price,” he conveyed to TechCrunch. “Long software development timelines exist, with numerous stakeholders involved; the prospect of testing various concepts is, I think, fascinating.”

Acme Weather is presently available on iOS. An Android version is in the works.

The bootstrapped team includes co-founders Josh Reyes and Dan Abrutyn, both formerly of Dark Sky, along with other former Dark Sky employees and recent hires.

Anthropic claims Chinese AI laboratories are extracting information from Claude as the US discusses AI chip shipments.

Anthropic claims that three Chinese AI firms set up more than 24,000 fraudulent accounts to use its Claude AI models to improve their own systems.

The companies — DeepSeek, Moonshot AI, and MiniMax — reportedly created over 16 million interactions with Claude via these accounts utilizing a method known as “distillation.” Anthropic accuses the labs of focusing on Claude’s most unique features: agentic reasoning, tool application, and coding.

These allegations arise during discussions on the enforcement of export regulations regarding advanced AI chips, a strategy intended to restrict China’s AI advancement. 

Distillation is a standard training technique employed by AI laboratories on their models to produce smaller, cost-effective iterations, though rivals can exploit it to essentially replicate the efforts of other labs. Earlier this month, OpenAI sent a notice to House members alleging that DeepSeek used distillation to imitate its offerings. 
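
At its core, distillation trains a small "student" model to match a larger "teacher" model's output distribution rather than hard labels. A toy illustration of the standard loss involved (not any particular lab's pipeline), where a temperature softens both distributions:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student output distributions.

    A higher temperature softens both distributions, so the student also
    learns the teacher's relative preferences among the wrong answers,
    not just its top pick.
    """
    p = softmax(teacher_logits, temperature)  # teacher (target)
    q = softmax(student_logits, temperature)  # student (prediction)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
aligned = [3.8, 1.1, 0.4]   # student closely matches the teacher
diverged = [0.5, 4.0, 1.0]  # student prefers a different answer
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, diverged))  # True
```

Training minimizes this loss over many prompts, which is why pulling millions of outputs from a rival's model can substitute for expensive original training.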

DeepSeek gained attention last year when it introduced its open source R1 reasoning model, which nearly equaled the performance of leading American lab models at a significantly lower cost. The company is anticipated to soon unveil DeepSeek V4, its newest model, which reportedly has the capability to outperform both Anthropic’s Claude and OpenAI’s ChatGPT in coding.

The extent of each infringement varied. Anthropic identified over 150,000 interactions from DeepSeek aimed at enhancing fundamental logic and alignment, particularly concerning censorship-resistant alternatives for sensitive policy queries. 

Moonshot AI had over 3.4 million interactions directed at agentic reasoning and tool usage, coding and data analysis, the development of computer-use agents, and computer vision. Last month, the company launched a new open source model, Kimi K2.5, along with a coding agent.

MiniMax’s 13 million interactions focused on agentic coding and tool application and orchestration. Anthropic noted it could observe MiniMax as it redirected nearly half its traffic to extract features from the newly launched Claude model. 

Anthropic says it will continue building defenses that make distillation attacks harder to carry out and to conceal, while calling for “a collaborative approach across the AI industry, cloud service providers, and legislators.”

The distillation attacks occur amidst ongoing debates regarding American chip exports to China. Last month, the Trump administration officially permitted U.S. firms like Nvidia to export advanced AI chips (such as the H200) to China. Critics contend that this relaxation of export restrictions enhances China’s AI computing power at a crucial juncture in the global competition for AI supremacy.

Anthropic asserts that the level of extraction undertaken by DeepSeek, MiniMax, and Moonshot “necessitates access to advanced chips.”

“Distillation attacks thereby strengthen the justification for export controls: restricted chip access limits both direct model training and the scale of unlawful distillation,” according to Anthropic’s blog. 

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, informed TechCrunch he was not surprised by these attacks.

“It has been evident for a while that part of the reason for the swift advancement of Chinese AI models has been the pilfering via distillation of U.S. leading models. Now we have confirmation,” Alperovitch stated. “This should provide even stronger justifications for refusing to sell any AI chips to any of these [companies], which would only give them further advantages.”

Anthropic further mentioned that distillation not only poses a threat to undermine American AI leadership, but also could introduce national security hazards.

“Anthropic and other U.S. companies create systems that deter state and non-state actors from leveraging AI to, for instance, produce bioweapons or engage in harmful cyber activities,” states Anthropic’s blog entry. “Models developed through illicit distillation are unlikely to maintain those safeguards, meaning perilous capabilities could spread with many protections completely removed.”

Anthropic highlighted authoritarian regimes utilizing advanced AI for purposes such as “offensive cyber operations, disinformation efforts, and mass surveillance,” a danger that escalates if those models are open sourced.

TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for their responses.

Uber aims to be a one-stop shop for robotaxis.

Uber presents a proposal to makers of self-driving vehicles: we’ve got it covered.

The company specializing in ride-hailing and food delivery has introduced a new division named Uber Autonomous Solutions, aimed at managing all aspects of running a robotaxi, self-driving truck, or sidewalk delivery robot venture, encompassing software and support services.

This initiative, revealed on Monday, formally establishes what Uber has been discreetly developing over recent years.

Uber has formed collaborations with almost twenty autonomous vehicle technology companies across the board, from robotaxis and trucking to sidewalk delivery robots and drones. The company has financially supported many of these businesses — such as Lucid, Nuro, Waabi, and China’s WeRide — invested $100 million to develop fast-charging stations for autonomous vehicles, and even created Uber AV Labs, a dedicated engineering team focused on collecting data for robotaxi collaborators.

Uber has established these partnerships and investments; now it aims to position itself as essential.

“AV tech teams should concentrate on their core strengths: developing software that can safely operate in an autonomous landscape,” stated Sarfraz Maredia, Uber’s global head of autonomous mobility and delivery, who will oversee the initiative. The goal, he explained, is to provide “operational support wherever needed,” including generating demand, enhancing rider experience, customer assistance, or overseeing daily fleet management.

The ultimate aim is to aid these companies in lowering their per-mile expenses and expediting their market introduction. Uber announced plans to assist its partners in scaling robotaxi operations to over 15 cities by the end of this year.

“The key to autonomous success or failure in the world lies in its commercialization, and Uber will be pivotal in making autonomy commercially viable,” stated Uber President and COO Andrew MacDonald.

For Uber, this entails managing infrastructure aspects such as training data, mapping, fleet financing, regulatory compliance, and overseeing how robotaxis and other AVs navigate intricate situations and locations. The firm mentioned it uses a fleet of specially outfitted Lucid vehicles to gather data that can be shared with partners to train their AI systems.

The new division also aims to enhance user experience, including customer service. Importantly, Uber intends to assume control over fleet management, which would involve remote assistance — an area that has drawn scrutiny from federal legislators due to concerns over Waymo’s use of overseas workers. Fleet management would also encompass offering insurance and hiring personnel who may be required to support these AVs in real-world scenarios.

For Uber, the move is about both survival and opportunity. The company sold off its internal AV development unit, known as Uber ATG, in 2020, after two years of internal turmoil and the fallout from one of its test vehicles striking and killing a pedestrian. (Uber sold the division in a complex deal with Aurora.)

It has worked to bolster its standing through partnerships and investments, and there have been numerous. Uber and Waymo operate a joint robotaxi service in Atlanta and Austin. The company has also secured collaborations with Chinese companies such as Baidu, Momenta, and Pony.ai, sidewalk delivery bot firms like Cartken, Starship, and Serve, and the UK-based automated driving tech startup Wayve, alongside robotaxi developers AVride and Motional, among others. Additionally, it plans to initiate a robotaxi service with Volkswagen in Los Angeles by late 2026 — although it won’t be fully driverless until 2027.

These partnerships do offer Uber some level of protection, but they do not offset any revenue lost should these companies undermine its current ride-hailing and food delivery business reliant on human drivers. Uber hopes this new division will fill that gap.

Google Cloud's AI product lead on the three frontiers of model capability

As a VP of product at Google Cloud, Michael Gerstenhaber primarily focuses on Vertex AI, the company’s integrated platform for implementing enterprise AI. This role provides him with an overarching perspective on how businesses are truly utilizing AI models and what remains to be accomplished to unlock the potential of agentic AI.

During my conversation with Gerstenhaber, one concept caught my attention that I had not encountered before. He explained that AI models are pushing on three frontiers simultaneously: raw intelligence, latency, and a third that is less about pure capability than about cost, namely whether a model can be deployed affordably enough to operate at large, unpredictable scale. It's a useful lens on model capabilities, particularly for anyone trying to steer frontier models in a new direction.

This interview has been modified for brevity and clarity.

Could you begin by sharing your journey in AI thus far, and your role at Google?

I have been working in AI for around two years. I spent a year and a half at Anthropic and have been with Google for nearly six months. I lead Vertex AI, which is Google’s developer platform. Most of our customers are developers building their own applications. They want access to agentic patterns and platforms, as well as inference from the best models in the world. I provide that, but the applications are built by Shopify, Thomson Reuters, and our other customers in their respective fields.

What attracted you to Google?

I believe Google is exceptional in that it offers everything from the user interface to the infrastructure base. We can construct data centers, procure electricity, and create power plants. We possess our own chips and models, and we manage the inference layer. We oversee the agentic layer as well. We have APIs for memory and for intertwined code writing, along with an agent engine that ensures compliance and governance. Furthermore, we even have the chat interface with Gemini enterprise and Gemini chat for consumers. Therefore, one of the reasons I joined here is that I perceive Google as uniquely vertically integrated, which serves as an advantage for us.

It’s interesting because, despite the variations between companies, it appears that all three of the major labs are quite similar in capabilities. Is it merely a competition for greater intelligence, or is it more nuanced?

I identify three barriers. Models like Gemini Pro are optimized for raw intelligence. Consider the task of writing code. All you desire is the finest code achievable, regardless of whether it takes 45 minutes, as it must be maintained and deployed. The goal is simply to obtain the best.

Then there’s another barrier relating to latency. If I am providing customer support and require guidance on applying a policy, intelligence is necessary to enact that policy. Are you authorized to process a return? Can I change my seat on an airplane? However, it is irrelevant how correct your information is if it takes 45 minutes to receive an answer. Thus, for such scenarios, you need the most intelligent product within that latency threshold because excess intelligence becomes insignificant once the individual becomes frustrated and hangs up the call. 

Finally, there’s a third category, where companies like Reddit or Meta are trying to moderate the entire internet. They have substantial budgets, but they cannot bet the enterprise on something whose scale is unpredictable. They cannot know how many harmful posts will emerge today or tomorrow. So they must cap their budget at the most intelligent model they can sustain while ensuring it scales across an effectively unlimited range of subjects. Cost becomes exceedingly critical.
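
The three constraints described above (raw intelligence, latency, and cost) amount to a constrained choice when selecting a model for a workload: take the most capable option that fits the latency and cost budgets. A hypothetical sketch, where the model catalog and all numbers are invented for illustration:

```python
def pick_model(models, max_latency_s, max_cost_per_1k):
    """Pick the most capable model that fits latency and cost budgets."""
    eligible = [m for m in models
                if m["latency_s"] <= max_latency_s
                and m["cost_per_1k"] <= max_cost_per_1k]
    return max(eligible, key=lambda m: m["intelligence"], default=None)

catalog = [
    {"name": "frontier", "intelligence": 95, "latency_s": 20.0, "cost_per_1k": 15.0},
    {"name": "fast",     "intelligence": 80, "latency_s": 1.5,  "cost_per_1k": 3.0},
    {"name": "cheap",    "intelligence": 65, "latency_s": 0.8,  "cost_per_1k": 0.2},
]

# A coding agent tolerates long waits: take the smartest model.
print(pick_model(catalog, max_latency_s=60, max_cost_per_1k=20)["name"])   # frontier
# Live customer support caps latency; internet-scale moderation caps cost.
print(pick_model(catalog, max_latency_s=2, max_cost_per_1k=20)["name"])    # fast
print(pick_model(catalog, max_latency_s=2, max_cost_per_1k=0.5)["name"])   # cheap
```

Each of the three use cases in the interview corresponds to loosening or tightening one of these budgets.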

I’ve been contemplating why agentic systems are slow to gain acceptance. It seems that the models are available, and I’ve witnessed astonishing demonstrations, yet the significant changes I anticipated a year ago have not materialized. What do you believe is hindering progress?

This technology is fundamentally around two years old, and there is still a considerable lack of infrastructure. We lack methodologies for auditing agent actions. We don’t have frameworks for authorizing data to an agent. These methodologies will necessitate development to be implemented in production. And production often lags behind what technology can achieve. So, two years is insufficient to observe what the intelligence can support in a production environment, and that’s where the challenges lie.
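
One concrete shape the missing infrastructure could take is a wrapper that authorizes and audits every tool call an agent attempts. A minimal sketch under that assumption, with invented tool names:

```python
import datetime

class AuditedToolRunner:
    """Run agent tool calls only if allowlisted, recording every attempt."""

    def __init__(self, allowed_tools):
        self.allowed = set(allowed_tools)
        self.audit_log = []

    def call(self, tool_name, fn, *args):
        permitted = tool_name in self.allowed
        # Log the attempt whether or not it is permitted, so auditors
        # can review everything the agent tried to do.
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "tool": tool_name,
            "args": args,
            "permitted": permitted,
        })
        if not permitted:
            raise PermissionError(f"agent may not call {tool_name!r}")
        return fn(*args)

runner = AuditedToolRunner(allowed_tools=["search_docs"])
print(runner.call("search_docs", lambda q: f"results for {q}", "refund policy"))
try:
    runner.call("delete_record", lambda rid: None, "cust-42")
except PermissionError as e:
    print(e)
print(len(runner.audit_log))  # 2
```

Production versions would need far more (scoped data grants, tamper-evident logs, human review queues), which is the gap Gerstenhaber is pointing at.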

The advancement has been remarkably swift in software engineering since it integrates smoothly with the software development lifecycle. We have a development environment in which it’s acceptable to experiment, followed by a transition from a dev environment to a testing environment. At Google, the code writing process requires two individuals to review the code and both to confirm that it meets the standards to be associated with Google’s name and presented to our clients. Therefore, we have many of those human-in-the-loop procedures that significantly reduce the implementation risk. However, we need to create those methodologies in other areas and for various professions.

Americans are vandalizing Flock surveillance cameras

Brian Merchant, reporting for Blood in the Machine, notes that individuals throughout the United States are tearing down and damaging Flock surveillance cameras, fueled by escalating public outrage that the license plate readers assist U.S. immigration officials and deportations.

Flock is a surveillance startup based in Atlanta that was valued at $7.5 billion last year and produces license plate readers. It has been criticized for granting federal authorities access to its extensive network of nationwide license plate readers and databases at a time when U.S. Immigration and Customs Enforcement increasingly depends on data for community raids as part of the Trump administration’s immigration enforcement efforts.

Flock cameras enable authorities to monitor individuals’ movements by capturing images of their license plates from a multitude of cameras spread across the United States. Flock asserts that it does not share data directly with ICE, but reports indicate that local police have provided federal authorities with their own access to Flock’s cameras and databases.

While various communities are urging their municipalities to terminate contracts with Flock, others are taking action independently.

Merchant highlights occurrences of damage to Flock cameras in La Mesa, California, mere weeks after the city council agreed to extend the deployment of Flock cameras in the city, notwithstanding a decisive majority of participants advocating for their removal. A local report mentioned significant opposition to the surveillance technology, with residents expressing privacy concerns.

Instances of vandalism have been reported from California and Connecticut to Illinois and Virginia. In Oregon, six license plate-reading cameras mounted on poles were severed, and at least one was spray-painted. A message left at the base of the cut poles read, “Hahaha get wrecked ya surveilling fucks,” according to Merchant.

As per DeFlock, a project focused on mapping license plate readers, nearly 80,000 cameras exist across the United States. Numerous cities have thus far rejected Flock’s cameras, and some police departments have blocked federal authorities from utilizing their resources.

When contacted by TechCrunch, a Flock spokesperson would not say whether the company tracks how many of its cameras have been damaged.

OpenAI brings in the consultants for its business initiative.

OpenAI is enhancing collaborations with four key consulting firms as the AI organization aims to expand its enterprise sector in 2026.

On Monday, OpenAI revealed the “Frontier Alliances,” indicating that the AI laboratory is open to exploring various strategies to encourage enterprises to significantly adopt its technology. This alliance features long-term collaborations between OpenAI and four leading consulting companies: Boston Consulting Group (BCG), McKinsey, Accenture, and Capgemini, aimed at promoting its enterprise offerings.

The Forward Deployed Engineering team at OpenAI will partner with these consulting leaders to assist them in integrating OpenAI’s enterprise-centric technologies, such as OpenAI Frontier, into their clients’ technology infrastructures.

OpenAI introduced OpenAI Frontier in early February. The no-code platform lets users create, deploy, and manage AI agents built on OpenAI’s models as well as other frameworks.

In its most recent announcement, OpenAI contends that consulting firms are the ideal channels for onboarding enterprises.

“AI alone does not drive transformation. It must be linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives and culture to deliver sustained outcomes,” BCG CEO Christoph Schweizer stated in OpenAI’s blog. “Our extended partnership integrates OpenAI’s Frontier platform with BCG’s extensive industry, functional, and technological expertise along with BCG X’s capabilities for building and scaling to generate measurable impact with safeguards from the outset.”

So far, the pace of enterprise adoption of AI has been relatively sluggish as these organizations face challenges in attaining a meaningful return on investment from their AI initiatives.

OpenAI’s collaborative approach makes sense, and it goes beyond simply encouraging enterprises to bolt AI onto their current workflows. The initiative puts consultants at the front of the effort to persuade organizations to adjust their strategies and operations to integrate OpenAI’s tools where applicable.

It is noteworthy that OpenAI’s competitor Anthropic has also signed agreements with major consulting firms, including Deloitte and Accenture, in recent months.

Company CFO Sarah Friar expressed in a blog post in January that the enterprise segment is a significant focus for OpenAI in 2026. Furthermore, OpenAI has secured substantial enterprise AI agreements with Snowflake and ServiceNow so far this year, alongside appointing Barret Zoph to head the company’s enterprise sales division in January.

Guide Labs introduces a novel type of interpretable LLM

The difficulty in managing a deep learning model is often deciphering why it behaves the way it does: be it xAI’s continuous attempts to refine Grok’s peculiar politics, ChatGPT’s issues with flattery, or common hallucinations, navigating through a neural network with billions of parameters is challenging.

Guide Labs, a startup based in San Francisco and led by CEO Julius Adebayo and chief science officer Aya Abdelsalam Ismail, is presenting a solution to this dilemma today. On Monday, the firm made public an 8-billion-parameter LLM, Steerling-8B, trained using a fresh architecture aimed at making its actions straightforwardly interpretable: Each token generated by the model can be traced back to its roots within the training data of the LLM.

This can range from simply identifying the reference materials for facts referenced by the model, to more intricate tasks such as grasping the model’s concept of humor or gender.

“If I possess a trillion ways to encode gender, and I utilize 1 billion of those trillion options, it’s essential to ensure that you discover all those 1 billion elements I’ve encoded, and then you need to be capable of turning them on or off reliably,” Adebayo conveyed to TechCrunch. “Current models can accomplish this, but it’s highly unstable… It’s essentially one of the ultimate questions.”

Adebayo began this research during his PhD at MIT, co-authoring a widely cited 2018 paper showing that existing techniques for understanding deep learning models were unreliable. That work eventually led to a new way of building LLMs: developers embed a concept layer in the model that organizes data into traceable segments. This requires more upfront data labeling, but the team leaned on other AI models to help, allowing them to train this model as their largest proof of concept so far.
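
In general terms, this resembles a concept-bottleneck design: predictions must pass through named, human-readable concept activations that can be inspected or switched off. A toy numeric sketch of that general idea (the concepts and weights are invented, and this is not Guide Labs' actual architecture):

```python
def predict_with_concepts(features, concept_weights, output_weights, disabled=()):
    """Score an input through an explicit, named concept layer.

    Each concept activation is a visible intermediate value, so the final
    score can be attributed to named concepts, and any concept can be
    reliably zeroed out instead of hunted for inside opaque weights.
    """
    concepts = {
        name: 0.0 if name in disabled
        else sum(w * features[k] for k, w in weights.items())
        for name, weights in concept_weights.items()
    }
    score = sum(output_weights[name] * value for name, value in concepts.items())
    return score, concepts

# Invented loan-scoring example: the model must rely on these concepts only.
concept_weights = {
    "payment_history": {"on_time_ratio": 2.0},
    "income_stability": {"years_employed": 0.5},
}
output_weights = {"payment_history": 1.0, "income_stability": 1.0}
features = {"on_time_ratio": 0.9, "years_employed": 4}

score, concepts = predict_with_concepts(features, concept_weights, output_weights)
print(round(score, 2), concepts)

# Switch a concept off and the prediction provably ignores it.
score_off, _ = predict_with_concepts(features, concept_weights, output_weights,
                                     disabled=("income_stability",))
print(round(score_off, 2))
```

The toggling step is the point: turning a concept off is a guaranteed intervention, rather than the unstable after-the-fact probing Adebayo describes.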

“The type of interpretability that individuals generally perform is… neuroscience on a model, and we turn that around,” Adebayo stated. “What we do is actually construct the model from the foundations up so that you don’t require neuroscience.”

One concern regarding this methodology is that it may remove some emergent behaviors that render LLMs fascinating: Their capability to generalize in innovative ways about concepts not previously encountered. Adebayo asserts that this still occurs in his company’s model: His team monitors what they term “discovered concepts” that the model unearthed autonomously, such as quantum computing.

Adebayo contends that this interpretable architecture will be essential for everyone. For consumer-oriented LLMs, these strategies should empower model creators to undertake actions such as blocking copyrighted materials or improving output control concerning topics like violence or substance abuse. Regulated sectors will necessitate more manageable LLMs — for instance, in finance — where a model assessing loan candidates must focus on aspects like financial histories while disregarding race. Additionally, there is a demand for interpretability within scientific endeavors, another domain where Guide Labs has innovated technology. Protein folding has seen substantial success with deep learning models, yet researchers require deeper understanding regarding why their software identifies promising combinations.

“This model exemplifies that training interpretable models is no longer merely a scientific inquiry; it has now become an engineering challenge,” Adebayo remarked. “We have deciphered the science, and we can scale them. There’s no reason this kind of model shouldn’t achieve performance on par with frontier-level models,” which contain significantly more parameters.

Guide Labs says Steerling-8B can reach 90% of the capabilities of current models while using less training data, thanks to its architecture. The next step for the company, which came out of Y Combinator and raised $9 million in seed funding from Initialized Capital in November 2024, is to build a larger model and begin offering users API and agentic access.

“The current method of training models is quite primitive, thus making the democratization of inherent interpretability a long-term benefit for our role within humanity,” Adebayo conveyed to TechCrunch. “As we pursue models that will possess superintelligence, you don’t want an entity making choices on your behalf that remains somewhat enigmatic to you.”

Particle’s AI news application tunes into podcasts to find compelling snippets so you don’t need to.

An AI-driven news application named Particle, developed by ex-Twitter engineers, is now capable of keeping up with breaking news from both podcasts and online articles.

Just before its latest Android launch, Particle unveiled a functionality known as Podcast Clips, which finds the most captivating and pertinent moments from a variety of podcasts, incorporating those clips next to corresponding news articles in its feed.

This means instead of wading through an entire podcast to catch 45 seconds of engaging commentary, you can listen to the clip while perusing the news on Particle. Alternatively, you can opt to read the transcript of the clip, with the words highlighted in real-time as they are spoken.

“We basically capture that for every news item — if there’s a relevant podcast discussing it, we’ve gathered those segments,” stated Particle CEO Sara Beykpour, who was formerly Twitter’s Senior Director of Product Management, in an interview with TechCrunch. “It provides an engaging way to get insights on what people are discussing about a story while you’re reading it.”

This enhancement recognizes a transformation in the news landscape that has been developing over the years. More individuals are sourcing their news from podcasts, considering them as credible outlets, and the format is becoming a go-to for urgent news and significant statements from public figures.

Tech executives, in particular, are reaching out to accommodating podcast hosts to share their narratives instead of dealing with traditional media channels, as reported by Bloomberg in 2024.

This elevates the importance of podcasts if one aims to stay updated with the news.

Beykpour says Particle uses embedding models to determine when podcasts relate to specific news topics. These models come from the same companies that build LLMs, though they are not generative AI themselves, she clarifies.

“We utilize vector embeddings to ascertain that these segments of podcasts relate to varied stories,” notes Beykpour. “A single podcast may touch on 10 or 20 stories, so we leverage AI for understanding that. Additionally, we employ AI for some aspects of the clipping process, including determining when to commence and conclude a clip.”
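
The matching Beykpour describes can be sketched with cosine similarity between segment and story embeddings. In the toy example below the vectors are tiny and hand-picked; a real system would get them from a learned embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def best_story(segment_vec, story_vecs, threshold=0.8):
    """Attach a podcast segment to its most similar story, if close enough."""
    name, score = max(((n, cosine(segment_vec, v)) for n, v in story_vecs.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

# Hypothetical story embeddings (real ones would have hundreds of dimensions).
stories = {
    "chip-exports": [0.9, 0.1, 0.0],
    "robotaxis":    [0.0, 0.2, 0.95],
}
segment = [0.85, 0.15, 0.05]  # embedding of a clip about export controls
print(best_story(segment, stories))            # chip-exports
print(best_story([0.3, 0.3, 0.3], stories))    # None: no story is a confident match
```

Running every segment of a podcast through this kind of matching is how one episode can end up attached to 10 or 20 different stories.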

Image Credits:Particle

The organization utilizes technology from ElevenLabs for transcription. However, some of the technology that dictates where to clip the audio is integral to Particle’s proprietary methods.

The initiative to utilize podcasts for gaining insights into news-related commentary is also being examined closely by newsrooms today. As reported by Nieman Lab this month, The New York Times has been employing a tailored AI tool that uses LLMs to transcribe and summarize fresh episodes of various right-leaning and conservative podcasts to better grasp what influencers from that sphere are expressing about current events.

Particle’s Podcast Clips function is not exclusively linked to news articles. Since the app recognizes various entities — including individuals, locations, or objects — users can visit the page of notable persons, like OpenAI CEO Sam Altman, to review all his podcast appearances presented in a feed format.

Image Credits:Particle

Particle has also been working on additional features. The company has launched its first monetization effort with Particle+, an optional subscription priced at $2.99/month (or $29.99/year) that unlocks premium functionality. This includes the option to use natural language to request news summaries in a preferred style; a choice of voices for its personalized audio feed, “Listen to the News”; unlimited crossword puzzles; the ability to ask private questions of its AI chatbot; and more.

Image Credits:Particle

The Android version also introduces several other noteworthy changes. The browsing tab now features timely topics, such as the 2026 Winter Olympics, in addition to standard sections like politics, technology, or entertainment. Furthermore, when clicking on an entity, users will observe a fresh page presenting definitions, stories, articles, related entities, and associated topics.

Image Credits:Particle

Particle has not disclosed user engagement stats or conversion rates, but Beykpour did describe the app’s global audience prior to the Android launch. Each week, 55% of Particle’s users are located outside the U.S., with India (15%) its largest market after the U.S.

Spotify introduces AI-enhanced Prompted Playlists in the UK and additional markets

Following a successful test of its AI-driven “Prompted Playlist” feature in New Zealand and a recent launch in the U.S. and Canada, Spotify revealed on Monday that it will be introducing this tool to Premium users in the U.K., Ireland, Australia, and Sweden.

The Prompted Playlist feature enables users to generate personalized playlists by simply articulating their musical preferences in their own words. Rather than searching for specific songs or artists, users can outline the mood, circumstance, or inspiration they seek, while Spotify handles the rest.

To use the feature, users tap “Create,” choose “Prompted Playlist,” and then enter any prompt in English. The feature is designed to understand themes such as moods, aesthetics, and even experiences. Prompts can range from very general to highly specific, mentioning musical eras, genres, activities, lyrics, or instruments, or even requesting a playlist themed around a TV series, film, or personal milestone. In the prompt, users can also indicate whether they want the playlist to contain mostly new tracks or only selections from their library.

After a prompt is submitted, Spotify’s AI curates a personalized playlist that fits the request. The system draws on the user’s listening history and blends it with contemporary music and cultural trends. In addition, each track includes a brief description explaining why it was chosen for that specific playlist.

Users can enhance their playlists by modifying their prompts or starting anew. For individuals whose musical preferences are ever-changing, playlists can be set to refresh automatically on a daily or weekly schedule.

Image Credits:Spotify

As this feature is still in beta, Spotify indicated that adjustments may occur as the company processes feedback, and that currently there are usage restrictions. Some users have mentioned reaching limits after approximately 20 or 30 prompts.

Spotify has recently broadened its AI functionalities across its platform, including Page Match, which enables users to scan a physical book page to access the respective section in the audiobook, and About the Song. The platform also revised its song lyrics feature to include worldwide translations and offline availability. Last week, SeatGeek collaborated with Spotify to facilitate listeners in finding ticket links for concerts on an artist’s page or upcoming tour dates within the app. 

Internally, the company has integrated AI into its operations, with co-CEO Gustav Söderström stating earlier this month that Spotify’s top developers haven’t needed to write any code since December, thanks to AI.

Spotify is also enhancing its audiobook segment by moving into sales of physical books. Soon, users in the U.S. and U.K. will have the option to purchase physical editions directly through the app.

The US Saw a Notable Increase in Battery Demand Last Year

In 2025, the United States saw a historic rise in energy storage, as detailed in a freshly released solar industry report on Monday. This boom in battery storage signifies a considerable milestone for clean energy amid the renewable-unfriendly second term of the Trump administration and suggests that utilities may be modifying electric grids to meet increasing demand across the country.

The report, issued by the Solar Energy Industries Association (SEIA), corresponds with recent findings from Bloomberg New Energy Finance, showcasing a comparable surge in battery development. As per SEIA, the United States added 57 gigawatt hours of new energy storage in 2025, representing nearly a 30 percent rise from the prior year. This capacity is enough to supply power to over five million homes each year.

The report predicts 21 percent market growth by the end of this year, with an additional 70 gigawatt hours anticipated in 2026. Those figures stand in sharp contrast to less than a decade ago, when storage on the grid totaled only about half a gigawatt.
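As a quick sanity check, the report's figures hang together arithmetically (this is an illustration of the stated percentages, not data from the report itself):

```python
added_2025 = 57.0    # GWh of new storage added in 2025, per SEIA
growth_2025 = 0.30   # "nearly a 30 percent rise from the prior year"

# Back out the implied 2024 figure from the 30% growth rate.
implied_2024 = added_2025 / (1 + growth_2025)
print(round(implied_2024, 1))  # ~43.8 GWh added in 2024

# Project 2026 from the forecast 21% market growth.
projected_growth = 0.21
implied_2026 = added_2025 * (1 + projected_growth)
print(round(implied_2026, 1))  # ~69.0 GWh, consistent with the ~70 GWh forecast
```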

Batteries have demonstrated considerable political staying power. Tax incentives for wind and solar were cut last summer amid legislative attacks on renewables, though the cuts drew pushback from Republican lawmakers in regions with clean energy projects. Battery tax credits, however, emerged largely unscathed.

In spite of the federal administration’s position on renewables, batteries and solar experienced considerable advancement in certain conservative states last year. Texas stands out, where solar energy constituted over 15 percent of summer demand, exceeding coal for the first time. SEIA projects that Texas will outpace California this year in terms of gigawatt hours of storage deployed.

Jigar Shah, from the advisory firm Multiplier and a former director of the Department of Energy’s Loan Programs Office, points out that Texas’s independent and deregulated power grid favors solar and batteries over alternative solutions, notwithstanding White House opposition. Recent surveys reveal that MAGA voters back solar power, and prominent individuals like Katie Miller have voiced support for solar energy.

“Texas fundamentally ignores cultural prejudices,” Shah states. “‘Heed the market signals. Construct whatever you wish, whether it’s coal plants or batteries.’ Batteries received the most financial backing.”

While solar and batteries are thriving in Texas, most battery installations last year were standalone projects not tied to particular solar farms, a positive trend for grids coping with rising demand.

Generally, US energy grids use only about 50 percent of their available capacity on a given day. This intentional underutilization preserves headroom for peak-demand days. Deploying batteries at every level of the grid helps capture surplus energy during off-peak hours, reducing waste.