OpenAI’s Sora was the most eerie application on your device — now it’s closing down

On Tuesday, OpenAI announced that it is shutting down Sora, the TikTok-like social app it launched six months ago. The company gave no reason for the closure and did not say when the app will formally go offline.

When Sora launched as an invite-only social platform, demand for invites seemed high. But, like Meta’s Horizon Worlds (the troubled virtual reality social platform once central to the company’s ill-fated metaverse push), Sora couldn’t sustain its appeal. The Sora 2 video and audio generation model is impressively capable, yet interest in an all-AI social feed proved fleeting.

Sora was designed to operate as an AI-centric version of TikTok, replicating the familiar vertical video interface. Its key feature, “cameos,” enabled users to scan their faces and produce highly realistic deepfakes of themselves. These “cameos” could be public, allowing anyone to craft videos featuring them. (Cameo took legal action against OpenAI over the feature’s name and won, compelling the company to rename it to “characters.”)

In an outcome that surprised absolutely no one, this overhyped deepfake app turned out to be quite bizarre.

At launch, Sora felt like an under-moderated maze of unsettling Sam Altman videos. I have not been the same since watching a lifelike clone of the OpenAI CEO wander through a slaughterhouse of fattened pigs, asking, “Are my piggies enjoying their slop?”

Sora wasn’t supposed to let users make videos of public figures who hadn’t explicitly opted in, but OpenAI’s guardrails proved remarkably easy to bypass. Soon, deepfakes of real people like civil rights leader Martin Luther King Jr. and actor Robin Williams were circulating, prompting both men’s daughters to take to Instagram and urge users to stop making videos of their late fathers.

After producing countless videos of Sam Altman stealing Nvidia chips from a Target, users switched tactics, deliberately generating videos of copyrighted characters to court legal trouble for the man they loved to deepfake. We saw Mario smoking weed, Naruto buying Krabby Patties, and Pikachu doing ASMR.

That plan didn’t pan out. Instead of suing, the famously litigious Disney invested $1 billion in OpenAI and signed a licensing agreement that would have let Sora generate videos featuring characters from Disney, Marvel, Pixar, and Star Wars.

It looked like a watershed moment for the AI industry. But with Sora shutting down, the arrangement is dead, and notably, it appears no money actually changed hands before it dissolved. (Disney offered some polite words on Tuesday, telling the Hollywood Reporter that it will “continue to engage with AI platforms” going forward.)

The initial excitement around Sora was palpable. According to the mobile analytics firm Appfigures, the app hit approximately 3,332,200 downloads in November across the iOS App Store and Google Play. Had it kept growing, OpenAI might have kept it running, but it didn’t. By February, downloads had fallen to 1,128,700. That may sound substantial, but it pales next to ChatGPT’s 900 million weekly active users.

Appfigures estimates that, over its lifetime, Sora generated roughly $2.1 million from in-app purchases, which let users buy additional video generation credits. It’s hard to believe the app’s compute demands made a meaningful dent at a company already running major losses, but the app may have been too much of a liability to keep around if it wasn’t growing.

When OpenAI launched the Sora app, I braced myself for a reality where we could easily create deepfakes of one another. I rarely make TikToks, but I felt compelled to post a public service announcement that this alarming technology was fast approaching. It ended up amassing over 300,000 views, unusual for my mostly dormant TikTok account, because the warning clearly struck a nerve. I never expected the app itself to last only six months.

But Sora’s disappearance doesn’t mean the threat is gone. The Sora 2 model is still accessible, just locked behind the ChatGPT paywall, and OpenAI is far from the only company making this technology widely available. It’s only a matter of time before another social AI video app hits the market, flooding us with a fresh wave of clips of Snow White storming the Capitol.

Accel and Prosus select six ‘off-the-radar’ startups for their first India cohort

Accel and Prosus have chosen six startups for their inaugural joint cohort in India, backing what they call “off-the-radar” ideas: companies tackling problems where markets are still undefined and progress is hard to measure.

The first cohort spans healthcare, climate, space, and longevity, reflecting a focus on science-driven themes with long development timelines and uncertain commercial paths. The six startups were chosen from over 2,000 applications.

Here are the selected startups:

  • Praan is working on air infrastructure systems to enhance indoor air quality through purification, sensing, and automated controls. The Mumbai-based startup has previously secured investments from backers including Social Impact Capital, Aera VC, and Avaana Capital, along with strategic investors and family offices.
  • QOSMIC is creating optical communication systems for data transmission between satellites and Earth. The Bengaluru-based startup focuses on boosting bandwidth and minimizing latency in space-focused networks.
  • Ethereal Exploration Guild, also referred to as EtherealX, is designing reusable orbital launch vehicles to decrease the cost of space access. The Bengaluru-based startup recently raised a $20.5 million Series A round led by TDK Ventures and BIG Capital at an $80.5 million valuation.
  • Dognosis is developing a method to detect various cancers through breath analysis, utilizing dogs’ olfactory abilities alongside robotics and AI. Its product, BreatheEasy, entails patients exhaling into a mask, with the sample subsequently examined in a lab to identify cancer-related markers.
  • Ferra is developing a home-based strength-training system aimed at assisting individuals in maintaining mobility as they age. The system automatically adjusts resistance to correspond with the user’s performance.
  • A sixth startup, currently in stealth mode, is focused on creating brain-computer interfaces that facilitate direct communication between the human brain and external systems.

Launched in October, the initiative aims to support startups that venture beyond the conventional playbook of the industry, rather than those that are easiest to fund, according to the firms.

Under the program, Accel and Prosus are co-investing in each startup, with Prosus matching Accel’s investment and funding amounts ranging from $500,000 to $2 million. The firms are using a structure that minimizes early dilution for founders: a portion of the capital is deferred, with the corresponding equity given up at a later stage.

The firms say the model is tailored for startups with long development cycles. “More than funding, they require time to achieve those breakthroughs,” said Pratik Agarwal, a partner at Accel.

Such startups often follow a non-linear trajectory, according to Ashutosh Sharma, head of India ecosystem at Prosus, who noted that progress hinges on hitting key technical milestones rather than on steady growth.

Kentucky woman declines $26M proposal to convert her farm into a data center

For many years, Ida Huddleston and her family have run a farm in northern Kentucky, and they have turned down at least one multimillion-dollar offer in order to keep it intact.

According to a recent WKRC report, a “significant artificial intelligence corporation” offered $26 million for a portion of their farm to build a planned data center. Huddleston and her family rejected the offer, saying they have no interest in a data center being built nearby or on any of their 1,200 acres of farmland outside Maysville, Kentucky.

“They label us as foolish old farmers, but that’s not who we are,” Huddleston, 82, told Local 12 WKRC. “We recognize when our food sources are vanishing, our lands are vanishing, and we lack sufficient water — and that poison. Well, we’re aware of what we’ve been encountering,” she said, seemingly referencing recent reports of water shortages and soil contamination near data center sites.

In an interview with the outlet, Huddleston said she was skeptical the data center would bring jobs or economic development to Mason County. “It’s a fraud,” she said.

According to WKRC, the unnamed company has since revised its plans and filed an application to rezone over 2,000 acres in northern Kentucky, suggesting it may still build its data center next to Huddleston’s property.

Anthropic gives Claude Code more autonomy, but keeps it on a tight leash

For developers using AI, “vibe coding” today means either watching every action closely or risking the model running unchecked. Anthropic says its latest addition to Claude aims to eliminate that trade-off by letting the AI decide which actions are safe to take on its own, within certain limits.

The move reflects a broader shift across the industry, as AI tools are increasingly designed to act without waiting for human sign-off. The challenge is balancing speed against oversight: too many restrictions slow work down, while too few can make systems dangerous and unpredictable. Anthropic’s new “auto mode,” currently in research preview (meaning it’s available to try but not yet a finished product), is the company’s latest attempt to strike that balance.

Auto mode uses AI-driven safeguards to evaluate each action before it executes, checking both for risky behavior the user didn’t authorize and for signs of prompt injection, an attack in which malicious instructions are hidden inside the content an AI is processing, tricking it into unintended actions. Actions deemed safe proceed automatically; risky ones are blocked.

It essentially builds on Claude Code’s existing “dangerously-skip-permissions” option, which hands all decision-making to the AI, but adds an extra safety layer.
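To make that flow concrete, here is a minimal, hypothetical sketch of a pre-execution gate in Python. Anthropic has not published the criteria its safety layer actually uses, so the tool names, deny patterns, and rules below are illustrative stand-ins, not the real system.

```python
# Hypothetical sketch of an auto-mode-style gate that screens each
# proposed action before it runs. The real feature uses AI-driven
# classifiers (including prompt-injection detection) whose criteria
# Anthropic has not disclosed; these rules are toy stand-ins.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str      # e.g. "read_file", "bash" (illustrative tool names)
    argument: str  # the command or payload the agent wants to run

# Read-only tools auto-run; obviously destructive shell patterns are
# blocked; everything else falls back to asking the user.
READ_ONLY_TOOLS = {"read_file", "list_dir", "grep"}
DENY_PATTERNS = ("rm -rf", "sudo", "curl | sh", "DROP TABLE")

def gate(action: ProposedAction) -> str:
    """Return 'auto-run', 'block', or 'ask-user' for one action."""
    if action.tool in READ_ONLY_TOOLS:
        return "auto-run"
    if any(pattern in action.argument for pattern in DENY_PATTERNS):
        return "block"
    return "ask-user"

if __name__ == "__main__":
    for action in (ProposedAction("read_file", "src/main.py"),
                   ProposedAction("bash", "rm -rf /"),
                   ProposedAction("bash", "pytest -q")):
        print(f"{action.tool} {action.argument!r} -> {gate(action)}")
```

The key design point the sketch illustrates is the three-way outcome: unlike an all-or-nothing skip-permissions switch, a gate like this can run the clearly safe, block the clearly dangerous, and reserve the ambiguous middle for a human.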

The feature follows a wave of autonomous coding tools from companies like GitHub and OpenAI that can carry out tasks on a developer’s behalf. It goes a step further, though, by shifting the decision of when to ask the user for permission onto the AI itself.

Anthropic has yet to disclose the precise criteria its safety layer uses to separate safe actions from risky ones, something developers will likely want to understand better before adopting the feature broadly. (TechCrunch has reached out to the company for more details.)

Auto mode follows Anthropic’s introduction of Claude Code Review, its automatic code reviewer designed to catch bugs before they land in the codebase, and Dispatch for Cowork, which lets users hand off tasks to AI agents working on their behalf.

Auto mode will be available to Enterprise and API users shortly. The company says it currently works only with Claude Sonnet 4.6 and Opus 4.6, and it advises using the feature in “isolated environments”: sandboxed setups kept separate from production systems, so that any misstep does limited damage.

Spotify trials a new feature to prevent AI-generated content from being credited to actual artists

As AI-generated content floods music streaming services, Spotify is testing a new “Artist Profile Protection” feature in beta that lets artists vet releases before they’re published to their profiles. The tool is meant to give artists more control over which tracks get linked to their names on the platform.

“Tracks have been appearing on incorrect artist pages across streaming platforms, and the surge of easily produced AI tracks has exacerbated the issue,” Spotify mentioned in a blog entry. “That’s not the kind of experience we aim to provide artists on Spotify, which is why we’ve prioritized safeguarding artist identity for 2026. Today, we are unveiling a pioneering solution to a longstanding issue in streaming.”

Artists participating in the beta can review and either approve or reject releases submitted to Spotify. Only releases they approve will appear on their artist profile, count toward their statistics, and surface in user recommendations.

Spotify’s disclosure comes just one week after Sony Music announced it has demanded the removal of over 135,000 AI-generated songs falsely impersonating its artists on streaming platforms.

Spotify says that while open distribution has made it easier for independent artists to release music, it also leaves room for mistakes and impersonation. Tracks can land on the wrong artist’s profile because of metadata errors, confusion between similarly named artists, or deliberate attempts to misattribute music to a specific artist.

“When this occurs, it can affect your catalog, your statistics, your Release Radar, and how audiences discover your music,” Spotify clarifies. “We understand how annoying this situation can be for both artists and their fans, and one of the primary requests we’ve received from artists over the past year is for enhanced visibility before music is attached to their name.”

Spotify stresses that while the new capability may not be necessary for every artist, it’s designed for those who have dealt with repeated incorrect releases, share a common artist name, or simply want more say over what appears on their profile.

Artists included in the beta will find the feature in their “Spotify for Artists” settings on both desktop and mobile web. Once they activate Artist Profile Protection, they’ll receive email notifications when music associated with their name is submitted to Spotify, and they can then approve or reject the request, as sketched below.
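For a sense of the gating logic this describes, here is a minimal Python sketch of the approval workflow. Spotify has not published an API for Artist Profile Protection, so every name and state below is hypothetical.

```python
# Hypothetical sketch of the release-approval gate described above.
# Spotify has not published an Artist Profile Protection API; the
# states and function names here are illustrative only.
from enum import Enum

class ReleaseStatus(Enum):
    PENDING = "pending"    # submitted; artist notified by email
    APPROVED = "approved"  # shows on profile, counts toward stats
    REJECTED = "rejected"  # never attached to the artist's page

def review_release(status: ReleaseStatus, artist_approves: bool) -> ReleaseStatus:
    """Apply the artist's decision to a pending release."""
    if status is not ReleaseStatus.PENDING:
        return status  # in this toy model, decisions are final
    return ReleaseStatus.APPROVED if artist_approves else ReleaseStatus.REJECTED

if __name__ == "__main__":
    # An unauthorized upload stays off the profile once rejected.
    print(review_release(ReleaseStatus.PENDING, artist_approves=False))
```

The point of the default-pending state is that nothing reaches the artist’s profile, stats, or recommendations until an explicit approval, which is exactly the behavior the beta promises.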