‘Toy Story 5’ targets unsettling AI toys: ‘I’m always listening’

When the inaugural Toy Story film was released in 1995, Google was not yet in existence and Apple was nearing financial disaster. No one could foresee that more than 30 years later, Pixar would continue to produce Toy Story films, nor could anyone have anticipated that the newest addition to the series would feature Buzz Lightyear and a balding Woody facing off against a malevolent AI tablet named Lilypad.

Indeed, “Toy Story 5” pits classic toys like Mrs. Potato Head, Rex, and Slinky Dog against the looming menace of technology.

The preview reveals Bonnie, the young girl who received Andy’s toys when he departed for college in “Toy Story 3,” playing outdoors with her toys when an unexpected package containing the Lilypad tablet is delivered to her. She becomes utterly captivated by the tablet, failing to even glance away from the screen when her parents inform her that her screen time has ended.

In the “Toy Story 5” trailer, the Lilypad — or Lily — is depicted as a malevolent antagonist. When Jessie questions the tablet regarding Bonnie’s health, Lily appears to ignore her, prompting the cowgirl to insist that the tablet pay attention.

“I’m always listening,” Lily ominously states, mimicking Jessie’s passionate speech in a robotic voice… and subsequently translating it into Spanish.

“Technology has invaded our home,” Jessie tells Woody. “I’m losing Bonnie to this gadget.”

Woody responds, “Toys are for enjoyment, but tech is for everything else.”

Will “Toy Story 5” tug at the emotions of young viewers and encourage them to reconsider the implications of excessive screen time? Probably not. However, at the very least, it provides them with content that isn’t as dull as Cocomelon.

Meta’s virtual universe departs from virtual reality

On Thursday, Meta revealed a significant update for its immersive virtual environment, Horizon Worlds, indicating a departure from the metaverse concept. The tech behemoth announced that the focus for Horizon Worlds will shift to being “almost exclusively mobile” and that it is “explicitly separating” its Quest VR platform from the virtual environment.

Since 2020, Meta’s Reality Labs unit for VR and smart glasses innovation has incurred losses of nearly $80 billion. The adjustments to Horizon Worlds, along with other recent actions, suggest that Meta is thoroughly reassessing its VR goals.

Recently, the company reportedly laid off about 1,500 staff from its Reality Labs division — approximately 10% of the department’s workforce — and closed multiple VR gaming studios. Furthermore, it was reported that the VR fitness application Supernatural, which Meta acquired in 2023, will cease producing new content and will transition into “maintenance mode.”

Initially launched in 2021 as a VR platform, Horizon Worlds was later extended to web and mobile platforms. On Thursday, Meta stated that to “truly change the game and access a considerably larger market, we’re fully committing to mobile.”

By prioritizing mobile, Horizon Worlds aims to position itself in competition with well-known platforms like Roblox and Fortnite.

“We’re in an excellent position to provide synchronous social gaming experiences at scale, thanks to our unique capability to link these games with billions of individuals on the world’s largest social networks,” Samantha Ryan, VP of content at Reality Labs, remarked in the blog entry. “You began to see this strategy take shape in 2025, and now, it’s our primary focus.”

Ryan also emphasized that Meta continues to prioritize VR hardware.

“We possess a strong roadmap for upcoming VR headsets designed for various audience segments as the market evolves and matures,” Ryan noted.

Meta has effectively forsaken its metaverse aspirations in favor of AI. After redirecting its Reality Labs investments from the metaverse, Meta is now concentrating on creating AI wearables and enhancing its proprietary AI models.

In last month’s earnings call, Meta CEO Mark Zuckerberg remarked, “It’s difficult to envision a world in several years where the majority of glasses that individuals wear aren’t AI glasses.”

The executive also mentioned that sales of Meta’s glasses have tripled over the past year, dubbing them “some of the fastest-growing consumer electronics in history.”

Lucid Motors reduces its workforce by 12% in its pursuit of profitability.

Lucid Motors plans to reduce its workforce by 12% to “enhance operational efficiency and allocate our resources effectively as we progress towards profitability,” based on an internal memo acquired by TechCrunch.

The memo states that hourly workers in manufacturing, logistics, and quality assurance are exempt from these layoffs. While the exact number of layoffs remains uncertain, it is expected to be in the hundreds. As of the end of 2024, Lucid Motors reported a global full-time employee count of 6,800.

“Parting ways with colleagues is always challenging,” interim CEO Marc Winterhoff mentioned in the memo. “We appreciate the efforts of those affected by these decisions, and we are offering severance packages, bonuses, ongoing health benefits, and transition assistance to support them during this time.” The company did not immediately respond to a request for comment.

These layoffs occur as the firm is actively increasing its production and distribution of the Gravity SUV. Following initial production and quality challenges with the Gravity, Lucid Motors has regained momentum and successfully doubled its 2025 output compared to the previous year.

Additionally, the company is set to unveil a more budget-friendly mid-size electric vehicle later this year, projected to retail around $50,000. It is also working with Uber and Nuro, a self-driving vehicle firm, to launch a robotaxi service in the San Francisco region this year. Lucid Motors will disclose its financial results for 2025 next week.

“Crucially, today’s measures do not alter our strategy,” Winterhoff detailed in the memo. “Our fundamental priorities remain intact, and our attention is still on commencing production of our Midsize platform. With careful execution, we also aim for further entry into the robotaxi market, continuing advancements in ADAS and software, and boosting sales of Lucid Gravity and Air in both current and new regions.”

Lucid Motors has nearly completed a year without a permanent CEO. Peter Rawlinson, who served as chief executive and chief technology officer, unexpectedly resigned on February 25, 2025. Since his departure, Lucid Motors has experienced considerable changes within its executive team, including the exit of the chief engineer, who filed a lawsuit against the company in December for wrongful dismissal and discrimination. (The claims have been described by Lucid Motors as “absurd.”)

AI’s pledge to independent filmmakers: Quicker, less expensive, more solitary

In rural Hawai’i, a Filipino man treads through the garden of his formative years, his steps rustling the grass. The chorus of chirping birds adds to the tropical symphony as he nears a shrine situated at the foot of a starfruit tree. He stoops to examine a black-and-white image of a woman, her hair styled in a side part typical of the 1950s. 

Abruptly, a strong breeze rattles the branches of the tree, sending the shrine’s items tumbling. The man retreats, stumbles over a root, and strikes his head. Upon regaining consciousness, he finds himself in a shadowy, fog-laden forest, with a woman donning a clay mask looming over him, wielding a sword. 

“Who dares to slumber beneath the sacred tree?” she queries in Ilocano, a language prevalent in Hawaii’s Filipino community, while the sword hovers at his throat. He admits to feeling disoriented and makes an attempt to escape. She pursues him, alternating between sprinting and gliding through the air. He falls once more. She presses on, sword raised high. In an act of desperation, he throws a stone, shattering her clay mask and unveiling half of her face. 

“Mom?” he inquires. 

This marks the beginning of “Murmuray,” a short film crafted by independent director Brad Tangonan. Every aspect of this film resonated with his previous creations, from the richly tactile nature scenes to the dreamlike, softly muted highlights. 

The singular distinction? He produced it using AI. 

Tangonan was among ten filmmakers selected for Google Flow Sessions, a five-week initiative that gave artists access to Google’s suite of AI tools — including Gemini, the image generator Nano Banana Pro, and the video generator Veo — to create short films.

Each film showcased a distinct perspective. Hal Watmough’s “You’ve Been Here Before” mixed hyperrealistic, lifelike imagery with playful cartoon elements to whimsically delve into the significance of a morning routine, whereas Tabitha Swanson’s “The Antidote to Fear is Curiosity” presented a more abstract, philosophical dialogue regarding the interplay between AI and our identities. 

None of the short films, showcased at Soho House New York late last year, felt like mere AI creations. Every independent filmmaker I interviewed stated that in these instances, AI empowered them to narrate stories they wouldn’t have been able to share due to budgetary or temporal constraints. 

“I perceive all these tools, whether it’s a camera or generative AI, as instruments for an artist to convey their vision,” Tangonan shared with me after the screenings. 

This viewpoint that AI is merely another instrument for creators is evidently the message that Google is keen to promote. Google is correct; as video generation technologies advance, AI will increasingly integrate into a creator’s toolkit. 

By 2025, firms like Google, Runway, OpenAI, Kling, Luma AI, and Higgsfield had evolved significantly beyond the uncanny, prompt-driven novelties of the preceding year. The AI video sector, backed by billions in venture capital, is transitioning from prototype phase to post-production.

This age of AI proliferation that promises to “democratize access” to the film industry simultaneously threatens to diminish jobs and creativity, smothering them beneath an avalanche of low-quality work. The existential repercussions have pitted creatives against one another. Those who embrace AI might be regarded as complicit; those who abstain face the risk of obsolescence. 

The dilemma isn’t whether these tools should be part of the toolkit — they are arriving, whether welcomed or not. The critical inquiry should be: What type of filmmaking will endure when the industry prioritizes speed and volume over quality? And what transpires when individual creators wield these tools to craft works of genuine significance?

But is it slop?

Filmmaker Keenan MacWilliam utilized AI to animate scanned depictions of plants and fish in her short film “Mimesis.” Image Credits: Keenan MacWilliam

Numerous criticisms against AI in filmmaking have surfaced — even from some of the industry’s most renowned figures. 

Filmmaker Guillermo del Toro stated last October that he would prefer death over employing generative AI in filmmaking. James Cameron expressed in a recent CBS interview that the notion of generating actors and emotions through prompts is “terrifying,” suggesting that generative AI merely regurgitates a composite average of all that humanity has previously created. 

Werner Herzog remarked that the films he has observed generated by AI “lack soul.” He noted: “The common denominator, and nothing beyond this common denominator, can be identified in these creations.”

Cameron and Herzog contend that AI is seizing creative control from humans and cannot possibly depict their personal lived experiences. 

“It’s simple to harbor anger toward AI as an abstract concept, but it’s more challenging to resent an individual who has crafted something intimate,” Watmough commented to TechCrunch. 

Tangonan, who categorizes “Murmuray” as a “family narrative,” aligns with that viewpoint. 

“AI acts as a facilitator,” Tangonan expressed. “I’m the one making all the creative choices. When viewers come across ‘AI slop’ online, it’s often just the lowest common denominator material. And, yes, if you relinquish control to AI, that’s what you’ll receive. But if you retain your unique voice, perspective, and style, you’ll end up with something distinctive.” 

Utilizing AI in filmmaking goes beyond merely prompting a film into existence. For instance, Tangonan wrote the script for “Murmuray” independently and compiled visual references for his shot list. He then input that material into Nano Banana Pro to create images that aligned with his aesthetic, serving as a basis for video production.  

Filmmaker Keenan MacWilliam also took care to ensure her short film “Mimesis,” a fictional guided meditation, was a “true extension of [her] visual style, rather than a ‘blender’ of other creators’ works.”

MacWilliam scripted and recorded her own narration for the meditation, which was both relaxing and amusing. On-screen, against a dark, watery background, psychedelic visuals of flowers and plants merged, transformed into smoke, morphed into seahorses, and swam away.

All visuals were sourced from MacWilliam’s personal collection of scanned flora and fauna — she takes her scanner wherever she goes. 

“I dedicated significant time to mastering apps that utilized my own dataset, which I then referenced,” MacWilliam informed TechCrunch, noting that she collaborated with her long-time composer and sound designer on the project. “I opted to avoid using AI for anything that I could have filmed or could have asked my collaborators to animate. My objective was to unveil new forms of expression for my established themes and style, not to replace the roles of those I enjoy working with.”

This desire to leverage AI only when collaboration with other humans was unfeasible or when the peculiar nature of AI output complemented the story was a recurring theme among the filmmakers I spoke to at the Google Flow event.

For instance, Sander van Bellegem’s “Melongray” delved into life’s acceleration through mesmerizing visualizations. In one scene, a salamander transforms into a balloon. Although not part of his original script, he was inspired by AI’s capability to push the boundaries of imagination and physics. 

To be [efficient] or not to be?

Contemporary film studio budgets are being strained by escalating filming expenses, the transition to streaming, and corporate consolidation marked by risk aversion. Consequently, substantial investments are reserved for safe revenue streams (consider: yet another Marvel movie), while original mid-budget films have nearly vanished. 

Integrating AI into the equation risks intensifying studios’ scarcity mindset to the extent that they may attempt to eliminate whatever can be eliminated — actors, sets, lighting — disregarding art and quality. However, the efficiencies afforded by AI could potentially lower barriers, making it feasible for film studios to create original works.

Cameron himself acknowledged in his CBS interview that generative AI could reduce the costs of visual effects, potentially paving the way for more imaginative science fiction and fantasy films — ventures that are currently reserved for established intellectual properties like “Avatar.”

The scene in “Murmuray” featuring the woman soaring through the forest would have necessitated costly visual effects or complex rigging on set, both of which were beyond the budget of a short film, according to Tangonan. 

Nevertheless, even filmmakers who recognize the advantages of efficiency comprehend the potential threats to artistic expression. 

“In general, I believe that efficiency does not foster creativity,” MacWilliam remarked.  

Empowered and isolated

Hal Watmough’s short film “You’ve Been Here Before” humorously examines the significance of morning routines. Image Credits: Hal Watmough

For independent filmmakers, having access to such potent tools is both a blessing and a curse. It does “democratize access,” indeed, but it also results in solitary work. The more one can accomplish independently, the lesser the incentive to collaborate. 

“I recognize that I’m a one-man show, and I’ve created all this on my own…but that should never represent how anyone narrates a story or produces a film,” Watmough mentioned to TechCrunch, acknowledging that a friend who is an actor lent his voice for his short. “It ought to be a collaborative endeavor because greater involvement leads to broader accessibility and deeper connection with the audience.”

Directors make creative choices, but they don’t make all of them. The filmmakers I consulted found themselves unexpectedly taking on roles such as set designer, lighting director, and costumer — responsibilities requiring skills they lacked. This was exhausting and distracting, diverting them from the work they genuinely cared about. It was disconcerting to consider how an entire creative ecosystem could be so rapidly disrupted. 

The filmmakers I spoke to voiced their preference not to substitute actors with AI, although some acknowledged that AI-generated performers seem inevitable for smaller studios. The technology for generating actors, their emotions, and movements already exists and continues to improve. AI video startups like Luma AI, which raised a staggering $900 million Series C last November, are developing technologies that allow for an actor’s performance to be captured once, only to use AI to alter the character, attire, and setting. 

“Ideally, I would collaborate with live actors, cinematographers, department heads, and the full crew to create something extraordinary, utilizing AI to complement our efforts where on-set limitations arise, whether due to budget constraints or time,” Tangonan stated. 

If artists don’t define AI, studios will

“Creating any artistic work that incorporates new technology necessitates a level of introspection and a readiness to engage in dialogue surrounding the work,” MacWilliam remarked.

“These are tools,” she added. “How will you wield the tool? Will you maintain ethical standards? Will you raise pertinent questions? Will you be open and share knowledge?”

However, many individuals do not perceive AI tools as neutral. Beyond labor replacement issues, copyright dilemmas persist. AI video generation startup Runway reportedly scraped thousands of hours of YouTube videos and copyrighted media, while entities like Google, OpenAI, and Luma AI have faced scrutiny for potentially using copyrighted films and stock footage without consent. (Some tools, like Moonvalley’s Marey, exclusively utilize openly licensed data.) Furthermore, the environmental implications are alarming — some estimates indicate that generating mere seconds of AI video could consume as much electricity as several hours of streaming. 

Unsurprisingly, numerous filmmakers I consulted mentioned facing stigma for exploring AI use. 

“Whenever I post content online, many of my filmmaking peers display an immediate, reflexive response advocating that we should all adhere to the principle of not using any of these tools,” Tangonan remarked. “I simply disagree with that.” 

If filmmakers shy away from discussing how AI can be employed ethically, the conversation may end up being dictated by those who prioritize efficiency over art, rather than by artists seeking responsible utilization.

“The film industry is struggling because innovation is lacking and costs are spiraling. We require tools like this for it to thrive,” Watmough asserted. “It’s crucial that individuals engage with AI because if we don’t, it will evolve into something unrecognizable, and that lack of sustainability is concerning.”

Correction: An earlier version of this article incorrectly identified Ilocano as a Hawaiian dialect of Filipino. Ilocano is a language originating from the northern Philippines and is commonly spoken amongst Filipino communities in Hawaii.

Watch Unitree’s G1 unleash a kung fu robot frenzy

Chinese robotics leader Unitree took full advantage of the nation’s Lunar New Year celebrations this week to show off the impressive skills of its G1 humanoid robot. A video (top) of the event shows numerous G1 robots participating in what Unitree described as “the world’s first fully autonomous humanoid kung fu performance.” There’s a spot […]

The post Watch Unitree’s G1 unleash a kung fu robot frenzy appeared first on Digital Trends.

OpenAI deepens India push with Pine Labs fintech partnership

As India pitches itself as a global hub for applied artificial intelligence, OpenAI has partnered with Pine Labs to integrate AI-driven reasoning into the fintech firm’s payments stack, automating settlement and invoicing workflows in a move the companies say could help accelerate AI-led commerce in India.

The partnership will see Pine Labs embed OpenAI’s application programming interfaces — software tools that let companies plug AI into their existing systems — within its payments and commerce infrastructure, the companies said on Thursday, all with the aim of enabling AI-assisted settlement, reconciliation, and invoicing workflows.

The deal underscores OpenAI’s broader push to expand its footprint in India, one of its fastest-growing markets, as it looks to move beyond being known primarily as the maker of ChatGPT and embed its technology into education, enterprise, and infrastructure. Earlier this week, OpenAI partnered with leading Indian engineering, medical, and design institutions to bring AI tools into higher education, betting that India’s large developer base and more than a billion internet users will play a central role in the next phase of AI adoption.

Pine Labs is already using AI internally to automate parts of its settlement and reconciliation process, cutting the time it takes to clear daily settlements from hours to minutes, according to chief executive B Amrish Rau. The Noida-based company previously relied on manual checks by dozens of employees to process funds from multiple banks before markets opened each day, a workflow that is now largely handled by AI-driven systems, he said in an interview.

For Pine Labs, the partnership is intended to extend those AI-driven efficiencies beyond internal operations to merchants and corporate clients, starting with business-to-business use cases such as invoice processing, settlements and payments orchestration, Rau told TechCrunch. He noted the company sees faster adoption in B2B workflows, where AI agents can handle large volumes of repetitive financial tasks under predefined rules, before similar capabilities reach consumer-facing payments.

“People talk about retail AI, but the bigger impact of all of this is really efficiency improvement, especially in B2B,” Rau said. “If you look at invoicing and settlement, those are workflows where agents can actually drive the process end to end, and that’s where adoption can happen faster.”
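The rule-bound B2B workflow Rau describes — agents matching settlements to invoices "under predefined rules" — can be made concrete with a small sketch of the reconciliation step. This is an illustrative toy, not Pine Labs’ actual system: the record fields, the merchant-plus-amount matching rule, and the tolerance threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    invoice_id: str
    merchant: str
    amount_cents: int

@dataclass
class Settlement:
    settlement_id: str
    merchant: str
    amount_cents: int

def reconcile(invoices, settlements, tolerance_cents=0):
    """Match each settlement to an open invoice from the same merchant
    whose amount agrees within the tolerance; everything that fails the
    rule is flagged for review instead of being silently dropped."""
    open_invoices = list(invoices)
    matched, unmatched = [], []
    for s in settlements:
        hit = next(
            (inv for inv in open_invoices
             if inv.merchant == s.merchant
             and abs(inv.amount_cents - s.amount_cents) <= tolerance_cents),
            None,
        )
        if hit:
            open_invoices.remove(hit)  # each invoice can be settled once
            matched.append((s.settlement_id, hit.invoice_id))
        else:
            unmatched.append(s.settlement_id)
    return matched, unmatched
```

In a real deployment, the unmatched leftovers are where an AI agent (or a human) would step in to investigate discrepancies; the bulk of the volume clears through the deterministic rule.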

The rollout of more autonomous, agent-led payment workflows will move faster in overseas markets where regulations already allow such transactions, Rau said, while India is likely to see a more gradual adoption focused on AI-assisted commerce rather than fully agent-initiated payments. He said that Pine Labs is already prototyping agent-driven payments in parts of the Middle East and Southeast Asia, even as Indian regulations require tighter controls on how payments are authorized.

For OpenAI, the partnership offers a route deeper into India’s payments and enterprise ecosystem as it looks to move beyond consumer-facing tools and embed its models into high-volume, regulated workflows. Rau said the collaboration is aimed at increasing merchant stickiness and expanding Pine Labs’ role from a payments processor to a broader commerce platform, with higher transaction volumes over time translating into incremental revenue.

Pine Labs says it works with more than 980,000 merchants, 716 consumer brands, and 177 financial institutions, and has processed over 6 billion cumulative transactions valued at over ₹11.4 trillion (about $126 billion), per its prospectus published last year. The fintech operates across 20 countries, including Malaysia, Singapore, Australia, parts of Africa, the UAE, and the U.S., giving the OpenAI partnership reach across both Indian and international markets.

Rau said the partnership does not involve revenue sharing between the two companies, with Pine Labs not taking a cut if its merchants choose to embed OpenAI’s tools. “We’ve kept it completely independent of each other — anything related to payment and payment services, we will get the benefit of it, and anything related to OpenAI revenues will go to them,” he said.

The arrangement, Rau added, is also non-exclusive. He compared it to OpenAI’s partnership with Stripe in the U.S. and said Pine Labs remains open to working with other AI providers.

Rau said Pine Labs is building additional security and compliance layers around AI-driven workflows to ensure that sensitive merchant and consumer transaction data remains protected, as the company integrates AI more deeply into its payments systems. He said the focus is on ensuring transactions remain secure and compliant even as more workflows are automated by AI.

Pine Labs’ interest in AI-driven commerce builds on earlier work through its Setu unit, which has experimented with agent-led bill payment experiences using chatbots including ChatGPT and Anthropic’s Claude. Separately, India also began piloting consumer payments directly through AI chatbots last year.

The new announcement comes as India hosts its AI Impact Summit in New Delhi, where global AI companies including OpenAI, Anthropic, and Google are showcasing their latest capabilities alongside Indian startups demonstrating AI applications aimed at large-scale deployment across sectors such as finance, healthcare, and education.

Have money, will travel: a16z’s hunt for the next European unicorn

Gabriel Vasquez, a partner at Andreessen Horowitz, recently revealed he took nine flights from NYC to Stockholm in one year. While his visits included stops at companies like Lovable — where he posted from its office — the trips were also about finding future Swedish unicorns before they cross the Atlantic.

This all came to light when news emerged that a16z had led a $2.3 million pre-seed round into Dentio, a Swedish startup that uses AI to help dentists’ practices with admin work. While this is a small check for a firm that just announced new funds totaling $15 billion, it confirms that U.S. VCs are actively seeking deal flow outside of the U.S., even without local offices.

Stockholm is a natural stop for a16z, which previously achieved significant returns from backing Skype, cofounded by Swedish entrepreneur Niklas Zennström. Since then, a significant number of fast-growing startups have been created in the Swedish capital, and the VC heavyweight tracked down where many of them were coming from. 

“We spend a lot of time developing a deep understanding of specific markets and knowing where innovation is emerging. In Sweden, that has meant closely tracking ecosystems like SSE Labs — the startup incubator of the Stockholm School of Economics — and the companies coming out of it,” Vasquez told TechCrunch.

Like fintech giant Klarna, legal AI startup Legora, and e-scooter company Voi, Dentio is an alum of SSE Labs — a startup incubator that has produced several successful Swedish companies. Former high school classmates Elias Afrasiabi, Anton Li, and Lukas Sjögren joined the incubator after reconnecting as students at SSE (the Stockholm School of Economics) and KTH (the Royal Institute of Technology), with additional backing from KTH’s Innovation Launch program. They tackled a problem close to home: Li’s mom, a dentist, had told them how admin work detracted from clinical care.

The trio intuited that they could leverage LLMs to help people like her — an idea that they also validated with her and her colleagues. This led them to Dentio’s initial product, a recording tool that uses AI to generate clinical notes. But it’s only a matter of time before AI scribes become a commodity product, and Dentio needs to prove its value to dentists so they aren’t tempted to switch providers when that happens, Afrasiabi said.

Potential competitors include fellow Swedish startup Tandem Health, which raised a $50 million Series A round last year to support clinicians with AI across multiple medical specialties. Dentio, by contrast, focuses exclusively on dentists, but it believes it can still reach the scale VCs expect through international expansion.

“Now we’re a team of seven people, and we think that it’s possible to build a unified way of handling administration all over Europe, and maybe even all over the world,” Afrasiabi said. While Europe’s healthcare systems are fragmented, they share similarities, and Dentio’s assumption is that what works in Sweden could work elsewhere in the EU.

Dentio prominently features its “Made in Sweden” branding and emphasizes that “all relevant data is processed in Sweden and Finland in compliance with Swedish and EU law.” It signals data protection to privacy-conscious European customers. But it also signals potential to VCs — a callback to Sweden’s history of producing breakout companies.

“We went to zero meetups. I reached out to zero investors,” Afrasiabi said. While the team was heads-down building, word spread. “I think it was mostly through referrals and people talking to each other that the news got all the way over to the U.S.,” he said.

This wasn’t happenstance: a16z has eyes around the world in order to spot these companies as early as local funds might, Vasquez said. “In Sweden for example, we partnered with top founders abroad like Fredrik Hjelm, founder of Voi, and Johannes Schildt, founder of Kry, by turning them into scouts and mapping the best local talent.”

For Vasquez, who focuses on AI application investments for a16z, this isn’t just about Sweden, but about “a pattern of great global companies being born abroad and scaling quickly,” from Black Forest Labs in Germany to Manus, the Singapore-based AI startup recently acquired by Meta.

Born and raised in El Salvador, he has also been spending time in São Paulo. “I’m really excited about what’s brewing in Brazil and across Latin America in AI,” he wrote on LinkedIn at the time. “I believe AI is the great equalizer,” he added. “Most people now have access to PhD-level intelligence on a phone, and ultimately, Silicon Valley is a state of mind.”

Correction: This story originally stated that a16z is an investor in Lovable owing to an editing error.

Blackstone backs Neysa in up to $1.2B financing as India pushes to build domestic AI infrastructure

Neysa, an Indian AI infrastructure startup, has secured backing from U.S. private equity firm Blackstone as it scales domestic compute capacity amid India’s push to build homegrown AI capabilities.

Blackstone and co-investors, including Teachers’ Venture Growth, TVS Capital, 360 ONE Assets, and Nexus Venture Partners, have agreed to invest up to $600 million of primary equity in Neysa, giving Blackstone a majority stake, Blackstone and Neysa told TechCrunch. The Mumbai-headquartered startup also plans to raise an additional $600 million in debt financing as it expands GPU capacity, a sharp increase from the $50 million it had raised previously.

The deal comes as demand for AI computing surges globally, creating supply constraints for specialized chips and data center capacity needed to train and run large models. Newer AI-focused infrastructure providers — often referred to as “neo-clouds” — have emerged to bridge that gap by offering dedicated GPU capacity and faster deployment than traditional hyperscalers, particularly for enterprises and AI labs with specific regulatory, latency, or customisation requirements.

Neysa operates in this emerging segment, positioning itself as a provider of customized, GPU-first infrastructure for enterprises, government agencies, and AI developers in India, where demand for local compute is still at an early but rapidly expanding stage.

“A lot of customers want hand-holding, and a lot of them want round-the-clock support with a 15-minute response and a couple-of-hours resolution. And so those are the kinds of things that we provide that some of the hyperscalers don’t,” said Neysa co-founder and CEO Sharad Sanghi.

Neysa co-founder and CEO Sharad Sanghi. Image Credits: Neysa

Ganesh Mani, a senior managing director at Blackstone Private Equity, said his firm estimates that India currently has fewer than 60,000 GPUs deployed — and it expects the figure to scale up nearly 30 times to more than two million in the coming years.

That expansion is being driven by a combination of government demand, enterprises in regulated sectors such as financial services and healthcare that need to keep data local, and AI developers building models within India, Mani told TechCrunch. Global AI labs, many of which count India among their largest user bases, are also increasingly looking to deploy computing capacity closer to users to reduce latency and meet data requirements.

The investment also builds on Blackstone’s broader push into data center and AI infrastructure globally. The firm has previously backed large-scale data center platforms such as QTS and AirTrunk, as well as specialized AI infrastructure providers including CoreWeave in the U.S. and Firmus in Australia.

Neysa develops and operates GPU-based AI infrastructure that enables enterprises, researchers, and public sector clients to train, fine-tune, and deploy AI models locally. The startup currently has about 1,200 GPUs live and plans to sharply scale that capacity, targeting deployments of more than 20,000 GPUs over time as customer demand accelerates.

“We are seeing a demand that we are going to more than triple our capacity next year,” Sanghi said. “Some of the conversations we are having are at a fairly advanced stage; if they go through, then we could see it sooner rather than later. We could see in the next nine months.”

Sanghi told TechCrunch that the bulk of the new capital will be used to deploy large-scale GPU clusters, including compute, networking and storage, while a smaller portion will go toward research and development and building out Neysa’s software platforms for orchestration, observability, and security.

Neysa aims to more than triple its revenue next year as demand for AI workloads accelerates, with ambitions to expand beyond India over time, Sanghi said. Founded in 2023, the startup employs 110 people across offices in Mumbai, Bengaluru, and Chennai.