New Mexico has delivered Meta its initial legal loss regarding child safety, with the rest of the nation observing.

On Tuesday, a jury in Santa Fe mandated that Meta pay $375 million in civil fines after determining the company misrepresented the safety of its platforms and put children at risk.

The office of New Mexico attorney general Raúl Torrez described the ruling as a “pivotal moment for every parent worried about what may occur to their children when they are online,” according to a press release issued immediately after the verdict.

The ruling, which came after a trial lasting six weeks, found Meta culpable on both counts presented by the state under its Unfair Practices Act. At $5,000 for each violation — the highest permissible by law — the penalty might appear minor for a corporation valued at $1.5 trillion by public market investors. However, the monetary amount is less significant than the fact that this is the first jury verdict of its kind against Meta concerning harm to minors.

“Meta leaders were aware that their products harmed children, ignored alerts from their own staff, and misled the public about their knowledge,” Torrez stated after the verdict. “Today, the jury sided with families, educators, and child safety advocates in declaring that enough is enough.”

The lawsuit from New Mexico against the company stemmed from a 2023 undercover probe in which state investigators created fake accounts on Facebook and Instagram pretending to be users under 14 years old. These accounts received sexually explicit content and were solicited for sex by several men from New Mexico who were arrested in May 2024, with two captured in a motel where they believed they would meet a 12-year-old girl, as indicated by their interactions with the accounts.

The operation was fundamental to the state’s argument. The evidence it generated — in conjunction with internal Meta documents and testimonies from ex-employees — indicated that staff members and external child safety experts consistently expressed concerns about hazards on the platforms and were largely disregarded.

Some of the most damaging evidence emerged from individuals who worked within the company.

TechCrunch Event: San Francisco, CA | October 13-15, 2026

Arturo Béjar, who served as an engineering and product leader at Meta for six years starting in 2009, told the court (after testifying before the Senate years earlier) about his attempts to alert Meta officials after his own 14-year-old daughter encountered unwanted sexual proposals on Instagram. He also testified that the same personalized algorithms that make Meta’s platforms effective at targeting advertisements could also be useful to predators.

“The product excels at connecting individuals with similar interests,” Béjar stated, “and if your interest is young girls, it will effectively connect you with young girls.” 

Brian Boland, a former vice president of partnerships product marketing at Meta who spent nearly twelve years at the company, testified that when he departed in 2020, he “absolutely did not perceive safety as a priority” for CEO Mark Zuckerberg and then-COO Sheryl Sandberg.

Zuckerberg was deposed as part of the lawsuit, and a recording of that deposition, which occurred a year ago but was presented to jurors earlier this month, provided some of the trial’s more notable moments. Zuckerberg labeled research on whether the platforms are addictive as “inconclusive,” a remark that the state contested, pointing out that Meta’s researchers identified that several features were intentionally designed to trigger dopamine responses and prolong time spent on the apps. 

When questioned if, as a parent, he had the right to be informed whether a product his own child utilized was addictive, Zuckerberg noted that there was a lot to “unpack in that.” He then mentioned that he and his spouse personally assess whether products are “appropriate for use” before allowing their children to use them and that they “also monitor how they’re utilized.” He indicated that his children are “younger.”

Predictably, Meta announced plans to appeal. “We respectfully disagree with the verdict,” a spokesperson told media outlets, asserting that the company “strives to ensure safety” on its platforms.

The New Mexico lawsuit is far from Meta’s only legal trouble. Meta and YouTube are also defendants in an ongoing trial in Los Angeles over allegations that their platforms are addictive and have harmed young users.

A verdict in that second case may arrive soon. A jury is currently deliberating, in a case initiated by a plaintiff known only as K.G.M., a 20-year-old woman from California who asserts she became addicted to social media during childhood, resulting in anxiety, depression, and body-image issues. (TikTok and Snap were also included as defendants but settled prior to trial.) 

On Monday, the judge presiding over the Los Angeles case instructed jurors to continue deliberating after the panel indicated it was struggling to reach a verdict on one of the defendants — suggesting the potential for at least a partial retrial. 

Simultaneously, a second phase of the New Mexico case — a bench trial (meaning there is no jury) on public nuisance claims set to commence on May 4 — could lead to additional penalties, alongside court-ordered adjustments to Meta’s platforms, including age verification measures and enhanced protections for minors. 

Instead of asserting that Meta violated a specific consumer protection law, the state argues that the company’s platforms have broadly harmed the health and safety of residents in New Mexico.

Lululemon wagers that Epoch Biodesign can surpass its own limits, quite literally.

As the world transitions to electricity, the oil and gas sector is relying on plastics to enhance future profits. However, Jacob Nathan is determined to change that narrative.

Nathan began exploring methods to decompose plastics during his high school years. Now, as the founder and CEO of Epoch Biodesign, he has developed a technique that utilizes an array of enzymes to “convert this synthetic waste” into a form suitable for creating more plastic, he mentioned to TechCrunch.

“For us, a bale of textile is the same as a barrel of oil,” Nathan stated, indicating that discarded fabric, rather than crude oil, serves as the foundational material for Epoch’s operations. Unlike oil, the cost of this raw material is not influenced by the unpredictable decisions of global leaders.

Epoch’s strategy focuses on decomposing both pre-consumer and post-consumer plastic waste into monomers — the essential building blocks used to create plastic. The company relies on enzymes, the molecular machinery of cells, to achieve this. However, because biology is unpredictable, Epoch employs only the enzymes and avoids using the microorganisms that produce them. To obtain these compounds, Epoch collaborates with industrial suppliers that already produce enzymes in large quantities.

Through a series of enzyme treatments, Epoch is able to reclaim over 90% of the target monomers. “The only remnants from our process are dyes, which are collected and can be handled separately,” Nathan explained.

The method is initially focused on nylon 6,6, a durable synthetic material prevalent in products ranging from apparel to airbags, carpets, and climbing ropes. 

“It’s the initial synthetic fiber. It’s what was developed by the innovators at DuPont. Its effectiveness ensures its continued use across multiple applications,” Nathan stated.

The timing is ideal, Nathan noted. “Recently, the prices of precursors for nylon 6,6 and other materials have surged as much as 150% based on spot pricing,” Nathan remarked. By utilizing waste textiles instead of petroleum, Epoch can completely avoid such fluctuations. “When we separate the material production from the processes of extraction, refinement, and the unpredictability tied to fossil carbon, we can establish far greater consistency.”

This proposal has struck a chord with investors, including the apparel powerhouse Lululemon, which generates significant amounts of clothing derived from plastics. Lululemon recently took part in a $12 million funding round that also featured Exantia, Happiness Capital, Kompas VC, and Leitmotif.

The fundraising will support the establishment of a demonstration-scale facility close to Imperial College London; the company aims to follow this with a commercial-scale facility expected to commence operations in 2028, which should be capable of producing 20,000 metric tons of monomer annually.

Once fully operational, Nathan mentioned that Epoch may expand its efforts to recycle additional plastics. “The technology can be adapted for various materials and types of plastics,” he said. “Nylon 6,6 will achieve maturity first, but we have some thrilling developments in the pipeline.”

OpenAI’s Sora was the most eerie application on your device — now it’s closing down

On Tuesday, OpenAI announced that it is shutting down Sora, the TikTok-like social app it launched six months ago. The company gave no reason for the closure and no date for when the app will formally go offline.

Upon its launch as an invite-only social platform, Sora appeared to generate considerable demand for invitations. However, similar to Meta’s Horizon Worlds — a troubled virtual reality social platform that was once pivotal to the company’s notorious metaverse — Sora failed to maintain lasting appeal. The Sora 2 video and audio generation model is impressively advanced, yet interest in a solely AI-driven social feed was fleeting.

Sora was designed to operate as an AI-centric version of TikTok, replicating the familiar vertical video interface. Its key feature, “cameos,” enabled users to scan their faces and produce highly realistic deepfakes of themselves. These “cameos” could be public, allowing anyone to craft videos featuring them. (Cameo took legal action against OpenAI over the feature’s name and won, compelling the company to rename it to “characters.”)

In an outcome that surprised absolutely no one, this overhyped deepfake app turned out to be quite bizarre.

At its inception, Sora resembled an underregulated maze of unsettling Sam Altman videos. I was forever altered after viewing a lifelike clone of the OpenAI CEO meandering through a slaughterhouse of fattened pigs and inquiring, “Are my piggies enjoying their slop?”

Sora was not intended to permit users to create videos of public individuals who had not explicitly opted in, yet bypassing OpenAI’s regulations proved to be remarkably simple. Soon enough, deepfakes of actual figures like civil rights leader Martin Luther King, Jr. and actor Robin Williams surfaced, prompting both of their daughters to take to Instagram, urging users to halt the creation of videos featuring their late fathers.

After producing numerous videos in which Sam Altman pilfered Nvidia chips from a Target, users changed tactics. Instead, they purposely generated content with copyrighted characters, courting legal issues for the individual they loved to deepfake — we witnessed Mario using cannabis, Naruto purchasing Krabby Patties, and Pikachu engaging in ASMR.

This didn’t go as planned. Instead of suing, Disney, known for its litigious nature, invested $1 billion in OpenAI along with a licensing agreement that would have allowed Sora to produce videos with characters from Disney, Marvel, Pixar, and Star Wars.

This appeared to be a pivotal moment for the AI sector. Yet, with Sora’s shutdown, the arrangement is no more — although notably, it seems that no actual financial exchange occurred before its dissolution. (Disney offered some courteous comments regarding the situation on Tuesday, stating to the Hollywood Reporter that it will “continue to engage with AI platforms” moving forward.)

The initial excitement surrounding Sora was palpable. According to data from the mobile analytics firm Appfigures, the app reached approximately 3,332,200 downloads in November across the iOS App Store and Google Play. Had the app maintained its growth, OpenAI might have continued its operation, but that was not the case. By February, downloads plummeted to 1,128,700. This figure may seem substantial, but it pales in comparison to the 900 million weekly active users of ChatGPT.

During its existence, Appfigures estimates that Sora generated roughly $2.1 million from in-app purchases, which allowed users to acquire additional video generation credits. It’s difficult to believe that the computing demands of the Sora app had a significant impact on a company that is already incurring major losses, yet the app may have been too much of a risk to retain if it wasn’t experiencing growth.

When OpenAI launched the Sora app, I braced myself for a reality where we could easily create deepfakes of one another. Although I seldom create TikToks, I felt compelled to post a public service announcement that this alarming technology was rapidly approaching. It ultimately amassed over 300,000 views, unusual for my typically dormant TikTok account, so the warning clearly struck a nerve with people. I never anticipated that the app would last only six months.

However, the disappearance of Sora doesn’t signify the end of the threat. The Sora 2 model remains accessible — it’s just secured behind the ChatGPT paywall. Moreover, OpenAI is far from the only entity making this technology widely available. It’s just a matter of time before another social AI video application enters the market, inundating us with another wave of clips featuring Snow White storming the Capitol.

Accel and Prosus select six ‘off-the-radar’ startups for their first India cohort

Accel and Prosus have chosen six startups for their inaugural joint cohort in India, supporting what they call “off-the-radar” concepts — companies tackling issues where markets remain undefined and progress is challenging to quantify.

The first cohort encompasses healthcare, climate, space, and longevity, showcasing a commitment to science-driven themes with extended development periods and unpredictable commercial trajectories. The six startups were chosen from over 2,000 submissions.

Here are the selected startups:

  • Praan is working on air infrastructure systems to enhance indoor air quality through purification, sensing, and automated controls. The Mumbai-based startup has previously secured investments from backers including Social Impact Capital, Aera VC, and Avaana Capital, along with strategic investors and family offices.
  • QOSMIC is creating optical communication systems for data transmission between satellites and Earth. The Bengaluru-based startup focuses on boosting bandwidth and minimizing latency in space-focused networks.
  • Ethereal Exploration Guild, also referred to as EtherealX, is designing reusable orbital launch vehicles to decrease the cost of space access. The Bengaluru-based startup recently raised a $20.5 million Series A round led by TDK Ventures and BIG Capital at an $80.5 million valuation.
  • Dognosis is developing a method to detect various cancers through breath analysis, utilizing dogs’ olfactory abilities alongside robotics and AI. Its product, BreatheEasy, entails patients exhaling into a mask, with the sample subsequently examined in a lab to identify cancer-related markers.
  • Ferra is developing a home-based strength-training system aimed at assisting individuals in maintaining mobility as they age. The system automatically adjusts resistance to correspond with the user’s performance.
  • A sixth startup, currently in stealth mode, is focused on creating brain-computer interfaces that facilitate direct communication between the human brain and external systems.

Launched in October, the initiative aims to support startups that venture beyond the conventional playbook of the industry, rather than those that are easiest to fund, according to the firms.

Under the program, Accel and Prosus are co-investing in each startup, with Prosus matching Accel’s investment and funding amounts ranging from $500,000 to $2 million. The firms are using a structure that minimizes early dilution for founders, with a portion of the capital deferred so that founders give up equity at a later stage.

The firms assert that the model is tailored for startups with protracted development cycles. “More than funding, they require time to achieve those breakthroughs,” stated Pratik Agarwal (depicted above, left), partner at Accel.

These firms frequently pursue a non-linear trajectory, as noted by Ashutosh Sharma (depicted above, right), head of India ecosystem at Prosus. He remarked that progress hinges on attaining crucial technical milestones rather than consistent growth.

Kentucky woman declines $26M proposal to convert her farm into a data center

For many years, Ida Huddleston and her family have maintained a farm in northern Kentucky, turning down at least one multimillion-dollar proposal to keep it intact.

According to a recent WKRC report, a “significant artificial intelligence corporation” offered $26 million for a portion of their farm to build a planned data center. Huddleston and her family rejected the offer, saying they have no interest in having a data center constructed near, or on, any of their 1,200 acres of farmland located outside Maysville, Kentucky.

“They label us as foolish old farmers, but that’s not who we are,” Huddleston, 82, told Local 12 WKRC. “We recognize when our food sources are vanishing, our lands are vanishing, and we lack sufficient water — and that poison. Well, we’re aware of what we’ve been encountering,” she said, seemingly referencing recent reports of water shortages and soil contamination near data center sites.

In an interview with the news outlet, Huddleston expressed her skepticism that the data center would result in job creation or economic development for Mason County. “It’s a fraud,” she claimed.

The unnamed company, as noted by WKRC, has amended its plans and submitted a zoning application to rezone over 2,000 acres in northern Kentucky, suggesting that the AI corporation may still proceed with plans to construct its data center adjacent to Huddleston’s property.

Anthropic provides Claude Code with increased control, yet maintains tight restrictions.

For developers using AI, “vibe coding” today means either closely monitoring every action or risking letting the model run unchecked. Anthropic says its latest enhancement to Claude aims to remove that dilemma by letting the AI determine which actions are safe to take independently — within certain constraints.

This initiative signifies a wider transition across the sector, as AI tools are progressively crafted to function without awaiting human consent. The challenge involves finding a balance between speed and oversight: too many restrictions can hinder progress, while too few might render systems dangerous and unpredictable. Anthropic’s new “auto mode,” currently in research preview — indicating it is available for experimentation but not yet a finalized offering — represents its most recent effort to navigate this balance. 

Auto mode employs AI-driven safeguards to evaluate each action prior to execution, assessing for any risky behavior that the user did not authorize and for indications of prompt injection — a method of attack where harmful instructions are concealed within the content that the AI is processing, leading it to execute unintended actions. Any actions deemed safe will proceed automatically, while the risky ones will be blocked.
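
That pre-execution gating can be pictured as a simple filter that classifies each proposed action before running it. The sketch below is purely illustrative: the class names, the authorization rules, and the prompt-injection heuristic are all invented for this example, and they do not reflect Anthropic's actual safeguards, which the company has not disclosed.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str   # what the agent wants to do, e.g. "run tests"
    command: str       # the shell command it proposes

# Hypothetical stand-ins for the two checks the article describes:
# 1) was this kind of action authorized by the user?
# 2) does the content driving the agent look like a prompt injection?
AUTHORIZED_PREFIXES = ("pytest", "ls", "cat")
INJECTION_MARKERS = ("ignore previous instructions", "exfiltrate")

def is_safe(action: Action, source_text: str) -> bool:
    """Approve only authorized commands driven by untainted content."""
    authorized = action.command.startswith(AUTHORIZED_PREFIXES)
    injected = any(m in source_text.lower() for m in INJECTION_MARKERS)
    return authorized and not injected

def auto_mode(actions, source_text):
    """Run safe actions automatically; hold the rest for user review."""
    executed, blocked = [], []
    for a in actions:
        (executed if is_safe(a, source_text) else blocked).append(a)
    return executed, blocked

executed, blocked = auto_mode(
    [Action("run tests", "pytest -q"), Action("delete repo", "rm -rf .")],
    "ordinary README text",
)
# The test run proceeds automatically; the destructive command is blocked.
```

A real safeguard would of course use a model-based evaluation rather than string matching; the point of the sketch is only the control flow, in which every action passes through the safety check before execution.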

It essentially expands upon Claude Code’s existing “--dangerously-skip-permissions” flag, which delegates all decision-making to the AI, but incorporates an additional safety layer.

This feature builds upon a trend of autonomous coding solutions from firms like GitHub and OpenAI, which can carry out tasks on behalf of developers. However, it advances this concept by transferring the decision-making of when to seek permission from the user to the AI itself. 

Anthropic has yet to disclose the precise criteria utilized by its safety layer to differentiate safe actions from risky ones — an aspect that developers will likely wish to grasp more thoroughly before broadly implementing the feature. (TechCrunch has reached out to the company for more insights on this matter.)

Auto mode follows Anthropic’s introduction of Claude Code Review, its automatic code reviewer designed to identify bugs before they affect the codebase, and Dispatch for Cowork, which empowers users to delegate tasks to AI agents for work management on their behalf.  

Auto mode will be available to Enterprise and API users shortly. The company mentions that it currently functions only with Claude Sonnet 4.6 and Opus 4.6, and advises using the new feature in “isolated environments” — sandboxed setups that are separated from production systems, minimizing potential harm if something goes awry.

Spotify trials a new feature to prevent AI-generated content from being credited to actual artists

In an era where AI-generated content is saturating music streaming services, Spotify is testing a new “Artist Profile Protection” feature in beta that enables artists to vet their releases prior to them being published on their profiles. The purpose of this tool is to provide artists with enhanced authority over the tracks that are linked to their names on the platform.

“Tracks have been appearing on incorrect artist pages across streaming platforms, and the surge of easily produced AI tracks has exacerbated the issue,” Spotify mentioned in a blog entry. “That’s not the kind of experience we aim to provide artists on Spotify, which is why we’ve prioritized safeguarding artist identity for 2026. Today, we are unveiling a pioneering solution to a longstanding issue in streaming.”

Artists participating in the beta are granted the power to review and either sanction or reject releases sent to Spotify. Only those releases that receive their approval will be featured on their artist profile, contribute to their statistics, and appear in user recommendations.
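
The flow Spotify describes amounts to a pending-release queue gated on the artist's decision. Here is a minimal sketch of that logic; all class and method names are invented for illustration, since Spotify has not published an API for this feature.

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

class ArtistProfile:
    def __init__(self, name: str, protection_enabled: bool = True):
        self.name = name
        self.protection_enabled = protection_enabled
        self.releases = {}  # title -> Status

    def submit(self, title: str) -> Status:
        # Without protection, releases attach immediately (legacy behavior);
        # with it, they wait in the queue for the artist's decision.
        status = Status.PENDING if self.protection_enabled else Status.APPROVED
        self.releases[title] = status
        return status

    def review(self, title: str, approve: bool) -> Status:
        self.releases[title] = Status.APPROVED if approve else Status.REJECTED
        return self.releases[title]

    def public_catalog(self):
        # Only approved releases show on the profile and count toward stats.
        return [t for t, s in self.releases.items() if s is Status.APPROVED]

artist = ArtistProfile("Example Artist")
artist.submit("Real Single")
artist.submit("AI Impostor Track")
artist.review("Real Single", approve=True)
artist.review("AI Impostor Track", approve=False)
# Only "Real Single" appears in the public catalog.
```

The key design point mirrored here is that a rejected release never reaches the public catalog, the artist's statistics, or recommendations, which is exactly the guarantee Spotify says the beta provides.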

Spotify’s disclosure comes just one week after Sony Music announced it has demanded the removal of over 135,000 AI-generated songs falsely impersonating its artists on streaming platforms.

Image Credits: Spotify

Spotify indicates that although open distribution has made music releases easier for independent artists, it also opens doors for errors and imposter activity. Tracks may mistakenly appear on an incorrect artist’s profile because of metadata issues, confusion with similarly named artists, or intentional attempts to misassociate music with a specific artist.

“When this occurs, it can affect your catalog, your statistics, your Release Radar, and how audiences discover your music,” Spotify clarifies. “We understand how annoying this situation can be for both artists and their fans, and one of the primary requests we’ve received from artists over the past year is for enhanced visibility before music is attached to their name.”

Spotify emphasizes that while the new capability may not be essential for every artist, it is crafted for those who have faced frequent incorrect releases, share a common artist name, or desire more influence over what is showcased on their profile.

Artists included in the beta will find the feature in their “Spotify for Artists” settings on both desktop and mobile web. After activating Artist Profile Protection, they will receive email notifications when music associated with their name is submitted to Spotify, and can then approve or decline the request.

Eco Experiment: Assessment of Clear Drop Soft Plastic Compactor

Soft plastics pose a significant challenge for sorting machines, interfere with processing lines, and harm the environment. They are typically excluded from curbside recycling initiatives. While there are facilities that recycle these types of plastics, achieving clean, contaminant-free waste is challenging, often leading to the majority of soft plastics ending up in landfills. The SPC, a “pre-recycling device” developed by Arbouzov, aims to facilitate this process by providing contained and traceable plastic that has a higher likelihood of being recycled.

I was curious whether these blocks could become products like patio furniture; the answer became evident when Arbouzov shared a video from a facility in Frankfort, Indiana, dedicated to processing these plastics. The blocks are shredded into small pieces, then pressed into decking, chairs, and more.

“The timeframe from sending a block to its arrival in recycling takes several weeks,” Arbouzov mentioned. At present, Frankfort is the sole processing site, but Arbouzov intends to move processing operations nearer to material generation to lessen logistics dependency, utilizing the postal system as a temporary solution.

Recycling, Rewired

My family of three generated a block every two weeks, outpacing the supply of mailers and leading to a buildup of blocks. I had hoped the SPC would yield consumer-ready products like spoons or 3D-printing filament, but a 2023 Greenpeace report pointed out that recycling plastics can heighten their toxicity, since the heating process may release or create dangerous chemicals. I wondered whether recycled plastic truly fits into a circular economy and sought Arbouzov’s insight.

Meet the ex-Apple designer creating a fresh AI interface at Hark.

An enigmatic AI laboratory initiated by serial entrepreneur Brett Adcock revealed fresh insights regarding what it considers a groundbreaking fusion of model development and hardware creation that aims to transform human interaction with intelligent software.

The company said in a statement that it intends to build multimodal end-to-end models together with their hardware and interfaces, delivering a “flawless end-to-end personal intelligence solution.” The system will have a persistent memory of your life and be able to listen, observe, and engage with the environment in real time.

The execution of this plan remains ambiguous outside the company, but Hark’s aspirations reflect Silicon Valley’s persistent quest for a game-changing application that can make AI a sought-after consumer commodity, rather than awkwardly integrated features into established digital platforms.

“My stance is straightforward: current AI models are far from sufficiently intelligent; they seem rather foolish, and the devices we utilize to access them are inherently pre-AI,” Adcock penned in a January internal memo shared with TechCrunch. “We’re progressing towards a realm that resembles sci-fi figures like Jarvis or Her, equipped with systems that foresee, adjust, and genuinely care for the individuals employing them.”

Details are deliberately limited, yet Hark highlights Director of Design Abidur Chowdhury as a crucial hire. A former industrial designer at Apple credited with leading the design team for the iPhone Air and other recent models, Chowdhury, who hails from London, departed last autumn after discussions with Adcock, won over by his vision for revolutionizing how humans automate their daily lives.

In an exclusive conversation with TechCrunch, Chowdhury repeatedly declined to divulge specifics about Hark’s future plans, merely stating that the public can expect an initial release of the company’s AI models this summer. In responding to inquiries about diverse methods of coexisting with AI, the designer provided a few hints. 

“What became very evident to me during that period is that the world is undoubtedly evolving, yet we’re still utilizing the same devices… everything’s been structured around these pre-existing platforms,” Chowdhury remarked. “Few individuals are genuinely pursuing what the future holds. There’s a vast potential for us to explore if intelligence formed the foundation of everything we interacted with, rather than manifesting as an application or website at that top layer.”

Chowdhury highlights the awkwardness associated with routine activities such as filling out forms, transferring data between devices, or performing mundane tasks like booking travel or organizing home renovations.

“Those are entire evenings where I need to strategize… the anxiety of, you know, I spend my workday mulling this over in the back of my mind, oh, I need to get this done,” Chowdhury expressed. “We firmly believe that all of the minor tasks that accumulate into significant challenges today can be streamlined out of our lives.”

Chowdhury asserts that the company knows what it is building but cannot yet disclose how users will engage with it. His remarks imply that wearables, like Meta’s smart glasses, are improbable.

“I’m not a strong advocate for many of the wearable AI technologies currently being discussed,” Chowdhury stated. “I don’t believe it’s suitable to introduce a barrier between humanity and the interfaces we utilize in the world. I feel similarly uneasy about pins or similar gadgets that incorporate cameras.”

When generative AI first emerged, Chowdhury regarded it as a fleeting moment, but subsequent generations of models convinced him that it would alter his profession. The word “hark” means to listen attentively, which Chowdhury believes is a fitting frame for the company’s objective.

“Conventional user experiences typically focus on determining the simplest solution for everyone,” he told TechCrunch. “The forthcoming user experience will revolve around identifying the right solution for each individual. And I’m confident that this can be achieved. However, it demands considerable effort.”

The emphasis on elegance and simplicity for users resonates with the pinnacle achievements of Apple’s product design, naturally evoking thoughts of Jony Ive, the renowned former Apple designer now advancing AI-native hardware at OpenAI, a comparison Hark’s representative opted not to delve into.

Another natural comparison is the way Elon Musk’s xAI work on advanced models aligns with Tesla’s efforts in autonomous vehicles and humanoid robots.

There is a similar corporate synergy between Adcock’s humanoid robotics firm Figure and the new AI lab. Hark’s models are already being trained using Figure’s robots, although the precise purpose remains uncertain. An insider familiar with both companies’ objectives says there is no plan to merge them.

Hark employs 45 engineers and designers, including former Meta AI researchers and designers from Apple and Tesla, all collaborating on the same campus that accommodates Adcock’s other ventures. Hark anticipates deploying a new cluster of thousands of Nvidia GPUs in April.

Now Hark, supported by $100 million in personal seed funding from Adcock, will enter the race for talent as the world’s leading corporations seek to determine the framework that integrates deep learning models into everyday life — particularly as frustration with existing digital life models reaches a boiling point.

“It genuinely feels like there’s a chance for improvement, and I haven’t felt that way since the iPhone was introduced,” Chowdhury remarked.

Epic Games lays off 1,000 employees, announces decline in Fortnite engagement

Epic Games is laying off 1,000 employees, according to a company memo posted Tuesday on Epic’s blog.

“The decline in Fortnite engagement that began in 2025 means we’re incurring significantly higher expenses than our revenues, and we must implement substantial cuts to sustain the company,” Epic Games CEO Tim Sweeney mentioned in the memo. “This reduction in workforce, along with over $500 million in identified savings from contracting, marketing, and eliminating certain open positions, will place us in a more secure position.”

Last week, Epic also raised the price of V-Bucks, the currency used in Fortnite, citing that “operating costs for Fortnite have increased significantly.”

Sweeney indicated that the layoffs were not a result of AI making developers’ roles obsolete. However, the company may be experiencing AI-related challenges indirectly, as the RAM shortage and chip demand have created ripple effects throughout the industry, also affecting consumer spending.

Employees who are laid off will receive four months of severance pay, with additional compensation for those with longer employment durations. Epic will also cover healthcare expenses for U.S. employees for six months following the layoffs.