
Meta lost a lawsuit brought by the state of New Mexico last week, marking the first time a court has held the company liable for endangering child safety. That ruling was significant on its own — but the next day, Meta lost again when a jury in Los Angeles found that the company deliberately designed its apps to be addictive to children and teens, harming the mental health of the plaintiff, a 20-year-old identified as K.G.M.
These verdicts open the door to a wave of lawsuits over Meta's deliberate targeting of teen users despite its knowledge that its apps can harm adolescents' mental health. Many cases like K.G.M.'s are already pending, and 40 state attorneys general have filed lawsuits against Meta similar to New Mexico's.
While social media companies are generally shielded by law from liability for user-generated content, these cases hinged not on what people post to the platforms but on the design features themselves, such as infinite scrolling and constant notifications.
“They utilized the model that was leveraged against the tobacco industry many years ago, concentrating not on the content but on these addictive attributes — how the platform is constructed, and concerns regarding the design, which differs from content, where you encounter this First Amendment debate,” Allison Fitzpatrick, a digital media attorney and partner at Davis+Gilbert, told TechCrunch. “It appeared to be, at least in these two instances, an effective argument.”
After a six-week trial, the jury in the New Mexico case found Meta liable for violating the state's Unfair Practices Act, ordering the company to pay the maximum penalty of $5,000 per violation, for a total fine of $375 million. In the Los Angeles trial, the jury held Meta 70% responsible and YouTube 30% responsible for the harm to plaintiff K.G.M., with the companies owing $6 million in damages combined. (Snap and TikTok settled before trial.)
“That amount is trivial for the Metas of the world,” Fitzpatrick said. “However, when you multiply that $6 million by the multitude of cases against them, it results in a staggering figure.”
“We respectfully contest these rulings and will seek to appeal,” a Meta spokesperson told TechCrunch. “Simplifying a complex issue like adolescent mental health to a single cause risks neglecting the many broader challenges facing youths today and disregards the reality that numerous teens depend on digital communities for connection and belonging.”
During the trials, newly unsealed internal Meta documents showed a pattern of inaction on the company's known harms to minors, as well as a concerted effort to increase the amount of time teens spent on its apps, even during school hours or through “finstas,” the “fake Instagram” accounts teens create specifically to avoid detection by parents or teachers.
One document showed findings from a 2019 study in which Meta conducted 24 in-person, individual interviews with people whose use of the product had been classified as problematic, a designation that applied to an estimated 12.5% of users.
“The most reliable external research suggests that Facebook’s influence on users’ well-being is negative,” the report states.
Several documents referenced remarks made by Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri regarding their focus on engaging with teenage users. Zuckerberg even remarked that for Facebook Live to be successful among teens, his “guess is we’ll need to be very good at not notifying parents/teachers.”
In other documents, Meta staff casually discussed the company’s objectives for enhancing teen user retention.
“We discovered that one of the things we must optimize for is peeking at your phone in the middle of Chemistry :),” one employee wrote in an email to Meta CPO Chris Cox.
“No one wakes up intending to maximize the number of times they check Instagram that day,” Meta VP of Product Max Eulenstein wrote in an internal email in January 2021. “Yet, that’s precisely what our product teams are striving to achieve.”
A Meta spokesperson told TechCrunch that many of the newly disclosed documents are nearly a decade old, but that the company listens to feedback from parents, experts, and law enforcement on how the platform can be improved.
“We do not set goals related to teen time spent today,” the spokesperson stated, referencing Instagram Teen Accounts, introduced in 2024, which incorporate built-in safety measures for young users. These safeguards comprise defaulting accounts to private and permitting only those they follow to tag or mention them in posts. Instagram will also send reminders after 60 minutes of usage encouraging teens to exit the app, an adjustment that can only be made for users under 16 with parental consent.
For Kelly Stonelake, a former director of product marketing at Meta who worked at the company from 2009 to 2024, these revelations come as no surprise. (Stonelake is currently suing Meta over alleged gender-based discrimination and harassment.)
“The substantial amount of unsealed evidence truly illustrates what I experienced firsthand,” she told TechCrunch.
At Meta, Stonelake led “go-to-market” strategies for the VR social app Horizon Worlds as it launched for teenagers. She alleges that she raised alarms about ineffective content moderation tools in the metaverse, but that her concerns were dismissed.
The U.S. government has shown keen interest in children's online safety, particularly since Meta whistleblower Frances Haugen released damning internal documents in 2021 showing that Meta knew about Instagram's harmful effects on teenage girls.
While Congress has proposed several bills aimed at enhancing children’s online safety, many privacy advocates argue that these initiatives may do more to surveil adults and restrict speech than to protect minors.
“There is no scenario where enacting censorship or ‘age verification’ legislation, under the pretense of ensuring children’s safety, won’t lead to widespread online censorship of content and speech that is deemed undesirable,” stated Fight for the Future director Evan Greer.
Stonelake previously lobbied on Capitol Hill for the Kids Online Safety Act, which has gained the most traction among these legislative efforts, receiving backing from corporations such as Microsoft, Snap, X, and Apple. However, as the bill has progressed and evolved, she has grown increasingly critical of it.
“I am advocating for a ‘no’ vote on the current iteration,” she said, pointing to the bill's preemption clauses, which would override state regulations on tech companies. “There is language in the latest draft that would obstruct access to the courts for school districts, grieving families, and states — and that’s outrageous.”
Such language could block the very kind of case New Mexico brought against Meta.
“We need stakeholders to engage in discussions around solutions, rather than what’s happening now, which is merely telling a different narrative to both sides of the political aisle to incite them and instill fear,” Stonelake asserted. “The real solution will require complexity and nuance and will need to consider multiple priorities.”

