Juries have said Instagram and YouTube are defective – so what’s next?


Is social media not just bad, but… illegally bad? Should tech companies pay for making it that way? According to two American juries – and no shortage of outside commentators – the answer to both questions is “yes.”

Earlier this week, two juries – one in New Mexico, one in Los Angeles – found Meta liable for a combined total of hundreds of millions of dollars in damages for harming minors. YouTube was also found liable in the Los Angeles case, and both companies are appealing their losses. On one hand, the verdicts were surprising: Meta and Google operate speech-carrying platforms that are typically protected in several ways by Section 230 and the First Amendment, and it is unusual for suits to overcome those obstacles. On the other hand, they seem almost inevitable. The internet of 2026 has become synonymous with a handful of widely unpopular for-profit platforms, and the harms they have caused are often concrete. But it’s still uncertain what these defeats will change – and what the collateral damage could be.

If these verdicts survive appeal – which is far from certain – the immediate result will be millions of dollars in penalties, and several other major cases depend on the outcome. In Los Angeles, a much larger class settlement could be down the road. Even at this early stage, it’s a victory for the legal theory that social media platforms should be treated as defective products – a strategy designed to circumvent the shield of Section 230, but one that has often failed in court. “The California case in particular is the first time social media has ever had to stare down judgment by a jury for specific personal injuries,” attorney Carrie Goldberg, who has pushed forward major early social media liability lawsuits, including a failed case against Grindr, told The Verge. “It is the dawn of a new era.”

“It is the dawn of a new era.”

For many activists, the broader goal is to make clear that lawsuits will keep piling up if companies don’t change their business practices. Which practices? In New Mexico, the jury was swayed by arguments that Meta made misleading statements to users about the safety of its platforms. In Los Angeles, plaintiffs successfully argued that Instagram and YouTube were designed to foster a social media addiction that harmed a teenage user. Meta and Google (and other nervous companies) could change specific features or be more careful in their public statements and disclosures. But each case turns on a very specific set of circumstances, and there is no one-size-fits-all answer about what to change.

Eric Goldman, a legal blogger and Section 230 expert, sees a clear legal risk facing social media services. “These rulings suggest that juries are willing to impose significant liability on social media providers based on allegations of social media addiction,” Goldman wrote after the verdicts. In an email to The Verge, he pointed out that the issue is bigger than juries alone. “The judges are certainly aware of the controversy surrounding social media,” Goldman said. In the Los Angeles case and other upcoming groundbreaking trials, “judges did not give the social media defendants much benefit of the doubt, which is how the plaintiffs’ novel claims were able to get to trial in the first place.” The situation “looks different than it did a decade ago,” he said.

Goldman also pointed out that New York and California have passed laws banning “addictive” social media feeds for teens – so even if an appeals court overturns the recent verdicts, it wouldn’t necessarily turn back the clock.

The best-case outcome of all this has been laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change “toxic” features such as infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize “shocking and crude” content. The worst-case scenario falls along the lines of a piece by Mike Masnick, which argued that the verdicts spell disaster for smaller social media networks that could be sued for letting users post and see First Amendment-protected speech under a vague tort standard. He noted that the New Mexico case hinged in part on the argument that Meta harmed children by offering end-to-end encryption in private messages, creating an incentive to discontinue a feature that protects users’ privacy – and, indeed, Meta halted end-to-end encryption on Instagram earlier this month.

“Judges did not give social media defendants much benefit of the doubt.”

Blake Reid, a professor at the University of Colorado Law School, is more cautious. “It’s difficult to predict what will happen now,” Reid told The Verge in an interview. On Bluesky, he noted that companies are likely to look for “cold, calculated” ways to limit legal liability with as little disruption as possible, rather than fundamentally rethinking their business models. “There are clearly harms here, and it is very important that the tort system recognizes those harms” in recent cases, he told The Verge. “It’s just that what comes in its wake is less clear to me.”

While Reid sees legal risks for smaller, less-resourced platforms in these verdicts, he’s not convinced they’re any more serious than the challenges new entrants already face in a highly consolidated online landscape built on massive amounts of data collection. “There are things that make it difficult to do something really new in this space, which is driven by the type of market and the politics surrounding it,” he said.

Reid, Goldman, and Masnick all warn that there is a distinct possibility the fallout could harm marginalized people who use social media to connect. “There will be stronger pressures to restrict or ban children from social media,” Goldman told The Verge. “This harms many subpopulations of minors, from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations.”

If platforms like Instagram are inherently harmful and directly comparable to gambling or cigarettes – comparisons critics often make – then getting kicked off them would be no great loss. But even research suggesting social media can be harmful to teens has linked moderate use to improved well-being. And harmful online content like harassment and eating disorder communities thrived well before modern, recommendation-driven, highly optimized social media; tinkering with specific algorithmic features may have a positive impact, but it likely won’t provide a deep or permanent fix. The appeal of punishing Meta is clear, but what it would mean for everyone else is much less so.


