
Meta and YouTube Found Liable for Addictive Design. What It Means for Platforms.
On 26 March 2026, a Los Angeles jury found Meta and Google's YouTube liable for deliberately building addictive platforms that harmed the mental health of a young user, known as Kaley, who had been using Instagram from age nine and YouTube from age six. The jury awarded $6 million in damages — $3 million compensatory, $3 million punitive — on the basis that the companies had acted with malice, oppression, or fraud. The verdict came one day after a New Mexico jury found Meta liable for failing to protect child users from predatory contact on its platforms.
These are not isolated incidents. Thousands of similar cases are working their way through US courts. A further set of federal cases brought by states and school districts is scheduled for trial this summer. The legal theory that addictive product design constitutes personal injury — once untested — now has jury validation behind it.
What the Verdict Establishes
The legal argument in Kaley's case drew directly from the Big Tobacco playbook: that companies knowingly engineered products to be addictive, were aware of the harms those products caused, and prioritised growth over user safety. Internal documents and executive testimony showed that Meta and YouTube knew children were using their platforms and understood the risks posed by features including infinite scroll, algorithmic recommendations, and autoplay.
Critically, the case succeeded not on the basis of what users posted — which is largely protected under Section 230 of the Communications Decency Act — but on the basis of product design. This distinction matters. It means that platforms cannot rely on content liability shields to insulate themselves from claims about the systems and features they build.
The verdict is a bellwether. Its significance lies less in the $6 million figure — a fraction of Meta's quarterly revenue — and more in what it signals for the litigation pipeline ahead.
The Implication for Platforms
Until now, the commercial case for proactive risk assessment and safety-by-design has rested primarily on two pillars: avoiding regulatory fines, and avoiding reputational damage. Those remain valid. But this verdict introduces a third, and arguably more immediate, exposure: civil liability at scale.
Platforms that cannot demonstrate they assessed the risks posed by their design choices — and took proportionate action in response — are now materially more vulnerable to litigation. The question a jury or court will ask is not only whether harm occurred, but whether the platform knew harm was possible and what it did about it. Documented risk assessment and structured safety governance are the most credible answer to that question.
This is not a US-specific concern. The UK Online Safety Act already requires services likely to be accessed by children to complete a Children's Risk Assessment — a structured, evidence-based evaluation of how design features and functionalities may expose children to harm. Ofcom expects that assessment to be ongoing, reviewable, and proportionate to risk. The regulatory logic and the litigation logic are converging on the same requirement: that platforms understand and document the risks their products create, and act on that understanding.
What Platforms Should Be Doing Now
The immediate priority for any platform likely to be accessed by children is to ensure that risk assessment is treated as a substantive governance process, not a documentation exercise. That means:
- Evaluating design features — not just content — for their potential to cause harm to child users;
- Maintaining clear records of how risk conclusions were reached and what mitigations were put in place (one way such a record might be structured is sketched after this list);
- Reviewing assessments when the product changes, and keeping pace with regulatory updates.
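By way of illustration only, the sketch below shows one way a single entry in a risk register of this kind could be structured. Neither the OSA nor Ofcom prescribes a format or schema; the fields and names here are our own assumptions about what a defensible record might capture, not a regulatory requirement.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


class MitigationStatus(Enum):
    MITIGATED = "mitigated"
    ACCEPTED = "accepted"        # risk consciously accepted; rationale required
    IN_PROGRESS = "in_progress"


@dataclass
class FeatureRiskRecord:
    """One illustrative entry in a children's risk assessment register."""
    feature: str                       # e.g. "autoplay", "algorithmic recommendations"
    harm_description: str              # how the feature could expose child users to harm
    evidence: list[str]                # research, internal data, or reports relied upon
    risk_level: RiskLevel
    mitigation: str                    # what was done, or why nothing was
    mitigation_status: MitigationStatus
    rationale: str                     # how the conclusion was reached
    owner: str                         # accountable team or individual
    assessed_on: date
    review_by: date                    # trigger for re-assessment, e.g. a product change
```

However a platform chooses to record it, the substance is what matters: each entry ties a specific design feature to a specific harm, the evidence relied on, the decision taken and who owns it, and when it will be looked at again. A register of such entries, kept current as the product changes, gives a platform something concrete to point to when asked what it knew and what it did.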
Just as importantly, the verdict illustrates that documentation alone is not a defence. Internal evidence showed that Meta and YouTube had some awareness of the risks their products posed to children. What the jury found wanting was not knowledge, but response. Platforms must be able to demonstrate that risk assessments are embedded in product decision-making — that identified risks are escalated, considered, and either mitigated or consciously accepted with a documented rationale. A risk assessment that sits in a compliance folder while product teams build features it would have flagged is unlikely to satisfy either a regulator or a court.
Platforms that have not yet completed a Children's Risk Assessment under the OSA, or have treated the process as a tick-box exercise, should revisit their position. The standard regulators and, increasingly, courts will apply is whether the assessment was suitable, sufficient, and acted upon.
Illuminate Tech helps platforms seamlessly build and maintain live risk assessment and governance frameworks via OSCAR, its AI-powered compliance infrastructure. To discuss your compliance position or book a demo, get in touch: hello@illuminatetech.co.uk.

