From Harm to Accountability: What Recent Global Litigation Reveals about Digital Safety Enforcement

Why documented harm alone does not produce accountability, and what institutions must build to translate incidents into protection for children online.

Design Is Global. Liability Is Evidentiary.

Social media platforms operate on largely uniform technical architectures across the world. Features such as infinite scrolling, algorithmic recommendations, and autoplay video are not localized technologies; they are global design features deployed at scale. Children in Ghana, California, London, or Nairobi interact with the same systems, driven by the same behavioral dynamics and engagement models. The technology is universal. The psychology is universal. The exposure risks are universal.

Recent litigation in the United States against Meta Platforms has resulted in significant findings of liability tied to product design and corporate knowledge of risk. These cases did not hinge on whether harm existed; harm to children from social media use has been documented globally for years. Instead, they turned on whether harm could be systematically demonstrated, causally linked to platform design, and supported by credible institutional evidence.


Documented harm exists, but documentation systems differ

Across many developing economies, identifiable harm to children associated with social media use has been reported, including online sexual exploitation, extortion involving minors, cyberbullying, psychological distress, grooming by strangers, and the viral distribution of humiliating or violent content. Such incidents have generated police reports, school disciplinary records, counseling interventions, and family complaints. The harm is real and documented at the case level. The critical difference is what happens next. In jurisdictions where litigation has succeeded, harm is not only recorded; it is organized, comparable, and interpretable across institutions.

Evidence emerges from patterns, not anecdotes. This difference creates an evidence infrastructure gap: the mismatch between the evidence a decision-maker needs and the systems available to produce, store, share, and use that evidence. In practice, it means the data, research methods, databases, staffing, standards, or tools needed to generate reliable evidence are missing, weak, fragmented, or inaccessible.

The invisible infrastructure behind accountability

Successful platform liability cases rely on a chain of institutional capabilities. That chain typically includes:

  • Documented harm tied to identifiable individuals
  • Expert analysis linking platform design to behavioral outcomes
  • Evidence that companies were aware of foreseeable risks
  • Legal recognition of a duty of care
  • Credible enforcement mechanisms

None of these elements depends solely on technology. In practice, accountability emerges from systems that make harm visible, measurable, and legally actionable. Where those systems are fragmented, harm may remain socially recognized but legally invisible.

What drives the absence of structured evidence?

The absence of structured evidence systems in many developing economies is rarely the result of a single failure. It is usually the product of overlapping institutional constraints.

Is this a policy issue or a culture issue?

Both, but in different ways. Policy determines whether institutions are required to document, report, and share information. Culture determines whether individuals are willing to disclose harm and seek assistance. Policy creates systems. Culture determines participation. Neither can substitute for the other.

Countries with strong reporting laws but low public trust may still experience severe underreporting. Conversely, communities with high awareness but weak institutional frameworks may struggle to translate concern into formal evidence. Effective governance requires alignment between legal mandates and social norms.

The minimum evidentiary threshold

Contrary to common assumptions, large national datasets are not a prerequisite for litigation or regulatory action. A viable case typically requires five foundational elements:

  • An identifiable child or group of children
  • Documented harm (medical, psychological, educational, or financial)
  • Evidence of platform exposure
  • Expert analysis linking design features to foreseeable risk
  • Legal recognition of duty and breach

These elements can emerge from individual cases. They do not require nationwide surveillance systems. However, structured data systems significantly strengthen the credibility and scalability of evidence.

From incidents to infrastructure

The emerging lesson from recent global litigation is that platforms can be held accountable, but that accountability depends on institutional readiness. Countries that invest in evidence infrastructure are better positioned to identify patterns of harm, design preventive policies, enforce safety standards, support affected children and families, and engage technology companies on equal footing.

The Emerging Global Signal

The pace of litigation has accelerated sharply. Within the same week in March 2026, two major U.S. verdicts against Meta were delivered in separate jurisdictions, each relying on different legal theories, different institutional actors, and different evidentiary foundations. Together with regulatory actions in Europe and Australia, they form a coherent picture of how accountability is built and what it requires.

JURY VERDICT · MARCH 2026 · K.G.M. v. Meta & Google: design liability · Los Angeles Superior Court, California, USA
In a landmark ruling delivered on 25 March 2026, a Los Angeles jury found Meta and Google negligent in the design of Instagram, Facebook, and YouTube, awarding $6 million in total damages ($3 million compensatory and $3 million punitive), split 70/30 between the two companies. The plaintiff, identified as K.G.M., began using YouTube at age 6 and Instagram at age 9 and alleged that the platforms’ design features (including infinite scroll, variable reward systems, algorithmic recommendations, and notification clustering) caused her to develop depression, anxiety, body dysmorphia, and suicidal ideation. This was the first time a jury validated the ‘addictive-by-design’ theory of liability, treating platform architecture as a product subject to negligence standards rather than shielding it under Section 230 or First Amendment protections. TikTok and Snap settled before trial. Both Meta and Google have announced they will appeal.
Governance significance: The conduct-versus-content distinction (treating design choices as the company’s own conduct, not protected publication of third-party speech) is now a viable legal theory. Over 1,600 similar cases are pending in the U.S. alone.
JURY VERDICT · MARCH 2026 · New Mexico v. Meta: child safety failure · First Judicial District Court, Santa Fe, USA
One day before the K.G.M. verdict, on 24 March 2026, a New Mexico jury ordered Meta to pay $375 million, calculated at the maximum penalty of $5,000 per violation under state law, after finding the company willfully violated the state’s Unfair Practices Act. The case, brought by Attorney General Raúl Torrez in 2023 following an undercover investigation in which state investigators created decoy profiles of 13-year-olds on Facebook and Instagram, established that Meta knowingly enabled child sexual exploitation while actively misleading the public about platform safety. Internal Meta communications revealed at trial showed employees warning that end-to-end encryption would limit the company’s ability to report approximately 7.5 million child abuse material cases to law enforcement. New Mexico became the first U.S. state to prevail at trial against a major technology company on child safety grounds. A second phase of proceedings, beginning May 2026, will address whether Meta must make structural design changes.
Governance significance: The case succeeded because the attorney general’s office had conducted a structured undercover investigation, built an evidentiary record from internal company documents, and framed the claims under consumer protection law, not content moderation. Institutional preparation determined the outcome.
REGULATORY ACTION · ONGOING · EU privacy enforcement: systemic penalties · European Data Protection Board & national DPAs
European enforcement actions against Meta have proceeded on a different legal theory: data protection and the right to privacy under GDPR. The Irish Data Protection Commission, acting as Meta’s lead supervisory authority in the EU, has imposed significant fines across multiple proceedings, including a €1.2 billion penalty in 2023 for unlawful data transfers to the United States. The EU framework operates through designated national regulators with binding investigative powers, coordinated standards, and mandatory data-sharing obligations between Member States. The EU approach demonstrates that systemic enforcement does not require adversarial jury trials; it requires regulatory institutions with defined mandates, technical expertise, and the authority to compel cooperation from platforms.
Governance significance: GDPR enforcement shows how prospective regulatory architecture (clear duties, designated enforcers, coordinated cross-border action) can produce accountability without case-by-case litigation. The model is increasingly influential in shaping digital policy across emerging economies.
SETTLED · 2023 · ACCC v. Meta: consumer protection enforcement · Federal Court of Australia
In 2023, Australia’s Federal Court accepted a settlement between the Australian Competition and Consumer Commission and two Meta subsidiaries, Onavo Inc and Facebook Israel, requiring payment of $20 million AUD for misleading Australian consumers about how their mobile data was being collected and used through the Onavo Protect app. The ACCC established that Meta had covertly gathered extensive data, including browsing history, app usage patterns, and location information, and used it as a commercial intelligence tool, while users were led to believe the app was a privacy protection service. The case succeeded as a consumer protection matter rather than a digital safety case specifically, but it demonstrated that consumer law frameworks can reach platform data practices when traditional technology regulation does not.
Governance significance: Australia’s approach, using existing consumer protection law creatively rather than waiting for bespoke digital legislation, offers a replicable model for jurisdictions where platform-specific law remains undeveloped.

The African continent: early actions, structural gaps

Taken together, these cases mark the beginning of a structural shift in how digital safety is understood. For more than two decades, technology companies have argued that they were platforms rather than publishers and therefore not responsible for user behavior. Many relied on legal shields designed to protect online innovation. Against this backdrop, the question of whether comparable litigation has been initiated or succeeded on the African continent deserves careful examination. The answer is: some movement exists, but it has taken different forms, and the structural conditions that produced enforceable verdicts elsewhere have not yet been fully established.

AFRICAN LITIGATION & ENFORCEMENT: CURRENT LANDSCAPE
SOUTH AFRICA · COURT ORDER OBTAINED · Digital Law Co. v. Meta: child sexual abuse material
In July 2025, the Gauteng High Court in Johannesburg ordered Meta to permanently remove WhatsApp channels and Instagram profiles distributing child sexual abuse material involving South African school children, and to disclose the identities of offenders. When Meta failed to fully comply, a contempt of court application was filed. Meta ultimately agreed to delete more than 60 channels. This is the most direct African precedent for judicial accountability on child digital safety.
KENYA · JURISDICTION ESTABLISHED · HEARING PENDING · Meareg et al. v. Meta: algorithmic harm & human rights
Filed in 2022 and brought by the family of an Ethiopian academic killed after being doxxed on Facebook, this case alleges that Meta’s algorithm design amplified hate speech during the Ethiopian conflict. In April 2025, the Kenyan High Court ruled that it has jurisdiction to hear the case, a significant procedural milestone. The substantive hearing remains pending. The case also raises a disparity claim: that Meta applied protective algorithm interventions elsewhere but not in the African context.
KENYA, SOUTH AFRICA & GHANA · ONGOING · Content moderator lawsuits: duty of care to workers
A related wave of litigation across three African countries involves content moderators who allege Meta breached its duty of care by exposing them to graphic content, including child sexual abuse material, without adequate psychological protections, and that African moderators received inferior support compared to counterparts elsewhere. These cases have established that African courts are willing to hear duty-of-care claims against digital platforms, and that the disparity in treatment between African and Western operations is itself a basis for litigation.

What these African cases reveal is not an absence of legal will, but a different kind of evidentiary challenge. The South African case succeeded, at least in securing a court order, because the harm was concrete, the content was clearly illegal, and the relief sought was specific and verifiable.

The Kenyan case has cleared a critical procedural hurdle by establishing jurisdiction, but the substantive evidentiary record required to prove algorithmic harm at a population level is still being built.

The pattern is consistent across continents: where institutions have invested in structured documentation, legal frameworks have been leveraged to secure accountability. Where documentation remains fragmented or reactive, harm is recognized but legally unreachable. African jurisdictions are demonstrating legal creativity and institutional ambition. The question now is whether those efforts will be matched by investments in the evidence infrastructure that transforms isolated incidents into enforceable claims.

Notably, none of the African cases to date has pursued the ‘addictive design’ theory of liability that succeeded in California, or the consumer protection framing that succeeded in New Mexico and Australia. The harms are not different, but those legal theories require a specific kind of evidentiary foundation: documented patterns across many users, internal corporate communications showing knowledge of risk, and expert analysis connecting design features to measurable outcomes. Building that foundation is the next institutional challenge for African jurisdictions seeking to engage these platforms on equal terms.

Latin America: Where the Region Stands

The Latin American picture is neither uniform nor static. Brazil stands apart. It is the only country in Latin America, and one of the few outside North America and Europe, to have constructed a multi-layered accountability architecture targeting platform harm to children.

In June 2025, Brazil’s Supreme Federal Court (STF) declared the country’s existing intermediary liability shield partially unconstitutional. Previously, platforms could face civil liability for user content only if they ignored a specific court order. The STF replaced this with a systemic failure doctrine: platforms can now be held liable for child sexual abuse material, hate speech, and incitement to violence without any prior judicial order, provided they failed to maintain effective moderation systems. The ruling drew explicit inspiration from the EU’s Digital Services Act.

Three months later, President Lula signed the Digital Statute for Children and Adolescents, the first child-specific online safety law in Latin America. In force since March 2026, it requires age verification, parental controls, and restrictions on behavioural advertising targeting minors. Brazil’s national data protection authority, the ANPD, enforces it, with fines reaching 50 million reais, and the Federal Police have established a dedicated unit for violations.

Brazil’s regulatory activity predates both. In July 2024, the ANPD suspended Meta’s processing of Brazilian user data for AI training after finding the company had failed to disclose its practices adequately, protect children’s data, or provide meaningful opt-out mechanisms. The suspension was lifted only after Meta remedied those violations. In October 2024, a consumer rights institute filed lawsuits demanding $525 million from Meta, TikTok, and Kwai for failing to restrict minors from accessing their platforms; this is the closest Latin American equivalent to the consumer protection approach that succeeded in New Mexico and Australia.

Brazil pursued simultaneous action through its data authority, its Supreme Court, its legislature, and civil society. Each mechanism reinforced the others.

What makes Brazil’s approach replicable is that ANPD enforcement signalled regulatory intent, the STF ruling created a binding constitutional doctrine, and the Digital Statute gave legislators a vehicle for public accountability. Civil suits then gave affected communities direct legal standing. No channel waited for another to succeed first, and that simultaneity is arguably Brazil’s most transferable governance lesson.

The contrast with Colombia is instructive. The harm there is well-documented. One in five Colombian children has encountered self-harm content online; 17% have searched for ways to take their own lives. Bogotá psychiatric wards report a marked rise in hospitalised adolescents that clinicians link directly to social media exposure. The country’s own Communications Regulatory Commission data shows that 70% of children and adolescents access online content regularly, and 40% have social media accounts. The problem was known, measured, and officially acknowledged. Yet when a mental health bill, one that would have required platforms to comply with standards designed to prevent the distribution of content threatening children’s well-being, reached its final Senate debate in 2024, it was blocked. An investigative collaboration subsequently documented that key legislators who voted against the bill had attended multiple technology industry-funded events and maintained financial relationships with industry-linked organisations. The bill did not fail for lack of evidence. It failed because the institutional pathway from evidence to accountability was intercepted.

Colombia illustrates a structural vulnerability that documentation alone cannot solve: without designated enforcement authorities insulated from industry influence, even well-evidenced legislative proposals can be neutralised. This is not unique to Colombia; it is a pattern visible in the pre-verdict United States and across many jurisdictions that have struggled to translate public awareness of harm into enforceable standards.

Elsewhere in the region, the picture is one of foundations being laid but not yet tested. Chile’s reformed data protection law, among the most GDPR-aligned in Latin America, establishes a dedicated enforcement authority and heightened protections for children’s data, including emerging provisions on neurodata and AI-generated content. Argentina’s data authority has pursued enforcement actions on biometric data practices. Mexico has a regulatory framework, but platform-specific child safety action remains limited relative to the country’s size and exposure.

A governance opportunity for developing economies

The path ahead does not require commitment to a single accountability framework. What it requires is targeted investment in the foundational systems that make harm visible and actionable: standardised incident reporting protocols for schools and clinics, national or regional child digital safety registries, inter-agency coordination frameworks, training for educators and health professionals on digital risk documentation, and clear legal guidance on platform responsibility and duty of care.
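To make the idea of a standardised incident reporting protocol concrete, the sketch below is an illustration only: a minimal record structure, written in Python, showing the kind of fields that would let reports from schools, clinics, and police units be pooled and compared across institutions. Every field name, category, and identifier here is hypothetical rather than drawn from any existing standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
from enum import Enum
import json


class HarmCategory(Enum):
    # Hypothetical categories, mirroring the harms discussed above.
    SEXUAL_EXPLOITATION = "sexual_exploitation"
    EXTORTION = "extortion"
    CYBERBULLYING = "cyberbullying"
    PSYCHOLOGICAL_DISTRESS = "psychological_distress"
    GROOMING = "grooming"
    VIRAL_HUMILIATION = "viral_humiliation"


@dataclass
class IncidentRecord:
    """Minimal, institution-agnostic record of a single reported incident."""
    record_id: str                      # assigned by the reporting institution
    report_date: date
    reporting_institution: str          # e.g. school, clinic, police unit
    jurisdiction: str                   # district or region code
    child_age_band: str                 # age band rather than exact age, to limit identifiability
    harm_category: HarmCategory
    platforms_involved: list[str] = field(default_factory=list)
    design_features_implicated: list[str] = field(default_factory=list)   # e.g. recommendations, DMs
    supporting_documents: list[str] = field(default_factory=list)         # police report no., clinical note ref.
    guardian_consent_obtained: bool = False

    def to_json(self) -> str:
        """Serialise to JSON so records from different institutions can be pooled."""
        payload = asdict(self)
        payload["report_date"] = self.report_date.isoformat()
        payload["harm_category"] = self.harm_category.value
        return json.dumps(payload)


# Hypothetical example: a school counsellor files a structured report instead of a free-text note.
example = IncidentRecord(
    record_id="GH-ACCRA-2025-0042",
    report_date=date(2025, 9, 14),
    reporting_institution="school_counselling",
    jurisdiction="Greater Accra",
    child_age_band="13-15",
    harm_category=HarmCategory.CYBERBULLYING,
    platforms_involved=["Instagram"],
    design_features_implicated=["group_messaging", "algorithmic_recommendations"],
    supporting_documents=["counselling_note_2025_0042"],
    guardian_consent_obtained=True,
)
print(example.to_json())
```

The specific fields matter less than the property they illustrate: once reports share a machine-readable structure, thousands of case-level records can be aggregated into exactly the kind of pattern evidence that the litigation discussed above depended on.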

The cases examined here tell a consistent story. Where institutions invested in the infrastructure to see harm clearly, the law followed. Where that infrastructure was absent or suppressed, harm remained socially recognised but legally unreachable.

As digital technologies continue to expand their reach, the deeper question is whether the systems societies build around them will be strong and far-sighted enough to mitigate harm and enforce safety.
