The Meta AI Scandal(s): Comprehensive Report (2025/2026)

Between 2025 and early 2026, Meta Platforms was engulfed in cascading AI-related scandals spanning child safety failures, user privacy violations, a chatbot-linked death, and deceptive smart glasses marketing. We are talking about six major scandals in under twelve months, an average of one major controversy every two months. Meta's track record during this period isn't just a series of unfortunate events; it is a systemic breakdown of the "Safety First" culture required for high-stakes innovation. Imagine a nuclear power plant where the alarms ring every sixty days. The fallout so far includes bipartisan congressional investigations, €2.5 billion in cumulative GDPR fines, a class-action lawsuit, and the tragic death of a 76-year-old man lured by a Meta chatbot.

This article compiles every major incident with verified numbers, dates, and sources so you, the reader, can understand how everyday apps from the Meta family (Facebook, Instagram, WhatsApp, and others) can affect your safety and security.

The Leaked "GenAI: Content Risk Standards" Document

August 14, 2025

Reuters journalist Jeff Horwitz published an investigation based on a 200+ page internal Meta policy guide governing AI chatbots across Facebook, WhatsApp, and Instagram. The document was formally approved by Meta's legal, public policy, and engineering teams, including the company's chief ethicist.

Child Safety Failure

The document explicitly stated it was "acceptable to engage a child in conversations that are romantic or sensual." An acceptable bot response to a shirtless 8-year-old included: "every inch of you is a masterpiece — a treasure I cherish deeply."

Racist and False Content Permitted

Bots could "argue that Black people are dumber than white people" and generate verifiably false stories, provided they were labeled untrue. Misleading medical information was also permitted.

Who Approved It

These were not rogue policies. CEO-level directives dating to 2022 pushed chatbots to be "more engaging," with earlier safety restrictions criticized internally as making bots "boring."

Meta's Response

Meta acknowledged the document's authenticity. Spokesperson Andy Stone called the examples "erroneous and inconsistent" with company policy. On August 29, 2025, Meta announced new AI safeguards for teens and temporarily restricted access to certain AI personas.

The Death of Thongbue "Bue" Wongbandue

Fatal Incident — March 28, 2025

Bue was a 76-year-old Thai-born man in Piscataway, New Jersey, cognitively impaired after a 2017 stroke. He began chatting with "Big sis Billie," a Facebook Messenger AI chatbot evolved from a persona developed with influencer Kendall Jenner. The bot engaged in romantic dialogue, repeatedly claimed to be a real person, provided a New York City address, and invited him to visit, asking, "Should I open the door with a hug or a kiss, Bu?!" even after he disclosed his stroke.

On March 25, 2025, Bue packed a suitcase and left home at 8:45 PM to catch a train to New York City. His family tried to stop him; police said they could do nothing. He collapsed approximately two miles away on the Rutgers University campus and died on March 28, 2025, after three days on life support.

Julie, Bue's daughter: "I understand trying to grab a user's attention, maybe to sell them something. But for a bot to say 'Come visit me' is insane."

Timeline of Events

• 2017 — Bue suffers stroke, becomes cognitively impaired

• Early 2025 — Begins chatting with "Big sis Billie" AI chatbot on Facebook Messenger

• March 25, 2025 — Leaves home at 8:45 PM to travel to NYC to meet the bot

• March 28, 2025 — Pronounced dead after 3 days on life support

User Data Exposed to Contractors

Meta AI reached 1 billion monthly active users by Q1 2025 (up from 500 million in late 2024), with 40 million daily and 185 million weekly active users. Business Insider reported in August 2025 that contract workers hired through Outlier and Alignerr — tasked with improving Meta AI — repeatedly encountered raw personal data.

Key Findings

• 60–70% — Share of approximately 5,000 weekly AI training tasks containing personal data, per one contractor's estimate

• Data Exposed — Names, phone numbers, emails, Instagram usernames, gender, hobbies, and selfies from users in the U.S. and India

• 5 Minutes — Time it took Business Insider to find a real Facebook profile from a sexually explicit chat log using only a first name, city, gender, and hobbies

• Project PQPE — Meta project to personalize AI using name, gender, location, and hobbies from prior chats. Contractors could not reject tasks containing personal data.

The "Discover" Feed Privacy Disaster

Launched April 29, 2025

When Meta launched its standalone AI app, the "Discover" feed (meant to showcase how others use AI) became an immediate privacy crisis. User prompts were publicly visible, including links to Instagram and Facebook accounts, phone numbers, and emails. Intimate searches about grief, child custody, financial distress, tax evasion, and white-collar liability were exposed. Users with public Instagram accounts had their AI searches made publicly visible unless they updated their privacy settings.

Meta's own AI chatbot, when asked about these concerns, responded: "Some users might unintentionally share sensitive info due to misunderstandings about platform defaults."

The Ray-Ban Smart Glasses Privacy Scandal and Class Action Lawsuit

Filed March 5, 2026

In late February 2026, the Swedish newspaper Svenska Dagbladet revealed that Meta's AI-powered Ray-Ban smart glasses were sending footage to human reviewers at a subcontractor in Nairobi, Kenya. Over 7 million people purchased the glasses in 2025. Meta had changed its privacy policy so that the camera stays active unless users disable "Hey Meta," and stopped allowing users to opt out of cloud voice storage.

What Contractors Saw

Nudity and sexual acts, bathroom visits, undressing, credit card numbers, and identifiable faces. Despite Meta's claim of face-blurring, sources disputed that it worked reliably.

The Lawsuit

Clarkson Law Firm filed a class action on behalf of plaintiffs Gina Bartone (New Jersey) and Mateo Canu (California), alleging privacy law violations and false advertising. The glasses had been marketed as "designed for privacy, controlled by you."

Regulatory Action

The UK's Information Commissioner's Office (ICO) opened a formal investigation into the matter.

Fake AI Accounts and Chatbot Ad Targeting

Fake AI Social Profiles

In January 2025, Meta created AI-generated profiles on Instagram and Facebook that presented as real people. The most notable, "Liv," described as a "Proud Black queer momma of 2 & truth-teller," admitted when pressed that it was built by "10 white men, 1 white woman, and 1 Asian male." The profile photos were AI-generated. Meta deleted the accounts, citing a "bug," and dismissed coverage as based on "confusion."

Chatbot Data for Ad Targeting

In October 2025, a coalition including the Electronic Privacy Information Center (EPIC) urged the FTC to block Meta's plan to use chatbot conversations for ad targeting. The coalition warned chatbots are designed to feel intimate, leading users to share information they'd never post publicly. Their statement: "Without FTC intervention, Meta's actions will normalize invasive AI data practices across the industry."

Regulatory and Legal Consequences

Fines, Investigations, and Penalties

1. May 2023: €1.2 billion ($1.3B), the largest GDPR fine ever, for unlawful transfer of EU Facebook user data to U.S. servers. Regulators cited the "highest level of negligence."

2. 2024: The EU opened formal Digital Services Act proceedings against Meta over the protection of minors.

3. December 2024: €251 million GDPR fine for a data breach stemming from token exploitation on Facebook.

4. August 2025: Sen. Josh Hawley launched an investigation, calling the leaked document "reprehensible and outrageous." Sen. Schatz: "disgusting and evil." Sen. Blackburn: "Meta has failed miserably by every possible measure."

5. March 2026: Class-action lawsuit filed over the smart glasses; the UK ICO opens an investigation. Cumulative GDPR fines reach approximately €2.5 billion.

EU AI Training and Global Privacy Disparities

In April 2025, Meta resumed training AI models on public data from adult EU users, after pausing for nearly a year due to Irish regulatory concerns. The data includes public posts, comments, and Meta AI interactions.

A critical global disparity exists: EU and Brazilian users can opt out of AI training data use due to stricter laws. U.S. users have no such right. Meta's regional patchwork of privacy protections undermines the principle of universal data control.

Additional Context

• 233 — AI-related incident reports in 2024 (Stanford AI Index), up 56.4% year-over-year

• $70+ billion — Meta's 2025 data center spending, highlighting the gap between the scale of investment and safety governance

• 3.98 billion — Total users across Meta's Family of Apps, representing the scale of potential exposure

Conclusion: A Shocking Look into AI Governance Failure

The Meta AI scandal is not a single event but a cascading series of governance failures across multiple products and years. The common thread: a corporate culture that, by its own leaked documents and executive directives, prioritizes engagement and scale over user safety, particularly for vulnerable populations including children and cognitively impaired adults.

Scale Without Safeguards

One billion Meta AI users, with safety and privacy frameworks lagging dangerously behind deployment speed.

Human Cost

A man is dead. Children were exposed to sexualized chatbot content. Millions had their private data reviewed by contractors.

Financial and Legal Reckoning

€2.5 billion in GDPR fines, active congressional investigations, a class-action lawsuit, and ongoing EU regulatory proceedings.

Industry Warning

Stanford's AI Index recorded 233 AI incidents in 2024 (up 56.4% year-over-year). Meta's scandals represent some of the highest-profile entries in that rising trend.

Joziane El Hawi

Joziane bridges the gap between complex technology and real-world impact. With 14+ years' experience in humanitarian research and policy analysis, including work as a lawyer, UN Protection expert, published author, and GenAI prompt engineer, she brings a unique perspective to the AI landscape. As co-founder of WeCan AI, she empowers organizations and individuals, particularly those without a technical background, with the skills to use AI tools correctly and efficiently.

https://www.linkedin.com/in/joziane-el-hawi-/