A US senator has launched an investigation into Meta after a leaked internal document reportedly showed the company’s artificial intelligence was allowed to hold “sensual” and “romantic” conversations with children.
Confidential document revealed
According to Reuters, the paper was titled “GenAI: Content Risk Standards.” Senator Josh Hawley, a Republican, called the content “reprehensible and outrageous.” He demanded access to the full document and the products it referenced.
Meta denied the claims. A spokesperson said: “The examples and notes in question were erroneous and inconsistent with our policies.” The company stressed it had “clear rules” defining chatbot responses. Those rules “prohibit content that sexualizes children and sexualized role play between adults and minors.”
Meta explained the paper contained “hundreds of notes and examples” describing hypothetical scenarios drawn up by its teams.
Senator escalates criticism
Josh Hawley, senator for Missouri, confirmed the probe on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he asked. He continued: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Instagram, WhatsApp and Facebook.
Parents demand accountability
The leaked document exposed further risks. It reportedly showed that Meta’s chatbot could spread false medical information and engage users in provocative discussions about sex, race, and celebrities. The paper was designed to set standards for Meta AI and other chatbot assistants used across the company’s platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and chief executive Mark Zuckerberg. He highlighted one shocking example. The rules allegedly permitted a chatbot to tell an eight-year-old their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal team signed off on controversial decisions. One decision allowed Meta AI to share false information about celebrities, provided it included a disclaimer stating the content was inaccurate.
