Resource Pages

Monday, August 18, 2025

Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info

Jeff Horwitz
Reuters.com
Originally posted 14 Aug 25

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Here are some thoughts:

Meta’s AI chatbot guidelines show a blatant disregard for child safety, permitting romantic conversations with minors: a clear violation of ethical standards. Shockingly, these rules were greenlit by Meta’s legal, policy, and even ethics teams, exposing a systemic failure in corporate responsibility. Worse, the policy treats kids as test subjects for AI training, exploiting them rather than protecting them. On top of that, the chatbots were permitted to spread dangerous misinformation, including racist stereotypes and false medical claims. This isn’t just negligence: it’s an ethical breakdown at every level.

Greed is not good.