Internal testing by Meta shows that its AI chatbot failed to effectively protect children.
Court documents filed in the state of New Mexico in the United States show that internal tests by Meta found its chatbot failed to protect minors from sexual exploitation in roughly two-thirds of cases. Professor Damian McCloy of New York University testified that Meta's chatbot violated the company's own content policies in close to two-thirds of cases. "Given the severity of some of the types of conversations...this is content that I absolutely wouldn't want users under the age of 18 exposed to," McCloy stated.

According to a report presented in court on June 6, 2025, Meta tested three categories of harmful content. The failure rates were 66.8% for "child sexual exploitation," 63.6% for "sex crimes, violent crimes, and hate," and 54.8% for "suicide and self-harm."