Meta faces claims of suppressing internal warnings about child safety
Meta, the parent company of Facebook and Instagram, is once again under scrutiny, this time over its AI chatbots and virtual reality (VR) products. According to Reuters, the company's internal AI rules permitted chatbots to engage in "romantic or sensual" conversations with minors.
In response, Meta has said the examples are being used to fit "a predetermined and false narrative," but the company has not specifically addressed the allegations about its AI chatbots.
The latest allegations focus on Meta's VR products, though the company has also faced criticism over how its AI chatbots may affect children. Two current and two former Meta employees have shared documents with Congress alleging that the company suppressed research concerning children's safety.
The whistleblowers describe a pattern of employees being discouraged from raising concerns about children under 13 using Meta's social VR apps. One former Meta researcher, Jason Sattizahn, told The Washington Post that his manager made him delete a recording of a teenager saying his 10-year-old brother had been sexually propositioned on Horizon Worlds, Meta's VR platform. Meta responded that global privacy regulations require deleting information collected from children under 13 without verifiable parental or guardian consent.
According to Meta, the company has approved nearly 180 studies on social issues, including youth safety and well-being, since the start of 2022. The employees counter that Meta tightened its rules for researching sensitive topics six weeks after a whistleblower disclosed internal documents showing that Instagram could harm teen girls' mental health.
Those sensitive topics include politics, children, gender, race, and harassment. Meta reportedly suggested two ways for researchers to reduce risk when studying such issues: involving lawyers so that communications are protected by attorney-client privilege, and writing findings vaguely, avoiding words like "not compliant" or "illegal." Meta's response to these claims was provided to TechCrunch.
Notably, the VR-related allegations are distinct from the earlier Reuters report on Meta's AI chatbots; the two sets of claims were raised separately.
As congressional scrutiny of the whistleblowers' documents continues, Meta faces ongoing calls for transparency about how it researches and protects young users.