Meta AI Chatbots Spark Child Safety Concerns, Prompting Calls for Oversight
Meta, the parent company of Facebook and Instagram, is facing renewed criticism over its AI chatbots and their potential to expose children to inappropriate content. A parents' advocacy group is urging Congress to investigate the company's safety measures, citing examples of sexually explicit content reportedly generated by the chatbots and arguing that existing safeguards are insufficient to protect young users.
This is not the first time Meta has faced scrutiny over child safety. The company has previously been criticized for its handling of online predators and the spread of harmful content targeting young users on its platforms. Meta has defended its efforts to protect children online, but critics say more needs to be done. The parents' group hopes that congressional oversight will lead to stronger regulations and greater accountability for Meta and other social media companies.