A CBS News investigation has uncovered a disturbing trend: social media companies are hosting advertisements for artificial intelligence (AI) tools that can be used to generate explicit deepfake images. These images, which convincingly depict individuals in nude or compromising situations without their consent, are becoming increasingly prevalent, with potentially devastating consequences.
The investigation found that 6% of American teenagers report being victims of these nude deepfakes. The ease with which these images can be created, coupled with the widespread reach of social media, presents a significant challenge for law enforcement and policymakers.
Experts are raising concerns about the ethical implications of AI technology and the responsibility of social media platforms to prevent its misuse. The investigation highlights the urgent need for greater regulation of AI tools and stricter advertising policies to protect individuals from the harms associated with deepfake technology. CBS News' Leigh Kiniry reported on the findings, emphasizing the potential for long-term psychological damage to victims.
Source: Read the original article at CBS News