
Kids want social media apps to do more to prevent fake nudes

It is troubling enough that some children send nude photos of themselves to friends and even strangers online. But artificial intelligence has taken the problem to a whole new level.

About 1 in 10 kids say their friends or peers have used generative AI to create nudes of other kids, according to a new report from Thorn. The nonprofit, which fights child sexual abuse, surveyed more than 1,000 children between the ages of 9 and 17 at the end of 2023 for its annual survey.

Thorn found that 12 percent of 9- to 12-year-olds knew of friends or classmates who used AI to create nudes of their peers, and 8 percent chose not to answer the question. For the 13- to 17-year-olds surveyed, 10 percent said they knew of peers who used AI to generate nudes of other children, and 11 percent chose not to answer. This was Thorn’s first survey to ask kids about using generative AI to create deepfake nudes.

“While the motivation behind these events is more likely driven by teenagers acting out than the intent to sexually abuse, the resulting harm to victims is real and should not be minimized in attempts to deflect responsibility,” the Thorn report said.

Sexting culture is hard enough to tackle without AI in the mix. Thorn found that 25 percent of minors think it’s “normal” to share nudes of themselves (a slight decrease from surveys dating back to 2019), and 13 percent of those surveyed reported having done so at some point, a slight decrease from 2022.

The nonprofit says sharing nude photos can lead to sextortion, in which bad actors use the images to blackmail or exploit the sender. Those who considered sharing nudes but ultimately chose not to cited the risk of leaks or exploitation as a reason.

This year, for the first time, Thorn asked young people about being paid for nude photos: 13 percent of kids surveyed said they knew of a friend who had been compensated for their nudes, while 7 percent declined to respond.

Kids want social media companies to help

Generative AI enables the creation of “highly realistic images of abuse from benign sources such as school photos and social media posts,” Thorn’s report said. As a result, victims who have previously reported an incident to authorities can easily be revictimized with new, personalized abusive material. For example, actor Jenna Ortega recently revealed that she was sent AI-generated nudes of herself as a minor on X, formerly Twitter, and opted to delete her account entirely.

That response is not far from how most children react in similar situations, Thorn reported.

The nonprofit found that children, one-third of whom reported having had some kind of sexual interaction online, “consistently prefer online safety tools over offline support networks such as family or friends.”

Children often block bad actors on social media instead of reporting them to the social media platform or to an adult.

Thorn found that children want to be taught “how to better use online safety tools to defend against such threats,” threats they perceive as normal and unremarkable in the age of social media.

“Kids are showing us that these are favorite tools in their online safety kit, and they’re looking for more from platforms in terms of how to use them. There is a clear opportunity to better support young people through these mechanisms,” Thorn’s analysis said.

In addition to wanting information and tutorials on blocking and reporting someone, over a third of respondents said they want apps to check in with users about how safe they feel, and a similar share said they want platforms to offer assistance or advice after a bad experience.
