
Images of child sexual abuse created by artificial intelligence are spreading; authorities are trying to stop them


WASHINGTON (AP) — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A U.S. Army soldier accused of creating sexually abusive images of children he knew. A software engineer charged with generating hyperrealistic sexually explicit images of children.

Law enforcement agencies throughout the United States are cracking down on the alarming spread of child sexual abuse images created with artificial intelligence technology, from manipulated photos of real children to graphic, computer-generated depictions of children. Justice Department officials say they are aggressively pursuing offenders who use AI tools, while states are racing to ensure that people who create “deepfakes” and other harmful images of children can be prosecuted under their laws.


“We need to signal early and often that this is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, chief of the Child Exploitation and Obscenity Section at the U.S. Department of Justice, told The Associated Press. “And if you’re sitting there thinking otherwise, you’re fundamentally wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to such content, and recently filed what it believes is the first federal case involving images created solely by artificial intelligence, meaning the children depicted are virtual rather than real. In another case, federal authorities in August arrested a U.S. soldier stationed in Alaska on charges of running innocent photos of real children he knew through an AI chatbot to make the images overtly sexual.

Trying to catch up with technology

The prosecutions come as child advocates are urgently working to curb the misuse of the technology and prevent a flood of disturbing images that officials say could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track exploited children who do not actually exist.

Meanwhile, lawmakers are passing a flurry of laws to ensure local prosecutors can file charges under state law for AI-generated “deepfakes” and other sexually explicit images of children. According to a review by the National Center for Missing and Exploited Children, the governors of more than a dozen states have signed laws this year aimed at combating digitally created or altered images of child sexual abuse.

“As a law enforcement agency, we’re catching up with technology that, frankly, is advancing much faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

Nasarenko promoted laws signed last month by Governor Gavin Newsom making it clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California law required prosecutors to prove the images depicted a real child.

AI-generated images of child sexual abuse could be used to groom children, law enforcement says. And even if they are not physically abused, children can be deeply affected when their images appear sexually explicit.

“I felt like a part of me had been taken away, even though I wasn’t physically abused,” said 17-year-old Kaylin Hayman, who starred in the Disney Channel show “Just Roll with It” and helped push the California bill after becoming a victim of “deepfake” images.

Last year, Hayman testified in a federal trial against a man who digitally superimposed her face and the faces of other child actors onto bodies performing sexual acts. He was sentenced in May to more than 14 years in prison.

Experts say open-source AI models, which users can download to their computers, are favored by criminals who can further train or modify the tools to create explicit images of children. Hackers share tips in dark web communities on how to manipulate AI tools to create such content, officials said.

A report last year by the Stanford Internet Observatory found that a research dataset used as a source by leading AI image generators such as Stable Diffusion contained links to sexually explicit images of children, contributing to the ease with which some tools can create harmful imagery. The dataset was taken down, and the researchers later said they had deleted more than 2,000 web links to suspected child sexual abuse images.

Leading technology companies, including Google, OpenAI and Stability AI, have agreed to work with Thorn, an organization that combats child sexual abuse, to counter the spread of sexual abuse images of children.

But experts say more should have been done from the start to prevent misuse before the technology became widely available. And steps companies are now taking to make future versions of AI tools harder to abuse will do “little to prevent” offenders from running older versions of the models on their computers “without detection,” a Justice Department prosecutor said in recent court documents.

“Time was not spent on making the products safe, as opposed to effective, and that’s very hard to do after the fact, as we’ve seen,” said David Thiel, chief technologist at the Stanford Internet Observatory.

AI images become more realistic

Last year, the National Center for Missing and Exploited Children’s CyberTipline received about 4,700 reports of content involving artificial intelligence technology — a small fraction of the more than 36 million reports of suspected child sexual exploitation. As of October of this year, the group was receiving about 450 reports each month of AI-related content, said Yiota Souras, the group’s chief legal officer.

However, those figures may be an undercount, because the images are so realistic that it is often difficult to tell whether they were created by AI, experts say.

“Investigators spend hours trying to determine if the image is actually of a real juvenile or if it was created by artificial intelligence,” said Rikole Kelly, a Ventura County deputy district attorney who helped write the California bill. “There used to be some really clear indicators … with the advancements in AI technology, that’s just not the case.”

Justice Department officials say they already have tools under federal law to prosecute offenders for such images.

The U.S. Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year prohibits the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” That law, which the Justice Department says has been used in the past to prosecute cartoon depictions of child sexual abuse, specifically states that there is no requirement “that the minor depicted actually exist.”

In May, the Justice Department brought that charge against a Wisconsin software engineer accused of using the artificial intelligence tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct; he was caught after he sent some of them to a 15-year-old boy in a direct message on Instagram, authorities said. The man’s attorney, who is pushing to have the charges dismissed on First Amendment grounds, declined to comment on the allegations in an email to the AP.

A representative of Stability AI said the man was accused of using an earlier version of the tool that was released by another company, Runway ML. Stability AI says it has “invested in proactive features to prevent AI from being misused to produce malicious content” as it took over exclusive development of the models. A representative for Runway ML did not immediately respond to a request for comment from the AP.

In cases involving “deepfakes,” in which a photo of a real child has been digitally altered to make it sexually explicit, the Justice Department has filed charges under the federal “child pornography” statute. In one case, a North Carolina child psychiatrist who used an artificial intelligence program to digitally “undress” girls posing on the first day of school in a decades-old photo posted on Facebook was convicted on federal charges last year.

“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “It’s not going to be a low-priority task that we ignore because there’s no real child involved.”