
Federal prosecutors have charged a Wisconsin man for allegedly using a popular artificial intelligence image generator to create thousands of explicit images of children, marking what is potentially the first federal charge of creating child sexual abuse material applied to images produced entirely through AI.
In a statement Monday afternoon, the Justice Department said it has charged Steven Anderegg, 42, of Holmen, Wis., with using the AI image generator Stable Diffusion to create over 13,000 fake images of minors, many of which depicted fully or partially nude children touching their genitals or engaging in sexual intercourse with men.
A Justice Department official told The Washington Post it was the first case the department was aware of involving a person suspected of using AI to fully generate child sexual abuse material, known as CSAM. The images did not show real children and were made by typing strings of text into an image generator, underscoring what Justice officials have long argued: that a 2003 law banning photorealistic fake and obscene images applies to AI.
In two other recent cases, investigators said men in North Carolina and Pennsylvania had used AI to superimpose children’s faces into explicit sex scenes, creating what’s known as a deepfake, or to digitally remove the clothing from children’s real photographs.
The arrest comes as AI-generated CSAM, commonly known as child pornography, floods the web with help from synthetic-image-making software. The tools are being increasingly promoted on pedophile forums as a way to create uncensored and highly photorealistic sexual depictions of children, child safety researchers told The Post.
The case also highlights a little-tested legal avenue that federal officials have said they intend to pursue in future cases, arguing that AI-invented images should be treated much the same as child sexual abuse recorded in the real world.
“The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material — or CSAM — no matter how that material was created,” Deputy Attorney General Lisa Monaco said in a statement. “Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”
Anderegg’s lawyer declined to comment.
Ella Irwin, a senior executive at Stability AI, which makes Stable Diffusion, told The Post in a statement that the materials were created using an earlier version of the tool, which was originally released by another AI company, Runway. That company did not immediately return a request for comment.
Runway and Stability AI collaborated to design the image generator, which was based on a project by AI researchers at the Ludwig Maximilian University of Munich and released in 2022.
Stability has said it implements safeguards to prevent the misuse of its AI, including filters that block “unsafe prompts.” But versions of the software are offered as “open source,” allowing users to download the tools to their computers and run them without the filters.
“While AI companies have pledged to make it more difficult for offenders to use future versions of GenAI tools to generate images of minors being sexually abused, such steps will do little to prevent savvy offenders like the defendant from running prior versions of these tools locally from their computers without detection,” a Justice Department official wrote in a legal brief.
In September, Anderegg posted an Instagram story of a realistic AI-generated image of minors wearing bondage-themed leather clothes and wrote a message encouraging others to “[c]ome check out what [they] are missing” on the messaging app Telegram, federal court documents said.
In one exchange on Instagram, investigators said, Anderegg allegedly talked with a 15-year-old boy and described how he used Stable Diffusion to convert text prompts into images of minors. He also sent the teen several AI-generated images of children displaying their genitals, court documents said.
Meta, which owns Instagram, flagged the account to the National Center for Missing and Exploited Children, which runs a database that companies use to report and block child-sex material. In November, the organization provided two “CyberTip” reports to Wisconsin law enforcement flagging Anderegg’s account. In February, law enforcement executed a search warrant at Anderegg’s home.
During the search, Anderegg confirmed being familiar with Stable Diffusion, court documents said. A review of his electronic devices confirmed that he’d installed the Stable Diffusion program, along with specialized add-ons to produce genitalia, investigators added. “Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images,” the documents noted.
Law enforcement arrested Anderegg last week, the Justice Department added. According to his LinkedIn profile, he previously worked at Oracle.
Wisconsin county prosecutors filed documents in February charging Anderegg with “exposing a child to harmful material” and “sexual contact with a child under age 13,” as earlier reported by Forbes. He pleaded not guilty and was released on a $50,000 bond, news reports show.
Anderegg’s case highlights what legal and child safety experts call a child porn crisis fueled by AI-image generators.
As the technology improves, it’s increasingly able to create hyper-realistic child sex abuse images, regardless of efforts to implement guardrails, researchers and legal experts said.
In December, Stanford University’s Internet Observatory found at least 1,008 images of child exploitation in a popular open-source database of images, called LAION-5B, that AI image-generating models such as Stable Diffusion rely on to create hyper-realistic photos.
But legal experts said it’s promising that federal officials are finding existing laws to charge alleged creators of AI-generated child porn.
The 2003 law cited by federal investigators in the indictment, the Protect Act, incorporated several child safety measures, such as the national coordination of Amber Alerts.
One of the law’s provisions bans computer-generated imagery depicting a subject who “appears virtually indistinguishable from that of a minor engaging in sexually explicit conduct.”
Daniel Lyons, a law professor at Boston College who has studied the issue, said investigators’ reliance on the law, and their success in persuading a federal grand jury to return the indictment, showed the potential for the law to take on what he called “a significant public health problem.”
“I’m not surprised, and it’s long overdue, given the number of images that are apparently already circulating on the dark web,” he said.
Child safety investigators and the National Center for Missing and Exploited Children are “going to be overwhelmed pretty soon with AI-generated images,” he added. “Having to distinguish between what’s computer-generated and what’s real will be a huge challenge.”
Anderegg is in federal custody pending a detention hearing scheduled for Wednesday. If convicted, he faces five to 70 years in prison.