How Should Law Enforcement Handle Fake, AI-Generated Child Pornography?
8 min read · May 22, 2024
Not long ago, I raised an interesting legal and ethical question online: “What happens if someone uses AI to generate child porn? No actual children were involved, so is it still — from a legal perspective — child porn? I’ve honestly wondered about this ever since I first heard about AI-generated imagery. It seems to be a legal grey area.” Well, as it turns out, this legal question has, unsurprisingly, now found its way into court. Ars Technica reports:
The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).

On Monday, the DOJ arrested Steven Anderegg, a 42-year-old “extremely technologically savvy” Wisconsin man who allegedly used Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which were then distributed on Instagram and Telegram.

The cops were tipped off to Anderegg’s alleged activities after Instagram flagged direct messages that were sent on Anderegg’s Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.

During the Instagram exchange, the DOJ found that Anderegg sent sexually explicit AI images of minors soon after the teen made his age known, alleging that “the only reasonable explanation for sending these images was to sexually entice the child.”

According to the DOJ’s indictment, Anderegg is a software engineer with “professional experience working with AI.” Because of his “special skill” in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, “along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia.”

After Instagram reported Anderegg’s messages to the minor, cops seized Anderegg’s laptop and found “over 13,000 GenAI images, with hundreds — if not thousands — of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals” or “engaging in sexual intercourse with men.”

In his messages to the teen, Anderegg seemingly “boasted” about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg “used extremely specific and explicit prompts to create these images,” including “specific ‘negative’ prompts — that is, prompts that direct the GenAI model on what not to include in generated content — to avoid creating images that depict adults.” These go-to prompts were stored on his computer, the DOJ alleged.

Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as “transferring obscene material to a minor under the age of 16,” the indictment said.

Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are “no conditions of release” that could prevent him from posing a “significant danger” to his community while the court mulls his case. The DOJ warned the court that it’s highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.

“He studied computer science and has decades of experience in software engineering,” the indictment said. “While computer monitoring may address the danger posed by less sophisticated offenders, the defendant’s background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected.”

If convicted of all four counts, he could face “a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison,” the DOJ said.
Partly because of his “special skill in GenAI,” the DOJ — which described its evidence against Anderegg as “strong” — suggested that it may recommend a sentencing range “as high as life imprisonment.”

Announcing Anderegg’s arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.

“Technology may change, but our commitment to protecting children will not,” Monaco said. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material — or CSAM — no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”
This, of course, raises precisely the complicated legal questions that I had previously brought up on the internet. Obviously, the person in this particular case should have been arrested for grooming a minor regardless. However, if material is generated entirely by AI and involves no actual harm to actual children, should it still be legally classified as child pornography? And does classifying AI-generated material as child pornography set a dangerous legal precedent?
First off, I want you to keep in mind that I hate pedophiles just as much as you do. I am by no means defending the kind of sick degenerates who enjoy this sort of material. As anyone who did prison time with me can attest, when I was in federal prison, my fiery, passionate hatred of sex offenders repeatedly landed me in trouble (in fact, SIS actually shipped me out of Coleman FCC Medium because they determined that I posed a threat to the sex offenders on the yard, and I later became well-known for running sex offenders off of the yard at Terre Haute FCI). With that said, I think that it is certainly worth closely inspecting the legal grey area surrounding AI-generated imagery of this sort.
One may bring up that it is illegal to digitally edit pictures of children into pornography, and one may equate that to AI-generated child pornography. However, in the case of the former, there is an actual child whose likeness is being manipulated to place that child into pornography — which does cause actual harm to an actual child, albeit not as much harm as if a pedophile were to make real porn of that child. Not many will dispute that editing real children into explicit images should be illegal (although even those laws have been egregiously abused, such as when YouTube comedian Evan Emory was charged with manufacturing child sexual abuse material for using editing tricks in one of his videos to make it look like he was singing a sexually explicit song to a classroom of kids). However, that isn’t what this is. This is AI generating entirely fictional images of entirely fictional children who don’t even exist. As such, no actual harm is being done to any actual children.
AI image generators can only produce images based on patterns learned from the data that was fed into the software. Ergo, if these images were not generated by feeding the AI system actual child pornography, then it is both legally and ethically questionable to claim that the AI can generate actual criminal child pornography. There is no victim, there are no kids in illegal situations, and there is no way to verify the age of the nonexistent “kids” in the images… so how, exactly, can this legally be considered child pornography? Take any real child porn image in existence and law enforcement can often point to the image and say “that kid’s name is X and he/she is X years old.” If they can’t, it’s only because they simply don’t know the identities of the kids in the images (who are still very much real kids). Since there are no actual kids here, however, there is no possible way to identify any real kids. To abuse AI in the way that this man did is certainly reprehensible, to say the least (although not even remotely unexpected), but, even so, I simply don’t like the precedent that this is going to set and how easily it could be abused by law enforcement for other kinds of prosecutions. As previously stated, this man indubitably tried to groom a kid and belongs in prison for that alone, but I just don’t feel comfortable prosecuting him for the images given the circumstances.
I am fully aware that lolicon/shotacon (which, if you don’t know, is anime/manga child pornography, the former of little girls and the latter of little boys) is illegal in many countries — and, as much as I personally find lolicon/shotacon to be absolutely disgusting, I think that criminalizing cartoons sets a very dangerous legal precedent and is profoundly misguided. In his 2008 piece “Why Defend Freedom of Icky Speech?”, renowned author Neil Gaiman eloquently lays out why he opposes legal bans on lolicon/shotacon:
The Law is a blunt instrument. It’s not a scalpel. It’s a club. If there is something you consider indefensible, and there is something you consider defensible, and the same laws can take them both out, you are going to find yourself defending the indefensible.
Many decades earlier, H.L. Mencken had similar words regarding the fight for civil liberties: “The trouble with fighting for human freedom is that one spends most of one’s time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.” To stop the iron fist of the government before it closes, one is often forced to defend the indefensible. Attempts to take away civil liberties are always framed as fighting some social ill, so that anyone who opposes them can conveniently be framed as supporting said social ill. I do not doubt for a second that many of you will call me a pedophile for raising alarms about the criminalization of fake, AI-generated child abuse imagery.
The US ban on lolicon/shotacon was (rightfully, in my view) struck down as unconstitutional years ago in Ashcroft v. Free Speech Coalition (2002), and Americans can, for better or for worse, legally download all of the anime kiddie porn that they want. However, should AI-generated images that appear to be of real children (but aren’t) still be granted the same First Amendment protections? That is a very difficult legal and ethical question to answer, and it’s one that I’m sure will come before the Supreme Court in the next few years (and there is really no telling, at this point, how it will rule on the subject).
It is a sad reality of human nature that, no matter what a technology is, people will always find some way to use it for evil purposes. The potential for abuse when it comes to AI is unlimited. However, so is the potential for abuse when it comes to the brute force of the law. Ultimately, we, as a society, must decide which of those abuses is more dangerous. In this particular case, I firmly believe that the brute force of the law poses a far greater danger than fake, AI-generated child pornography does.