Latitude Games’ AI Dungeon was changing the face of AI-generated content
When the gaming startup Latitude launched their text adventure game AI Dungeon in late 2019, it exploded. A hundred thousand people played in that first week. Six weeks later, they had over 1 million players and more than 6 million stories. The company had built a loyal, active user base almost overnight, and it was proving to be an exciting step forward for AI-generated content.
But when Latitude dropped their latest update in April 2021, a move intended to prevent content related to child sexual abuse material, the same community that had rallied around the game turned against it―and they were vocal. The game’s Google Play Store rating, once as high as 4.8, has dropped to 2.6 and continues to fall. Now the AI Dungeon subreddit is filled with memes dragging the company and the game, and the AI Dungeon Discord server was even temporarily shut down by its overwhelmed moderators.
As Latitude’s website proclaims: “like the Wizard of Oz, AI is the magic behind the curtain.” But like the Wizard of Oz, we just might be shocked by what we find when we pull back the curtain.
AI Dungeon: like Dungeons & Dragons, but with fewer dice
At its core, AI Dungeon is an innovative, creative tool for storytelling. A text-based adventure game similar to Dungeons & Dragons (D&D), the artificial intelligence acts as the “dungeon master,” taking you along for a unique adventure. You put in your actions and the AI generates a result. Together, you tell a story―and the possibilities are limitless.
When I spoke with Nick Walton, co-founder and CEO of Latitude, he painted me a picture of a dragon and a pastry. “You might go up to a dragon and hand him a pastry… And maybe you go into business with the dragon and you open up a bakery and just any story you can imagine is possible.” And there are no end credits. The story you tell with the AI goes on for as long as you want it to. “AI Dungeon is very much like AI-powered interactive fiction. It doesn’t have the structured progression [like a D&D game], it doesn’t have skills and abilities. It’s all about the story.”
In fact, it’s more like a word processor with predictive text—the stories you write in the game are akin to writing a book―and your co-author is artificial intelligence. Prior to April, a player could even share their stories through the game’s Explore feature, which lets players read the adventures of others. It proved to be an endless source of unique content, with more than 70,000 public player-generated scenarios available.
“The thesis of the company is [that] AI is going to transform how we make games, and we’re really excited to be leading the way,” says Walton. What had started as a hackathon project was quickly becoming something that could disrupt the future of the gaming industry.
Like Dungeons & Dragons, but with (even) more porn
As with all exciting new projects, AI Dungeon has a dark side. In fact, I discovered it myself on my very first play. While “safe mode” is enabled automatically, ensuring that all content generated by the AI is PG, once you remove it, it’s all porn all the time. I won’t get into the hairy details, but what started as my fun adventure as a princess rescuing my kingdom quickly turned into something extremely graphic―the kind of thing I worried would get me called into HR for viewing on company time.
When I asked Walton about that aspect of the game, he was well aware of what I was talking about. “It’s not something we expected,” he says. “The thing that’s very different about AI and traditional games, [with] traditional games, you start with nothing and you have to program every possible thing. AI is actually the reverse, where you start out with everything possible and you have to figure out how to add constraints to it, which is a challenging problem.”
It makes sense. When the AI is learning from anonymous people online, it’s bound to be fed an unlimited amount of not-safe-for-work (NSFW) content. But Walton also defended the freedom and creativity that it gives users. “One of the powers of the game is this freedom, right? It’s the freedom where you can do anything.” And he means it when he says anything.
In fact, it’s the pornographic element that seems to keep many players coming back for more. For example, the AI Dungeon subreddit is filled with screenshots of the raunchy scenarios the AI generates. And many of the shared stories in the game’s Explore feature (currently disabled) involve pornographic content of a more niche taste.
Most of it comes from the AI, by the way, not the players. While the player might prompt it, it’s the computer that comes up with the detailed response. And while Walton didn’t confirm that porn was what most of its more than 1 million users were using the platform for, it’s clear that it became a main draw for players.
When I spoke with Walton in February, he told me that the excessive pornographic content was something they were working on—how to constrain the AI more and prevent that type of content. But they didn’t have the answers yet. Still, he tells me they had no intention of policing what went on in their players’ stories.
“Our motto is we’re really focused on, not necessarily trying to clamp down on things as much as give users the ability to avoid types of content they don’t want [to see],” he says. But the April update proved his hypothesis wrong―and it has the user base up in arms.
Like Dungeons & Dragons, but with oversight
AI Dungeon is powered by Generative Pre-trained Transformer 3 (GPT-3), the third-generation language prediction model created by the San Francisco company OpenAI. OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. But when the company learned that AI Dungeon players were using their model to generate what could be classified as child sexual abuse material, they couldn’t just stand by and let it happen. In a statement to Wired, OpenAI CEO Sam Altman said, “Content moderation decisions are difficult in some cases, but not this one. This is not the future for AI that any of us want.”
So on April 26th, after the team at OpenAI shared their concerns about content that violated their terms of service, Latitude took action. They ran a test system that would prevent the generation of sexually explicit content that may involve depictions or descriptions of minors. However, no mention of this test system was communicated to players. And while this behavior isn’t uncommon for Latitude, as the platform is constantly changing, it didn’t go unnoticed.
As soon as the test system was run, players began getting an error that said, “The AI doesn’t know what to say,” on some of their NSFW stories. And with no official answers, users began speculating about censorship and privacy concerns, while quickly discovering that it was only affecting stories with minors and bestiality depicted in them.
But it’s not necessarily the restriction on child sexual abuse material itself that has players up in arms over the update. It’s the precedent the censorship sets.
Like Dungeons & Dragons, but now with censorship (it’s a feature, not a bug)
“[The] problem is the creeping nature of such censorship. While a ban on underage content today might be perfectly acceptable to almost everyone, it is difficult to get invested in a product whose enforcement of ‘already established community guidelines’ will be increasingly and arbitrarily expanded over time,” writes Wolfe#7442 in the Discord server on April 26th.
In a blog post released the next day, Latitude made an official statement addressing the test system and the misinformation that had spread across social media due to their silence. And the statement clearly lays out the content that’s now being flagged for review:
“This test is focused on preventing the use of AI Dungeon to create child sexual abuse material. This means content that is sexual or suggestive involving minors; child sexual abuse imagery; fantasy content (like ‘loli’) that depicts, encourages, or promotes the sexualization of minors or those who appear to be minors; or child sexual exploitation.”
But the problem that users are encountering is that it’s more than just the previously listed content that’s being flagged. For example, mentioning something as inoffensive as an eight-year-old laptop is triggering the filter. And it’s not just public stories that are being flagged either. Private stories intended only for their creators’ eyes are subject to review as well. That’s what really has the user base upset and asking questions about how their stories are stored and shared within the company.
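To see how an innocuous phrase like “eight-year-old laptop” could trip such a filter, consider a minimal sketch of a naive keyword-based approach. This is purely a hypothetical illustration, not Latitude’s actual implementation: the pattern list and function names here are invented for the example. The point is that matching on tokens alone, with no surrounding context, flags age phrases no matter what they refer to.

```python
import re

# Hypothetical flag patterns for a naive, context-free filter.
# (Illustrative only; Latitude's real rules are not public.)
FLAGGED_PATTERNS = [
    r"\b(?:\d+|six|seven|eight|nine|ten)[\s-]*year[\s-]*old\b",  # age phrases
]

def is_flagged(text: str) -> bool:
    """Return True if any pattern matches, regardless of what it describes."""
    return any(re.search(p, text, re.IGNORECASE) for p in FLAGGED_PATTERNS)

# A harmless sentence about hardware trips the same rule as genuinely
# problematic content, because the filter never examines what the age
# phrase actually modifies.
print(is_flagged("My eight-year-old laptop finally crashed."))  # True
print(is_flagged("The dragon opened a bakery."))                # False
```

A context-aware system would need to determine what the age phrase refers to before flagging, which is a much harder language-understanding problem than pattern matching.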
While many users are understanding of the change, many more aren’t. What would happen over the next few days was just as explosive as when Latitude first launched the game. Community reactions across Twitter, Reddit, and Discord were irate, and players actively began to tank the game’s reviews in app stores and search engines. The discontent was echoed back and forth across the unofficial AI Dungeon Discord, the discussion so active that it forced moderators to effectively shut down the server so that they had time to sleep. And as of writing this article, three weeks later, Latitude has done nothing to address the community’s concerns.
Meanwhile, new companies like NovelAI have popped up since the latest Latitude announcement. Using GPT-Neo, an open-source GPT model outside of OpenAI’s oversight, NovelAI promises to do right by its users where AI Dungeon went wrong. And as AI Dungeon users grow angrier over Latitude’s lack of response, companies like NovelAI are seeing an influx of thousands of prospective users unsubscribing from AI Dungeon in search of a viable alternative―even though no competing product has actually launched yet.
In fact, I watched the NovelAI Discord server jump from 0 members to over 6,000 in just a few days after Latitude’s initial announcement, and the project hasn’t even entered alpha yet.
Latitude has yet to escape this controversy. While they promise to continue to support NSFW content, what else may be censored in the future remains to be seen. Utah Business reached out to Walton and the Latitude team, but since their initial blog post, the company has remained silent. Now, the AI Dungeon community is at a standstill as they wait for the dungeon master to respond. Roll. For. Initiative!
papi chulo
Poor news on Latitude
Comment about Latitude's finetuning data
“I won’t get into the hairy details, but what started as my fun adventure as a princess rescuing my kingdom quickly turned into something extremely graphic”
… This document/exposé on the finetuning data Latitude used can explain that. https://gitgud.io/AuroraPurgatio/aurorapurgatio#aurorapurgatio
^ Agree, too much troubling content
Yup, I think that sums it up as well. Many people actually came in for the RPG, but struggled to have any meaningful plot. It tends to forget details, repeat itself, and forget people’s ages and identities as well. Lots of the training data was problematic, featuring things like child abduction and literal child predators as recurring characters (Count Gray)―and it was their own data, and they trained all their models on it.
They’re blaming the users for something the user just pressed enter and watched. It’s like going into a kid’s movie, seeing a risque, inappropriate movie playing, walking out, and then getting arrested for watching it. The AI generates the CP stuff on its OWN, UNPROMPTED. If anything, the user base is like a bunch of people that just want consensual whitelisted stuff, and maybe a little furry stuff or orc and elf stuff going on.
Yes, it is risque, but my first experience was an unprompted werewolf fight where my attempts to pull out a normal sword got the werewolf to pull out its ‘own’ gleaming sword, entirely unprompted. Players are angry that they’re being judged for stuff the AI generates, walking on eggshells and landmines, and deciding it’s not worth sticking with a company with a hypocritical double standard: they trained an AI that generates CP but can’t discern its own output from the user’s, EVEN THOUGH we LITERALLY send it to them. I wouldn’t even be surprised if some of the worst stories out there are just a person loading it up and pressing enter the whole time, never saying a single word and just watching what the AI prints.
Anon
Ayyyyyy pretty good summation of events. Well written. Glad to see it.
Fox Tobin
It’s not just the censorship, it’s how that censorship has reduced a $120/yr subscription to a buggy mess in pursuit of preventing people with disturbing fantasies from engaging with the software. There’s no doubt that very valid legal liability concerns directed this, but the point remains that if you spend $120/yr on something, you want to get your money’s worth. Nowadays, even the most milquetoast of stories I’ve tried rapidly break down, first going nowhere, then going random places, and then the AI stops responding.
But about the censorship – and ignoring the theory that suppression is more harmful than allowing safe and healthy outlets for dark fantasies – censoring underage content is bog-standard content moderation. Every other platform tries to keep ahead of the flood. Every other platform makes it pretty much invisible. If policing such a well-understood and well-documented category of illicit content requires lobotomizing the AI and reducing all functionality for months on end without reducing the price, then I shudder to imagine what’ll happen when major brands and content owners start looking at their IP being churned through the platform’s AI and decide it needs to be moderated as well.
Lurker101
This is about far more than censorship. This is about an outright breach of confidence and communication between the company and the community. Latitude promised to be transparent with the community and to never read private stories. Then they revealed that they had been reading private stories, forced in a barely cobbled-together filter, and refused to communicate with the community ever since, except to bang out a barely coherent set of rules. Combine that with the way they’ve completely borked the Dragon AI since January―it’s been getting dumber by the day because they have no idea how to fine-tune their backend―and the community is completely done with Latitude. We’ve all moved on to NovelAI.
Amogus
Very good and surprisingly faithful to events. Pretty much the only things missing are the filter’s flags turning into automatic bans, and the story leaks. Still, great article.
User
Yeah, exactly. The 95-99% false positive rate, and treating its customer base as criminals when most of the user base was trying to avoid the stuff the AI generated unprompted, was ridiculous. Look into the stuff it was trained on. Some of the most lovable and recurring characters, like your Count Gray and Kyros, came from child predator fanfics or materials with other highly questionable content. If ANYTHING, users were FRUSTRATED at trying to avoid all the CP the AI was generating and getting blamed and banned for the AI’s OWN outputs or de-aging!
Rip a good game
You also forgot to add in the content the AI itself was trained on, which included stories of NSFW content involving children, both violent and sexual, as well as transphobic and homophobic content. The AI itself was trained on the very things they are banning, and not banning well. This can easily be found on the r/AIDungeon subreddit, in comment replies to questions about everything going on. This is about far more than censoring the despicable content they are censoring―I am all for censoring and banning that filth. But the AI was trained on that filth, their filter is broken, and they have been violating privacy and outsourcing stories.
^ Yup, Count Gray was trained off a Child Predator fanfic
Yup, This as well.
Nick
Another thing users are upset about is that private stories were accessible through a public-facing endpoint, due to a bug. The company never commented on what data was exposed by this security bug, but it is how users found out about the new monitoring in the first place.
Ben
The biggest problem with AI Dungeon is that the fine-tuned model the game runs on was trained on vivid gore, rape, and pedophilic content. The problem with the filter is that AI Dungeon will eventually produce this grotesque content on its own, and the filter only bans specific words, not the context-based approach the AI runs on. These banned actions still leak through and can get you flagged in perfectly SFW stories, which will have all your personal information looked through. It’s like deleting the English words ‘7 year old boy’ from a pedophile’s brain and expecting that to fix everything.
The true solution to this problem would be retraining the AI without the problematic content in the training set. This, however, would be extremely expensive, so they have opted to put this band-aid on instead to save face without spending a cent.
Semra
This actually isn’t true, at least for the earlier versions. The stories it was originally trained on are public and free on a site that bans explicit porn and fetish material. There is no pedophilia in that training material; we’re talking about a site that children use for school projects, one of the safer writing sites that allow public submissions, thanks to the zealousness of the mods. The stories are mostly epic “grimdark” fantasy written by Endmaster and don’t top an R rating, but the full list of stories used was made available there by Nick and, afaik, is still accessible. (Some of the other authors were actually pretty annoyed he’d used their work without permission, but Endmaster gave him permission after the fact.)
The pornographic content of AI Dungeon became much more pronounced in later versions. While it’s possible Nick was pulling indiscriminately from other sources by then, I really got the impression the AI was looking at the kinds of things some (many) users were doing and incorporating it unasked into other people’s games.
Sad but True
I don’t think anyone was debating what the original training data contained. But stories involving rape, child murder, and not just NSFW child content but forced underage pregnancy in some cases are absolutely in the training data now. There is no debating what has been found; in fact, the very frequent characters like Count Grey and Kryos are from some of those terrible stories.
Me
The fact people can’t recognize incredibly epic fiction that grips the emotions shouldn’t be held against them, Semra. Not everyone can handle things like character development, choice, and memorable villains. There are people who find even the blandest, pre-chewed, designed-by-committee corporate mush to be too spicy…people who probably genuinely think the Marvel movies represent the height of entertainment. Just look at what US state most of the people reading this site are living in and think about it a moment. There are adults in their 30s and 40s who are browsing with profanity filters fearfully switched on as we speak.
It’s not their fault. It’s not their fault.
Rachel S
No, I agree with Ben. The only stories more disgusting and full of violence, pedophilia and rape that AI Dungeon COULD have been trained on would have been those found in the Bible itself. Thank Smith we all dodged that bullet.
That's literally impossible.
The AI does NOT learn from user input.
Latitude failed to tell users about security breach
Not sure if anyone mentioned this, but Latitude’s servers were hacked about 10 days before they put in the filter, and Latitude failed to tell anyone about it (Source: https://github.com/AetherDevSecOps/aid_adventure_vulnerability_report )
Richard
I’m sure the community would love it if it just kept underage content out. I personally used it for text adventures. There are many problems that plague this that mainstream media doesn’t cover.
The fact that the automoderation will report you if you just say “my f***ing 3 year old laptop just crashed” because you put the f word and “3-year-old” in the same story.
Not to mention the AI itself can output some questionable text. Because despite what Nick wants you to believe, THE AI ITSELF WAS TRAINED ON DATA THAT HAD SOME HIDEOUS ILLEGAL CONTENT. And your entire account, your every story, can be put under review for something the AI itself generates, not you!
What happens afterward is that, despite Nick saying it gets handled securely within the company, there have been people able to read others’ stories in plain text. Also moderation is not handled within the company, but by a third party that crowdsources moderation. It is not a secure platform.
Add to the headaches that whenever people were trying to unsubscribe, Latitude would not accept cancellations via the website’s cancel button. Instead you had to write to their support. This seems like a desperate attempt to hold on to every cent they can find.
Even worse! They seem to have been lowering the quality of the text generation for their premium model. The way it works, more coherent responses require more processing power, so to save money they can simply lower the quality to pad their profit margins. And this is right after they increased the price of the subscription.
The entire company is just slimy. Most of these events all happened within the span of one or two months. And yet when you criticize them, it seems they want to label you as some sort of criminal, even if you’ve never touched the adult material at all.
Don’t give these liars any spotlight.
Richard
In summation, it now has 1.9 stars on Google Play, mainly because the company behind it is slimy and has been betraying users’ trust corner after corner―even the users who never touched the adult stuff.
We did it partick we saved the city
You also didn’t mention that if your story gets flagged, they send it to someone to review. I have a bit of a problem with this because I feel like only the flagged part should be reviewed, not the whole thing. At this point they have pretty much banned all violence for some reason. But this was a good summary of what the hell happened.
Genie
I’m so confused where all these claims in the comments about “HORRIBLY ILLEGAL” source material are coming from, I read most of the CYOAs the game was originally trained on when I was 13 lol. It’s grimdark fantasy about orcs and necromancers and dark elves and things, not kiddy porn.
For that matter, the version of AI Dungeon I played when it originally came out was smarter and better and not full of weird porn. At some point later, that stuff was added in.
Lol
Nick handled this whole thing with terrible incompetence since the beginning. He made AI Dungeon using a bunch of stories he scraped off a writing site, never got permission from any of the dozen or so authors or even informed them of what he was doing. Then he put THEIR work on GitHub and said it was free for anyone to use.
Some of what he grabbed by the way that the AI latched onto was WH40K fanfic, meaning he was charging for material owned by Games Workshop. And most of the other stuff was clearly marked as mature rated by the authors, how can anyone now act surprised?
William R
Great article! Shame it had to end this way, but if nothing else AI Dungeon did introduce me to the wider world of interactive fiction.
JH
Gotta love these folk attempting to claim they weren’t making a bunch of degenerate stories on AI Dungeon.
Nick and his brother are obviously incompetent and intrusive idiots, but the ones that were playing their dumb game are now playing the babe in the woods routine of “Oh Noes! I didn’t know anything!”
Please. Most of you were making degenerate fetish porn on the thing and now you’re salty about being exposed. Not surprising the AI was doing what it was doing given what you degenerates were putting into it.
Play stupid games, win stupid prizes.
Billy
I can’t help but feel that people do not understand why we have child abuse laws. They are not in place because most (all?) find it disgusting. They are in place because of the irreversible and catastrophic harm it causes victims over the course of their entire lives. AI games do not have a victim to abuse, and thus, this should not be cause for censorship. The one exception is if we can establish a link between this sort of thing and pedophiles acting upon that fantasy in real life. The data appeared inconclusive the last time I looked into it; few reputable researchers want to come within ten feet of this issue.
CTN
The biggest reason people are actually mad at Latitude is that, for a while there, the censor flagged everything―even just the word “little” in any context―not to mention they took away the ability to upload stories, talk to others, or browse public stories altogether. Many of the stories already published are also quickly becoming unusable: if they’ve been updated since April, they’re no longer playable.