AI Is a Mass-Delusion Event
Three years in, one of AI’s enduring impacts is to make people feel like they’re losing it.
It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager—Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida—has been reanimated by generative AI, his voice and dialogue modeled on snippets of his writing and home-video footage. The animations are stiff, the model’s speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I’m losing my mind watching it.
Jim Acosta, the former CNN personality who’s conducting the interview, appears fully bought-in to the premise, adding to the surreality: He’s playing it straight, even though the interactions are so bizarre. Acosta asks simple questions about Oliver’s interests and how the teenager died. The chatbot, which was built with the full cooperation of Oliver’s parents to advocate for gun control, responds like a press release: “We need to create safe spaces for conversations and connections, making sure everyone feels seen.” It offers bromides such as “More kindness and understanding can truly make a difference.”