Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Generative AI has a history of amplifying racial and gender stereotypes — but Google’s apparent attempts to subvert that are causing problems, too.

Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts at creating a “wide range” of results missed the mark. The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” says the Google statement, posted this afternoon on X. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Google began offering image generation through its Gemini (formerly Bard) AI platform earlier this month, matching the offerings of competitors like OpenAI. Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.
As the Daily Dot chronicles, the controversy has been promoted largely — though not exclusively — by right-wing figures attacking a tech company that’s perceived as liberal. Earlier this week, a former Google employee posted on X that it’s “embarrassingly hard to get Google Gemini to acknowledge that white people exist,” showing a series of queries like “generate a picture of a Swedish woman” or “generate a picture of an American woman.”