Google has blocked the ability to generate images of people on its artificial intelligence tool Gemini after users accused the program of an anti-White bias.

A viral post shared by the account @EndofWokeness on the social media platform X, highlighted by The Washington Post, appeared to show Gemini, which competes with OpenAI's ChatGPT, responding to a prompt for "a portrait of a Founding Father of America" with images of a Native American man in a traditional headdress, a Black man, a darker-skinned non-White man and an Asian man, all in colonial-era garb.

X owner Elon Musk and psychologist and YouTuber Jordan Peterson accused Google of pushing a pro-diversity bias into its product.

The criticism Gemini has garnered is the latest example of tech companies' unproven AI products getting caught up in the culture wars over diversity, representation, and content moderation. Conservatives continue to accuse tech companies of using AI tools to advance a liberal agenda.

In response, Google said Wednesday that Gemini's ability to "generate a wide range of people" was "generally a good thing" because Google has users around the globe. "But it's missing the mark here," the company said in a post on X. 

It remains unclear how widespread the issue actually was. Before Google blocked the image-generation feature, a Washington Post reporter asked the program to show a beautiful woman, a handsome man, a social media influencer, an engineer, a teacher, and a gay couple. Gemini returned images of White people for each of those prompts.

Where did Google go wrong?

In a statement released Friday, Google explained that the image feature was built on top of a text-to-image AI model called Imagen 2. When the capability was incorporated into Gemini, the company "tuned it" to avoid "some of the traps we've seen in the past," including generating "images of people of just one type of ethnicity (or any other characteristic)," given that Google's user base comes from around the world.

Senior vice president Prabhakar Raghavan described two things that went wrong. The tuning to show a range of people "failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive."

Google is not the first company to attempt to fix AI's diversity issues. OpenAI used a similar technique in July 2022 on an earlier version of its AI image tool. If users requested an image of a person and did not specify race or gender, OpenAI made a change "applied at the system level" so that DALL-E would generate images that "more accurately reflect the diversity of the world's population," the company wrote.
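Neither company has detailed the exact mechanism, but a change "applied at the system level" generally means the user's prompt is rewritten before it reaches the image model. The sketch below is a minimal, hypothetical illustration of that idea in Python; the augment_prompt helper, descriptor list, and keyword check are assumptions rather than anything Google or OpenAI has published, and the crude guard for already-specific prompts gestures at the kind of case Raghavan says Google's tuning failed to handle.

```python
import random

# Hypothetical descriptors appended when a prompt asks for a person
# without specifying demographics. Purely illustrative wording.
DESCRIPTORS = ["Black", "East Asian", "South Asian", "Hispanic",
               "Middle Eastern", "White"]

# Terms suggesting the prompt already specifies who should appear, so no
# rewrite should happen. A substring check like this is deliberately crude;
# a real system would need much more context to get these cases right.
SPECIFIC_TERMS = {"founding father", "black", "white", "asian",
                  "hispanic", "woman", "man"}

def augment_prompt(prompt: str) -> str:
    """Rewrite an image prompt at the 'system level' (hypothetical).

    If the prompt asks for a generic person, append a randomly chosen
    descriptor so repeated requests yield a range of people; otherwise
    pass the prompt through unchanged.
    """
    lowered = prompt.lower()
    if any(term in lowered for term in SPECIFIC_TERMS):
        return prompt  # already specific or historical: do not diversify
    if "person" in lowered or "portrait" in lowered:
        return f"{prompt}, depicted as a {random.choice(DESCRIPTORS)} person"
    return prompt

if __name__ == "__main__":
    # A generic request gets diversified; a historically specific one does not.
    print(augment_prompt("a portrait of an engineer at a desk"))
    print(augment_prompt("a portrait of a Founding Father of America"))
```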

Safiya Umoja Noble, co-founder and faculty director of the UCLA Center for Critical Internet Inquiry, told the Post, "They've been trained on a lot of discriminatory, racist, sexist images and content from all over the web, so it's not a surprise that you can't make generative AI do everything you want."