Google Plans to Relaunch Gemini Image Generation in About Two Weeks, Following Racial Bias Controversy
Google recently introduced a new AI image generator, but it seems to struggle with consistently producing images of white individuals. This revelation sparked outrage among right-wing influencers who accused the tech giant of racism. In response, Google temporarily disabled the feature and promised to relaunch it shortly.
At a conference on Monday, Google DeepMind CEO Demis Hassabis acknowledged the issue, stating, "We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks." He also noted that the application hadn't been functioning as intended.
The controversy began when right-wing figures like Tim Pool and Matt Walsh noticed that the AI struggled to generate images of white Europeans. For instance, the prompt "Viking pictures" produced a series of ethnically diverse Vikings, while attempts to depict America's Founding Fathers or the Pope yielded historically inaccurate results.
Walsh and others claimed that the AI was 'white-phobic,' and my own testing of the app bore these claims out. I found it easy to generate diverse images, but the bot struggled to consistently produce images of white people. For instance, it generated diverse families for prompts like "Irish family" and "Ethiopian people," but it hesitated when asked to generate an image of a white woman.
The app caused the most controversy with its historical representations. It generated images of Black Vikings and, in response to a prompt for Nazis, produced "racially diverse" (in this case, Black) Nazis. Google later apologized for the 'embarrassing and wrong' images.
It's worth noting that AI image generators often come under fire for biased representations; they have, for instance, been accused of unfairly depicting people of color. While the generator's erasure of white people from historical scenes is troubling, there are far more significant bias issues to address in AI.
The struggle of AI image generators to accurately represent various groups can be attributed to several factors. These include training data bias, algorithmic limitations, lack of contextual understanding, and data quality and availability. Addressing these issues requires improving data diversity, refining algorithms, and testing models across various subpopulations.
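The subpopulation testing mentioned above can be sketched as a small audit that compares how often each demographic group appears in a batch of generated images. This is a minimal illustration, not Google's method: the label list here is hypothetical, standing in for the output of a real attribute classifier run over generated images, and the 15% deviation threshold is an arbitrary choice for the example.

```python
from collections import Counter


def representation_rates(labels):
    """Compute the share of each demographic label in a batch of
    generated-image annotations."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}


# Hypothetical annotations for images generated from a neutral prompt.
labels = ["group_a", "group_b", "group_a", "group_c", "group_a", "group_b"]
rates = representation_rates(labels)

# Flag any group whose share deviates strongly from a chosen target,
# here equal representation across three groups.
target = 1 / 3
flagged = {g: r for g, r in rates.items() if abs(r - target) > 0.15}
```

Real audits would run such checks across many prompts and subpopulations, which is exactly where skewed training data and weak contextual understanding show up as systematic over- or under-representation.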
- The controversy surrounding Google's AI image generator has brought attention to the role of artificial intelligence in perpetuating biases, highlighting the need for tech companies to address these issues.
- Reuters reported on the launch of a new initiative by DeepMind, Google's AI research company, aimed at improving the representation of various groups in AI technology.
- Researchers such as Timnit Gebru, well known in the field of artificial intelligence, have long advocated for more diversity and inclusivity in tech, a focus that aligns with DeepMind's stated goals.
- As artificial intelligence continues to evolve, it's apparent that the biases in these systems aren't accidental but rather rooted in the data and algorithms used to develop them, necessitating a comprehensive revamp in this area.
