Google announced on Thursday that it is temporarily pausing its Gemini artificial intelligence chatbot's ability to generate images of people, after criticism that the feature produced historically inaccurate depictions. The decision came after users posted screenshots on social media showing the AI model inserting racially diverse characters into historically white-dominated scenes.
The tech giant acknowledged the problems with Gemini's image generation feature and said it is already working to address the inaccuracies. Google emphasized that it is pausing the generation of people's faces and will release an improved version soon. The company noted that Gemini can generate a wide range of people but admitted that the feature is currently "missing the mark."
University of Washington researcher Sourojit Ghosh, who has studied bias in AI image generators, expressed support for Google's decision to pause the generation of people's faces, but also highlighted the complexity of addressing the broader harms posed by image generators trained on generations of photos and artwork found on the internet. Ghosh emphasized that remedying the representational harm caused by AI models requires more than technical patches.
Google's Gemini AI model has been under scrutiny for inserting people of color into historical contexts where they would not have appeared. When asked to create images of a 1943 German soldier, the AI generated illustrations of nonwhite individuals in Nazi uniforms; it also depicted nonwhite Founding Fathers and U.S. senators from the 1800s, who were in fact all white men. The historically inaccurate output has raised concerns about representation and racial bias within AI models.
In a similar vein, OpenAI, the company behind the AI text generator ChatGPT, also faced issues this week when its flagship tool began returning unexpected responses. The company resolved the bug, which had caused ChatGPT to produce nonsensical sentences instead of its usual output. OpenAI explained that the error occurred in the step where the model chooses the numbers used to generate responses, leading to word sequences that made no sense.
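The "choosing numbers" step OpenAI describes refers to how language models work internally: the model scores every word in its vocabulary, converts those scores to probabilities, samples a numeric token ID, and maps that ID back to text. The sketch below is purely illustrative (a tiny hypothetical vocabulary, not OpenAI's actual code) and shows how a bug in the ID-to-word mapping can turn sensible probabilities into gibberish.

```python
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]  # hypothetical tiny vocabulary

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, rng):
    """Sample a token ID (a number) according to its probability."""
    probs = softmax(logits)
    r = rng.random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id
    return len(probs) - 1  # guard against floating-point rounding

def buggy_lookup(token_id):
    """A broken ID-to-word mapping: probabilities are fine, but the
    wrong word comes out -- the kind of failure that yields nonsense."""
    return VOCAB[(token_id + 3) % len(VOCAB)]  # off-by-N mapping error

rng = random.Random(0)
logits = [10.0, 0.1, 0.1, 0.1, 0.1]  # "the" is overwhelmingly likely
token_id = sample_token(logits, rng)
print(VOCAB[token_id])        # correct mapping: prints "the"
print(buggy_lookup(token_id))  # buggy mapping: prints "on" instead
```

Even with a correct probability distribution, the garbled lookup in `buggy_lookup` produces output unrelated to what the model intended, which matches the reported symptom of fluent-looking but nonsensical text.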
Overall, the incidents involving Google's Gemini and OpenAI's ChatGPT highlight the ongoing challenges in developing inclusive and accurate AI models that reflect the diversity of society without perpetuating harmful biases. Both companies are working to improve their AI tools and address the issues raised by users and researchers.