
Google Gemini Faces Accusations of Racism Against White People

Google’s new artificial intelligence model, Gemini 1.5, has been making headlines with its image generation feature. The model has already sparked controversy, however, over the race of the people depicted in the images it produces.
Since its recent release, Gemini 1.5 has been quickly adopted by a wide range of users, many of whom have noticed that its AI-generated images of people tend to be non-white. This has raised concerns that the model is biased against producing images of Caucasian individuals.
Gemini 1.5 is designed to create images in response to user prompts, but many users claim that it consistently avoids generating images of white people. This has led to accusations of racism against Google and prompted a heated debate about the implications of the model’s behavior.
As users continue to explore the capabilities of Gemini 1.5, questions remain about the biases embedded in the technology. Google has yet to respond to the allegations, but the controversy surrounding Gemini 1.5 is a stark reminder of the ongoing challenge of building fair and unbiased artificial intelligence.





