The launch of Gemini marks a before-and-after moment for Google, which can finally compete in generative Artificial Intelligence against OpenAI and its ChatGPT and DALL-E projects; but it also means Google can cause more legal and social problems if the tool is not used correctly.
In the wrong hands, such a tool can be used to create fake content for political and malicious purposes: from fake news of an arrest to a doctored photo of a terrorist attack, the possibilities are endless, and the fear that the future of the Internet will be filled with lies is not unfounded. But it is possible that, in trying to steer clear of that controversy, Google made a fundamental mistake.
And the first users of Gemini noticed a very curious detail: the image-generation function seems incapable of creating white men, and it is as strange as it sounds.
Google’s ‘problem’ with white people
Regardless of the type of prompt used to create the image, the AI most of the time prefers to generate images of black and Asian people, often women; something that has fanned the flames of so-called "reverse racism" accusations and serves as fertile ground for conspiracy theories.
The problem is evident in the prompts shared by many users on social media. For example, if Gemini is asked to create an image of the Pope, the results may include a black man and an Indian woman.
In another example, Gemini was asked to create a medieval knight: two of the results were women (one Asian and one black), the third was a black man, and the fourth was a warrior from the Ottoman Empire; these are not exactly what we imagine when we think of a "medieval knight."
And it’s not just a question of historical content. Other users report that when they asked Gemini to create people of a specific nationality, such as Australian, German, American or British, the results were almost always people of ethnic minorities, with a preference for black women. And while it is undeniable that black women are part of the society of these countries, it is no less strange that none of the examples showed a white man.
Is there “reverse racism”?
That this is a real problem is demonstrated by the fact that Google itself has confirmed it.
However, contrary to what some extremists on Twitter are already suggesting, this is not a case of reverse racism; rather, it is the result of Google's attempt to avoid pointless controversy, which, ironically, is what created one in the first place.
Generative AIs are trained on a huge amount of data, and the results they generate depend largely on the quality and quantity of that data. The problem is that the AI will prioritize whatever is most present in it: for example, if most of the photos of people in the training set show white people, the AI will mainly generate white people.
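This effect can be illustrated with a deliberately simplified sketch (the labels and proportions are hypothetical, not Gemini's actual data): a generator that samples uniformly from an imbalanced dataset simply reproduces the dataset's skew.

```python
import random
from collections import Counter

# Hypothetical toy "training set": labels only, heavily skewed toward one group.
dataset = ["white"] * 80 + ["black"] * 10 + ["asian"] * 10

# A naive generator that samples uniformly from the data reproduces its skew:
# roughly 80% of the outputs will belong to the majority group.
random.seed(0)
samples = [random.choice(dataset) for _ in range(1000)]
print(Counter(samples))
```

Real image models don't sample labels this way, of course, but the principle is the same: without intervention, the output distribution mirrors the training distribution.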
During the early months of the AI craze, it became apparent that many companies, including OpenAI, had exactly this problem, since similar prompts produced very similar content. A well-known failure: when asked for a picture of a criminal, most models produced a black man. The data from OpenAI, Google and the rest of the companies was not as complete as it should have been and, internally, the models had associated crime with black men. And it's not just about race; the same happens with popular expressions in text, or even with car brands.
To avoid this racism, you can use a wider variety of data, or modify the algorithms to give more weight to certain options and thus achieve a balance that pleases all users. In the absence of official confirmation, it appears that Google "went too far" and ended up creating an AI that gives excessive weight to certain parameters to prevent some groups from being overrepresented; the result is that other groups always are. That is not ideal either, but fortunately it should be easier to fix than the actual racism we see every day in our lives.
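The reweighting idea, and how it can overshoot, can be sketched in a few lines. This is a hypothetical illustration of the general technique (inverse-frequency weighting), not Google's actual method:

```python
import random
from collections import Counter

# Hypothetical skewed dataset, as before.
dataset = ["white"] * 80 + ["black"] * 10 + ["asian"] * 10
counts = Counter(dataset)

# Inverse-frequency weights: each example is weighted by 1/frequency of its
# group, so every group gets the same total weight and sampling becomes
# roughly uniform across groups.
weights = [1.0 / counts[x] for x in dataset]
random.seed(0)
balanced = random.choices(dataset, weights=weights, k=3000)
print(Counter(balanced))  # roughly 1000 per group

# Overcorrecting (for example, squaring the inverse frequencies) flips the
# skew: minority groups now dominate and the former majority group almost
# never appears, loosely analogous to what users saw in Gemini.
over_weights = [(1.0 / counts[x]) ** 2 for x in dataset]
overcorrected = random.choices(dataset, weights=over_weights, k=3000)
print(Counter(overcorrected))
```

The design lesson is that the correction itself has a tuning knob: balanced weights equalize the groups, while anything stronger just replaces one skew with another.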