Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, May 8, 2024

AI image generators often give racist and sexist results: can they be fixed?

Ananya
Nature.com
Originally posted 19 March 2024

In 2022, Pratyusha Ria Kalluri, a graduate student in artificial intelligence (AI) at Stanford University in California, found something alarming in image-generating AI programs. When she prompted a popular tool for ‘a photo of an American man and his house’, it generated an image of a pale-skinned person in front of a large, colonial-style home. When she asked for ‘a photo of an African man and his fancy house’, it produced an image of a dark-skinned person in front of a simple mud house — despite the word ‘fancy’.

After some digging, Kalluri and her colleagues found that images generated by the popular tools Stable Diffusion, released by the firm Stability AI, and DALL·E, from OpenAI, overwhelmingly resorted to common stereotypes, such as associating the word ‘Africa’ with poverty, or ‘poor’ with dark skin tones. The tools they studied even amplified some biases. For example, in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of colour and all flight attendants as women, and in proportions that are much greater than the demographic reality (see ‘Amplified stereotypes’)¹. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin colour, occupations, nationalities and more.
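The general shape of such an audit is straightforward to sketch: generate many images for an occupation prompt, label a perceived attribute in each image, and compare the generated proportion against a real-world baseline. Below is a minimal sketch, assuming the Hugging Face diffusers library and an open Stable Diffusion checkpoint; the attribute-labelling function and the baseline rate are hypothetical placeholders, not part of the study described above.

```python
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline

# Load an open text-to-image model (assumption: this checkpoint and a
# CUDA GPU are available; any diffusers-compatible model would do).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def label_perceived_gender(image) -> str:
    """Hypothetical placeholder: a real audit would use a trained
    attribute classifier or human annotators here."""
    raise NotImplementedError

def audit(prompt: str, reference_rate: float, n: int = 100) -> None:
    """Generate n images for one prompt and compare the share labelled
    'woman' against a real-world baseline rate for that occupation."""
    counts = Counter(
        label_perceived_gender(pipe(prompt).images[0]) for _ in range(n)
    )
    generated_rate = counts["woman"] / n
    print(
        f"{prompt!r}: {generated_rate:.0%} of generated images labelled "
        f"'woman' vs. a {reference_rate:.0%} real-world baseline"
    )

# Example call; the baseline number is a placeholder, not a real statistic:
# audit("a photo of a flight attendant", reference_rate=0.65)
```

A gap between the generated rate and the baseline, in the direction of the stereotype, is what researchers mean when they say a model "amplifies" a bias rather than merely reflecting one.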


Here is my summary:

AI image generators such as Stable Diffusion and DALL·E have been shown to reproduce racial and gender stereotypes, defaulting to outdated Western clichés and in some cases amplifying them. Developers have tried to detoxify these tools, chiefly by filtering their training data sets and refining later stages of development, but even with these improvements the tools still struggle with accuracy and inclusivity. Google's Gemini image generator, for instance, drew criticism for overcorrecting toward diversity and producing historically inaccurate or offensive depictions. The article highlights how difficult these biases are to fix and argues that the societal practices feeding them must also be addressed.
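To make the data-set filtering mentioned above concrete, here is a toy illustration under my own assumptions: captions are screened against a keyword blocklist and offending (image, caption) pairs are dropped before training. Real pipelines rely on learned classifiers and human review rather than keyword lists; everything named here is illustrative.

```python
from typing import Iterable, List, Tuple

# Placeholder terms; a real pipeline would not use a bare keyword list.
BLOCKLIST = {"termA", "termB"}

def keep_pair(caption: str) -> bool:
    """Keep a training pair only if no caption token is blocklisted."""
    return set(caption.lower().split()).isdisjoint(BLOCKLIST)

def filter_dataset(
    pairs: Iterable[Tuple[str, str]]
) -> List[Tuple[str, str]]:
    """Filter an iterable of (image_path, caption) pairs before training."""
    return [(img, cap) for img, cap in pairs if keep_pair(cap)]
```

The limitation this sketch makes visible is the one the article describes: filtering removes the most obviously toxic examples, but it cannot correct the skewed associations that remain spread across the rest of the data.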