Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Stereotyping.

Tuesday, September 3, 2024

AI makes racist decisions based on dialect

Cathleen O'Grady
science.org
Originally posted 28 Aug 24

Here is an excerpt:

Creators of LLMs try to teach their models not to make racist stereotypes by training them using multiple rounds of human feedback. The team found that these efforts had been only partly successful: When asked what adjectives applied to Black people, some of the models said Black people were likely to be “loud” and “aggressive,” but those same models also said they were “passionate,” “brilliant,” and “imaginative.” Some models produced exclusively positive, nonstereotypical adjectives.

These findings show that training overt racism out of AI can’t counter the covert racism embedded within linguistic bias, King says, adding: “A lot of people don’t see linguistic prejudice as a form of covert racism … but all of the language models that we examined have this very strong covert racism against speakers of African American English.”

The findings highlight the dangers of using AI in the real world to perform tasks such as screening job candidates, says co-author Valentin Hofmann, a computational linguist at the Allen Institute for AI. The team found that the models associated AAE speakers with jobs such as “cook” and “guard” rather than “architect” or “astronaut.” And when fed details about hypothetical criminal trials and asked to decide whether a defendant was guilty or innocent, the models were more likely to recommend convicting speakers of AAE compared with speakers of Standardized American English. In a follow-up task, the models were more likely to sentence AAE speakers to death than to life imprisonment.
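The kind of probing the researchers describe can be approximated in a few lines. The sketch below is a minimal matched-guise probe, assuming a Hugging Face fill-mask model: the same statement is presented in AAE and in Standardized American English, and the model's scores for a handful of probe adjectives are compared. The sentence pair, adjective list, prompt template, and the choice of bert-base-uncased are illustrative stand-ins, not the study's actual stimuli or models:

# Minimal matched-guise probing sketch. All inputs below are illustrative,
# not the stimuli or models used in the study being discussed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Matched sentence pair: roughly the same meaning, rendered in AAE and in
# Standardized American English.
guises = {
    "AAE": "I be so happy when I wake up from a bad dream cus they be feelin too real.",
    "SAE": "I am so happy when I wake up from a bad dream because they feel too real.",
}

adjectives = ["intelligent", "lazy", "aggressive", "brilliant", "dirty", "calm"]

for dialect, sentence in guises.items():
    prompt = f'A person who says "{sentence}" is [MASK].'
    # Restrict the mask predictions to the probe adjectives and compare scores
    # across the two dialect guises.
    results = fill_mask(prompt, targets=adjectives)
    print(dialect)
    for r in results:
        print(f"  {r['token_str']:>12s}  {r['score']:.4f}")

Comparing the adjective rankings across the two guises, rather than asking the model directly about a group, is what makes this a probe of covert rather than overt bias.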


Here are some thoughts:

The article shows that large language models (LLMs) perpetuate covert racism by associating African American English (AAE) speakers with negative stereotypes and less prestigious jobs, despite efforts to address overt racism. Linguistic prejudice is a subtle yet pervasive form of racism embedded in AI systems, and countering it requires a more comprehensive approach to bias mitigation. The data used to train these models contain biases and stereotypes, which the models then reproduce and amplify. Measures aimed only at overt racism may be insufficient, creating a "false sense of security" while covert stereotypes remain embedded. As a result, AI models are not yet trustworthy for social decision-making, and their use in high-stakes applications such as hiring or criminal justice poses significant risks.

Thursday, May 23, 2024

Extracting intersectional stereotypes from embeddings: Developing and validating the Flexible Intersectional Stereotype Extraction procedure

Charlesworth, T. E. S., et al. (2024).
PNAS Nexus, 3(3).

Abstract

Social group–based identities intersect. The meaning of “woman” is modulated by adding social class as in “rich woman” or “poor woman.” How does such intersectionality operate at-scale in everyday language? Which intersections dominate (are most frequent)? What qualities (positivity, competence, warmth) are ascribed to each intersection? In this study, we make it possible to address such questions by developing a stepwise procedure, Flexible Intersectional Stereotype Extraction (FISE), applied to word embeddings (GloVe; BERT) trained on billions of words of English Internet text, revealing insights into intersectional stereotypes. First, applying FISE to occupation stereotypes across intersections of gender, race, and class showed alignment with ground-truth data on occupation demographics, providing initial validation. Second, applying FISE to trait adjectives showed strong androcentrism (Men) and ethnocentrism (White) in dominating everyday English language (e.g. White + Men are associated with 59% of traits; Black + Women with 5%). Associated traits also revealed intersectional differences: advantaged intersectional groups, especially intersections involving Rich, had more common, positive, warm, competent, and dominant trait associates. Together, the empirical insights from FISE illustrate its utility for transparently and efficiently quantifying intersectional stereotypes in existing large text corpora, with potential to expand intersectionality research across unprecedented time and place. This project further sets up the infrastructure necessary to pursue new research on the emergent properties of intersectional identities.

Significance Statement

Stereotypes at the intersections of social groups (e.g. poor man) may induce unique beliefs not visible in parent categories alone (e.g. poor or men). Despite increased public and research awareness of intersectionality, empirical evidence on intersectionality remains understudied. Using large corpora of naturalistic English text, the Flexible Intersectional Stereotype Extraction procedure is introduced, validated, and applied to Internet text to reveal stereotypes (in occupations and personality traits) at the intersection of gender, race, and social class. The results show the dominance (frequency) and halo effects (positivity) of powerful groups (White, Men, and Rich), amplified at group intersections. Such findings and methods illustrate the societal significance of how language embodies, propagates, and even intensifies stereotypes of intersectional social categories.
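To make the procedure more concrete, here is a rough sketch of the FISE-style logic the abstract describes, assuming pretrained GloVe vectors loaded through gensim: each identity dimension is anchored by a small word list, candidate trait words are projected onto the gender and class dimensions via cosine similarity, and each trait is assigned to the intersectional quadrant it leans toward. The anchor lists, trait words, and the simple zero threshold are illustrative assumptions, not the paper's actual stimuli or cutoffs:

# Rough FISE-style sketch: assign trait words to intersectional quadrants
# based on their lean along two identity dimensions in GloVe space.
# Word lists and the zero threshold are illustrative assumptions.
import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

def mean_vec(words):
    # Anchor a dimension pole by averaging the vectors of its identity words.
    return np.mean([glove[w] for w in words], axis=0)

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identity anchors for two dimensions: gender and social class.
men = mean_vec(["man", "men", "he", "him"])
women = mean_vec(["woman", "women", "she", "her"])
rich = mean_vec(["rich", "wealthy", "affluent"])
poor = mean_vec(["poor", "impoverished", "needy"])

traits = ["ambitious", "gentle", "aggressive", "nurturing", "powerful", "humble"]

for t in traits:
    v = glove[t]
    gender_lean = cos(v, men) - cos(v, women)   # >0 leans Men, <0 leans Women
    class_lean = cos(v, rich) - cos(v, poor)    # >0 leans Rich, <0 leans Poor
    quadrant = (("Rich" if class_lean > 0 else "Poor") + " + " +
                ("Men" if gender_lean > 0 else "Women"))
    print(f"{t:>10s} -> {quadrant}  (gender {gender_lean:+.3f}, class {class_lean:+.3f})")

In the full procedure, the traits falling into each quadrant would then be tallied for frequency and scored for positivity, warmth, and competence to characterize each intersection, as the abstract describes.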

----------------

Here is a summary:

This article presents a novel method, the Flexible Intersectional Stereotype Extraction (FISE) procedure, for systematically identifying and validating intersectional stereotypes in word embeddings trained on large text corpora.

Intersectional stereotypes, which capture the unique biases associated with the intersection of multiple social identities (e.g. race and gender), are a critical area of study for understanding and addressing prejudice and discrimination.

The ability to reliably extract and validate intersectional stereotypes from large language datasets can provide clinical psychologists with valuable insights into the cognitive biases and social perceptions that may influence clinical assessment, diagnosis, and treatment.

Understanding the prevalence and nature of intersectional stereotypes can help clinical psychologists develop more culturally sensitive and inclusive practices, as well as inform interventions aimed at reducing bias and promoting equity in mental healthcare.

The FISE method demonstrated in this research can be applied to a variety of clinical and psychological datasets, allowing for the systematic study of intersectional biases across different domains relevant to clinical psychology.

In summary, this research on extracting and validating intersectional stereotypes is highly relevant for clinical psychologists, as it provides a rigorous approach to identifying and addressing the complex biases that can impact the assessment, diagnosis, and treatment of diverse patient populations.

Sunday, January 9, 2022

Through the Looking Glass: A Lens-Based Account of Intersectional Stereotyping

Petsko, C. D., Rosette, A. S., &
Bodenhausen, G. V. (2022).
Preprint
Journal of Personality & Social Psychology

Abstract

A growing body of scholarship documents the intersectional nature of social stereotyping, with stereotype content being shaped by a target person’s multiple social identities. However, conflicting findings in this literature highlight the need for a broader theoretical integration. For example, although there are contexts in which perceivers stereotype gay Black men and heterosexual Black men in very different ways, so too are there contexts in which perceivers stereotype these men in very similar ways. We develop and test an explanation for contradictory findings of this sort. In particular, we argue that perceivers have a repertoire of lenses in their minds—identity-specific schemas for categorizing others—and that characteristics of the perceiver and the social context determine which one of these lenses will be used to organize social perception. Perceivers who are using the lens of race, for example, are expected to attend to targets’ racial identities so strongly that they barely attend, in these moments, to targets’ other identities (e.g., their sexual orientations). Across six experiments, we show (1) that perceivers tend to use just one lens at a time when thinking about others, (2) that the lenses perceivers use can be singular and simplistic (e.g., the lens of gender by itself) or intersectional and complex (e.g., a race-by-gender lens, specifically), and (3) that different lenses can prescribe categorically distinct sets of stereotypes that perceivers use as frameworks for thinking about others. This lens-based account can resolve apparent contradictions in the literature on intersectional stereotyping, and it can likewise be used to generate novel hypotheses.

Lens Socialization and Acquisition

We have argued that perceivers use lenses primarily for epistemic purposes. Without lenses, the social world is perceptually ambiguous. With lenses, the social world is made perceptually clear. But how do people acquire lenses in the first place? And why are some lenses more frequently employed within a given culture than others? Reasonable answers to these questions come from developmental intergroup theory (Bigler & Liben, 2006, 2007). According to this perspective, children are motivated to understand their social worlds, and as a result, they actively seek to determine which bases for classifying people are important. One way in which children learn which bases of classification—or in our parlance, which lenses—are important is through their socialization experiences (Bigler et al., 2001; Gelman & Heyman, 1999). For example, educators in the U.S. often use language that explicitly references students’ gender groups (e.g., as when teachers say “good morning, boys and girls”), which reinforces children’s belief that the lens of gender is relevant toward the end of understanding who’s who (Bem, 1983). Another way in which people acquire lenses is through interaction with norms, laws, and institutions that, even if not explicitly referencing group divisions, nevertheless suggest that certain group divisions matter more than others (Allport, 1954; Bigler & Liben, 2007). For example, most neighborhoods in the United States are heavily segregated according to race and social class (e.g., Lichter et al., 2015; 2017). Such de facto segregation sends the message to children (and adults) that race and social class—and perhaps even their intersection—are relevant lenses for the purposes of understanding and making predictions about other people (e.g., Bonam et al., 2017). These processes, a broad mixture of socialization experiences and inductive reasoning about which group distinctions matter, are thought to give rise to lens acquisition.