Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Stereotypes.

Monday, February 14, 2022

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions

Klebl, C., Rhee, J.J., Greenaway, K.H. et al. 
J Nonverbal Behav (2021). 
https://doi.org/10.1007/s10919-021-00388-w

Abstract

Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.

From the Discussion

The present investigation advances the Beauty-is-Good stereotype literature. Our findings are consistent with extensive research showing that people attribute positive traits more strongly to attractive compared to unattractive individuals (Dion et al., 1972). Most significantly, the present studies add to the previous literature by providing evidence that attractiveness does not bias the attribution of positive traits uniformly. Attractiveness especially biases the attribution of moral traits compared to positive non-moral traits, constituting an update to the Beauty-is-Good stereotype. One possible explanation for this selective bias is that because people are particularly motivated to assess socially important traits (traits that help us quickly decide who our allies are; Goodwin et al., 2014), physical attractiveness selectively biases the attribution of those traits over socially less important traits. While in many instances this may allow us to assess moral character quickly and accurately (cf. Ambady et al., 2000), and thus obtain valuable information about whether the target is a threat or ally, where morally relevant information is absent (such as during initial impression formation), this motivation to assess moral character may lead to an over-reliance on heuristic cues.

Friday, November 19, 2021

Biological Essentialism Correlates with (But Doesn’t Cause?) Intergroup Bias

Bailey, A., & Knobe, J. 
(2021, September 17).
https://doi.org/10.31234/osf.io/rx8jc

Abstract

People with biological essentialist beliefs about social groups also tend to endorse biased beliefs about individuals in those groups, including stereotypes, prejudices, and intensified emphasis on the group. These correlations could be due to biological essentialism causing bias, and some experimental studies support this causal direction. Given this prior work, we expected to find that biological essentialism would lead to increased bias compared to a control condition and set out to extend this prior work in a new direction (regarding “value-based” essentialism). But although the manipulation affected essentialist beliefs and essentialist beliefs were correlated with stereotyping (Studies 1, 2a, and 2b), prejudice (Study 2a), and group emphasis (Study 3), there was no evidence that biological essentialism caused these outcomes. Given these findings, our initial research question became moot, and the present work focuses on reexamining the relationship between essentialism and bias. We discuss possible moderators, reverse causation, and third variables.


General Discussion

The present studies examined the relationship between biological essentialism and intergroup bias. As in prior work, we found that essentialist beliefs were correlated positively with stereotyping, including negative stereotyping, as well as group boundary intensification. This positive relationship was found for essentialist thinking more generally (Studies 1, 2a, 2b, and 3) as well as specific beliefs in a biological essence (Studies 1, 2a, and 3). (New to this research, we also found similar positive correlations with beliefs in a value-based essence.) The internal meta-analysis for stereotyping confirmed a small but consistent positive relationship. Findings for prejudice were more mixed across studies, consistent with more mixed findings in the prior literature even for correlational effects, but the internal meta-analysis indicated a small relationship between greater biological essentialism and less negative feelings toward the group (as in, e.g., Haslam & Levy, 2006; but see Chen & Ratliff, 2018).

Before conducting this research and based on the previous literature, we assumed that these correlational relationships would be due to essentialism causing intergroup bias. But although our experimental manipulations worked as designed to shift essentialist beliefs, there was no evidence that biological essentialism caused stereotyping, prejudice, or group boundary intensification. The present studies thus suggest that a straightforward causal effect of essentialism on intergroup bias may be weaker or more complex than often described.

Thursday, October 14, 2021

A Minimal Turing Test

McCoy, J. P., and Ullman, T.D.
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 1-8

Abstract

We introduce the Minimal Turing Test, an experimental paradigm for studying perceptions and meta-perceptions of different social groups or kinds of agents, in which participants must use a single word to convince a judge of their identity. We illustrate the paradigm by having participants act as contestants or judges in a Minimal Turing Test in which contestants must convince a judge they are a human, rather than an artificial intelligence. We embed the production data from such a large-scale Minimal Turing Test in a semantic vector space, and construct an ordering over pairwise evaluations from judges. This allows us to identify the semantic structure in the words that people give, and to obtain quantitative measures of the importance that people place on different attributes. Ratings from independent coders of the production data provide additional evidence for the agency and experience dimensions discovered in previous work on mind perception. We use the theory of Rational Speech Acts as a framework for interpreting the behavior of contestants and judges in the Minimal Turing Test.
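The two analysis steps described in the abstract (embedding contestants' words in a semantic vector space, then constructing an ordering from judges' pairwise evaluations) can be sketched in miniature. The word vectors and judgment counts below are invented for illustration; the study used large-scale production data and a learned embedding model, not these toy values.

```python
import math

# Step 1: embed each contestant word in a semantic vector space
# (hypothetical 3-dimensional vectors, not the study's actual embedding).
embeddings = {
    "love":    [0.9, 0.8, 0.1],
    "empathy": [0.8, 0.9, 0.2],
    "robot":   [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words sit closer together in the space.
assert cosine(embeddings["love"], embeddings["empathy"]) > \
       cosine(embeddings["love"], embeddings["robot"])

# Step 2: order words by pairwise judge evaluations.
# pairwise[(a, b)] = number of judges who picked a over b as "more human"
# (made-up counts for illustration).
pairwise = {("love", "robot"): 9, ("robot", "love"): 1,
            ("love", "empathy"): 6, ("empathy", "love"): 4,
            ("empathy", "robot"): 8, ("robot", "empathy"): 2}

def win_rate(word):
    """Fraction of pairwise judgments this word won."""
    wins = sum(n for (a, _), n in pairwise.items() if a == word)
    total = sum(n for pair, n in pairwise.items() if word in pair)
    return wins / total

ranking = sorted(embeddings, key=win_rate, reverse=True)
print(ranking)  # -> ['love', 'empathy', 'robot']
```

A simple win rate suffices for this sketch; a fuller treatment would fit a pairwise-comparison model (e.g., Bradley-Terry) to the judgment counts.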


Thursday, July 30, 2020

Structural Competency Meets Structural Racism: Race, Politics, and the Structure of Medical Knowledge

Jonathan M. Metzl and Dorothy E. Roberts
Virtual Mentor. 2014;16(9):674-690.
doi: 10.1001/virtualmentor.2014.16.9.spec1-1409.

Here is an excerpt:

The Clinical Implications of Addressing Race from a Structural Perspective

These brief case examples illustrate the complex ways that seemingly clinically relevant “cultural” characteristics and attitudes also reflect structural inequities, medical politics, legal codes, invisible discrimination, and socioeconomic disparities. Black men who appeared schizophrenic to medical practitioners did so in part because of the framing of new diagnostic codes. Lower-income persons who “refused” to eat well or exercise lived in neighborhoods without grocery stores or sidewalks. Black women who seemed to be uniquely harming their children by using crack cocaine while pregnant were victims of racial stereotyping, as well as of a selection bias in which decisions about which patients were reported to law enforcement depended on the racial and economic segregation of prenatal care. In this sense, approaches that attempt to address issues—such as the misdiagnosis of schizophrenia in black men, perceived diet “noncompliance” in minority populations, or the punishment of “crack mothers”—through a heuristic aimed solely at enhancing cross-cultural communication between doctors and patients, though surely well intentioned, will overlook the potentially pathologizing impact of structural factors set in motion long before patients or doctors enter exam rooms.

Structural factors impact majority populations as well as minority ones, and structures of privilege or opulence also influence expressions of illness and health. For instance, in the United States, research suggests that pediatricians disproportionately overdiagnose ADHD in white school-aged children. Until recently, medical researchers in many global locales assumed, wrongly, that eating disorders afflicted only affluent persons.

Yet of late, medicine and medical education have struggled most with addressing ways that structural forces impact and disadvantage communities of color. As sociologist Hannah Bradby rightly explains it, hypothesizing mechanisms that include the micro-processes of interactions between patients and professionals and the macro-processes of population-level inequalities is a missing step in our reasoning at present…. [A]s long as we see the solution to racism lying only in educating the individual, we fail to address the complexity of racism and risk alienating patients and physicians alike.


Friday, November 1, 2019

Can a Woman Rape a Man and Why Does It Matter?

Natasha McKeever
Criminal Law and Philosophy (2019)
13:599–619
https://doi.org/10.1007/s11572-018-9485-6

Abstract

Under current UK legislation, only a man can commit rape. This paper argues that this is an unjustified double standard that reinforces problematic gendered stereotypes about male and female sexuality. I first reject three potential justifications for making penile penetration a condition of rape: (1) it is physically impossible for a woman to rape a man; (2) it is a more serious offence to forcibly penetrate someone than to force them to penetrate you; (3) rape is a gendered crime. I argue that, as these justifications fail, a woman having sex with a man without his consent ought to be considered rape. I then explain some further reasons that this matters. I argue that, not only is it unjust, it is also both a cause and a consequence of harmful stereotypes and prejudices about male and female sexuality: (1) men are ‘always up for sex’; (2) women’s sexual purity is more important than men’s; (3) sex is something men do to women. Therefore, I suggest that, if rape law were made gender neutral, these stereotypes would be undermined and this might make some (albeit small) difference to the problematic ways that sexual relations are sometimes viewed between men and women more generally.

(cut)

3 Final Thoughts on Gender and Rape

The belief that a woman cannot rape a man, therefore, might be both a cause and a consequence of these kinds of harmful gendered stereotypical beliefs:

(a) Sex is something that men do to women.
(b) This is, in part, because men have an uncontrollable desire for sex; women are less bothered about sex.
(c) Due to men’s uncontrollable desire for sex, women must moderate their behaviour so that they don’t tempt men to rape them.
(d) Men are sexually aggressive/dominant (or should be); women are not (or shouldn’t be).
(e) A woman’s worth is determined, in part, by her sexual purity; a man’s worth is determined, in part, by his sexual prowess.

Of course, these beliefs are outdated, and not held by all people. However, they are pervasive and we do see remnants of them in parts of Western society and in some non‑Western cultures.

Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.


Saturday, January 26, 2019

People use less information than they think to make up their minds

Nadav Klein and Ed O’Brien
PNAS December 26, 2018 115 (52) 13222-13227

Abstract

A world where information is abundant promises unprecedented opportunities for information exchange. Seven studies suggest these opportunities work better in theory than in practice: People fail to anticipate how quickly minds change, believing that they and others will evaluate more evidence before making up their minds than they and others actually do. From evaluating peers, marriage prospects, and political candidates to evaluating novel foods, goods, and services, people consume far less information than expected before deeming things good or bad. Accordingly, people acquire and share too much information in impression-formation contexts: People overvalue long-term trials, overpay for decision aids, and overwork to impress others, neglecting the speed at which conclusions will form. In today’s information age, people may intuitively believe that exchanging ever-more information will foster better-informed opinions and perspectives—but much of this information may be lost on minds long made up.

Significance

People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today’s information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever-more information, people then stop short and do not incorporate it into their judgments.

Tuesday, July 24, 2018

Data ethics is more than just what we do with data, it’s also about who’s doing it

James Arvanitakis, Andrew Francis, and Oliver Obst
The Conversation
Originally posted June 21, 2018

If the recent Cambridge Analytica data scandal has taught us anything, it’s that the ethical cultures of our largest tech firms need tougher scrutiny.

But moral questions about what data should be collected and how it should be used are only the beginning. They raise broader questions about who gets to make those decisions in the first place.

We currently have a system in which power over the judicious and ethical use of data is overwhelmingly concentrated among white men. Research shows that the unconscious biases that emerge from a person’s upbringing and experiences can be baked into technology, resulting in negative consequences for minority groups.

(cut)

People noticed that Google Translate showed a tendency to assign feminine gender pronouns to certain jobs and masculine pronouns to others – “she is a babysitter” or “he is a doctor” – in a manner that reeked of sexism. Google Translate bases its decision about which gender to assign to a particular job on the training data it learns from. In this case, it’s picking up the gender bias that already exists in the world and feeding it back to us.

If we want to ensure that algorithms don’t perpetuate and reinforce existing biases, we need to be careful about the data we use to train algorithms. But if we hold the view that women are more likely to be babysitters and men are more likely to be doctors, then we might not even notice – and correct for – biased data in the tools we build.
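The mechanism described above can be illustrated with a toy sketch. The corpus and counts here are hypothetical, and real translation systems are far more complex, but the principle is the same: a model that picks whichever pronoun co-occurs most often with a job title in its training data will simply reproduce that data's gender skew.

```python
from collections import Counter

# Hypothetical training data: (job, pronoun) pairs with a built-in
# gender skew, standing in for the biased text a real model learns from.
training_corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("babysitter", "she"), ("babysitter", "she"), ("babysitter", "he"),
]

def choose_pronoun(job):
    """Pick the pronoun most often paired with this job in the training data."""
    counts = Counter(pronoun for j, pronoun in training_corpus if j == job)
    return counts.most_common(1)[0][0]

# The model "learns" the skew in its data and feeds it back to us.
print(choose_pronoun("doctor"))      # -> "he"
print(choose_pronoun("babysitter"))  # -> "she"
```

Nothing in the code is sexist; the bias lives entirely in the data, which is why curating training data matters as much as writing the algorithm.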

So it matters who is writing the code because the code defines the algorithm, which makes the judgement on the basis of the data.


Monday, February 19, 2018

Culture and Moral Distress: What’s the Connection and Why Does It Matter?

Nancy Berlinger and Annalise Berlinger
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 608-616.

Abstract

Culture is learned behavior shared among members of a group and from generation to generation within that group. In health care work, references to “culture” may also function as code for ethical uncertainty or moral distress concerning patients, families, or populations. This paper analyzes how culture can be a factor in patient-care situations that produce moral distress. It discusses three common, problematic situations in which assumptions about culture may mask more complex problems concerning family dynamics, structural barriers to health care access, or implicit bias. We offer sets of practical recommendations to encourage learning, critical thinking, and professional reflection among students, clinicians, and clinical educators.

Here is an excerpt:

Clinicians’ shortcuts for identifying “problem” patients or “difficult” families might also reveal implicit biases concerning groups. Health care professionals should understand the difference between cultural understanding that helps them respond to patients’ needs and concerns and implicit bias expressed in “cultural” terms that can perpetuate stereotypes or obscure understanding. A way to identify biased thinking that may reflect institutional culture is to consider these questions about advocacy:

  1. Which patients or families does our system expect to advocate for themselves?
  2. Which patients or families would we perceive or characterize as “angry” or “demanding” if they attempted to advocate for themselves?
  3. Which patients or families do we choose to advocate for, and on what grounds?
  4. What is our basis for each of these judgments?