Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, June 18, 2022

If you rise, I fall: Equality is prevented by the misperception that it harms advantaged groups

Brown, N. D., Jacoby-Senghor, D. S., 
& Raymundo, I. (2022). 
Science Advances, 8(18).
https://doi.org/10.1126/sciadv.abm2385

Abstract

Nine preregistered studies (n = 4197) demonstrate that advantaged group members misperceive equality as necessarily harming their access to resources and inequality as necessarily benefitting them. Only when equality is increased within their ingroup, instead of between groups, do advantaged group members accurately perceive it as unharmful. Misperceptions persist when equality-enhancing policies offer broad benefits to society or when resources, and resource access, are unlimited. A longitudinal survey of 2020 U.S. voters reveals that harm perceptions predict voting against actual equality-enhancing policies, more so than voters’ political and egalitarian beliefs. Finally, two novel-groups experiments reveal that advantaged participants’ harm misperceptions predict voting for inequality-enhancing policies that financially hurt them and against equality-enhancing policies that financially benefit them. Misperceptions persist even after an intervention to improve decision-making. This misperception that equality is necessarily zero-sum may explain why inequality prevails even as it incurs societal costs that harm everyone.

From the Discussion Section

Across nine studies, we show that advantaged group members misperceive equality-enhancing policies as harming their access to resources, even when the policies do no such thing. We identify this misperception across various inequality contexts (e.g., mortgage lending, salary, and hiring), various group boundaries (e.g., race, gender, disability, and arbitrary group distinctions), and different types of resources (e.g., money and jobs). Advantaged group members also misperceive policies that maintain the status quo or magnify inequality as improving their resource access, even when the policies actually leave them no better off. This tendency for advantaged group members to think that equality necessarily incurs a cost to their group lingered even when equality-enhancing policies mutually benefited disadvantaged and advantaged groups in a win-win fashion. That is, advantaged group members misperceive having greater inequality and fewer resources available to their group as more advantageous than having greater overall resources that were shared more equally.

We also find that these harm perceptions can have profound implications for individuals’ attitudinal and behavioral opposition to policies that promote equality. During the 2020 election, California Proposition 16 proposed relegalizing the use of affirmative action policies in the public sector. We find that the more white and Asian voters perceived that California Proposition 16 would harm their access to resources, the less likely they were to express support or vote for Proposition 16, independent of their political leaning. Moreover, we find that behavioral opposition occurs even when harm perceptions are objectively false and the effects of equality-enhancing policies are unambiguously positive. In an experimental setting, advantaged group participants were just as likely to vote for an inequality-enhancing policy that financially harmed them as they were to vote for an equality-enhancing policy that financially benefitted them. These studies suggest that real-world opposition to equality is likely caused by unduly negative perceptions of policies that could reduce inequality and unduly positive perceptions of policies that exacerbate it.


Friday, June 17, 2022

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

S. Tolmeijer, M. Christen, et al.
In CHI Conference on Human Factors in 
Computing Systems (CHI '22), April 29-May 5,
2022, New Orleans, LA, USA. ACM

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

From the Discussion Section

Design implications for ethical AI

In sum, we find that participants had slightly higher moral trust in and more responsibility ascription towards human experts, but higher capacity trust, overall trust, and reliance on AI. These different perceived capabilities could be combined in some form of human-AI collaboration. However, the AI's lack of responsibility can be a problem when AI is implemented for ethical decision making. When a human expert is involved but has less autonomy, they risk becoming a scapegoat for decisions the AI proposed in case of negative outcomes.

At the same time, we find that the different levels of autonomy, i.e., the human-in-the-loop and human-on-the-loop settings, did not influence the trust people had, the responsibility they assigned (both to themselves and to the respective experts), or the reliance they displayed. A large part of the discussion on the usage of AI has focused on control and the level of autonomy that the AI gets for different tasks. However, our results suggest that this has less of an influence, as long as a human is appointed to be responsible in the end. Instead, an important focus of designing AI for ethical decision making should be on the different types of trust users show for a human vs. AI expert.

One implication of the finding that the control conditions of AI may be of less relevance than expected is that the focus of human-AI collaboration should be less on control and more on how the involvement of AI improves human ethical decision making. An important factor in that respect will be the time available for actual decision making: if time is short, AI advice or decisions should make clear which value was guiding in the decision process (e.g., maximizing the expected number of people to be saved irrespective of any characteristics of the individuals involved), so that the human decider can make (or evaluate) the decision in an ethically informed way. If time for deliberation is available, an AI decision support system could be designed to counteract human biases in ethical decision making (e.g., by pointing out that human deciders may focus solely on utility maximization and thereby neglect fundamental rights of individuals) so that those biases can become part of the deliberation process.

Thursday, June 16, 2022

Record-High 50% of Americans Rate U.S. Moral Values as 'Poor'

Megan Brenan & Nicole Willcoxon
www.gallup.com
Originally posted 15 June 22

Story Highlights
  • 50% say state of moral values is "poor"; 37% "only fair"
  • 78% think moral values in the U.S. are getting worse
  • "Consideration of others" cited as top problem with state of moral values
A record-high 50% of Americans rate the overall state of moral values in the U.S. as "poor," and another 37% say it is "only fair." Just 1% think the state of moral values is "excellent" and 12% "good."

Although negative views of the nation's moral values have been the norm throughout Gallup's 20-year trend, the current poor rating is the highest on record by one percentage point.

These findings, from Gallup's May 2-22 Values and Beliefs poll, are generally in line with perceptions since 2017 except for a slight improvement in views in 2020 when Donald Trump was running for reelection. On average since 2002, 43% of U.S. adults have rated moral values in the U.S. as poor, 38% as fair and 18% as excellent or good.

Republicans' increasingly negative assessment of the state of moral values is largely responsible for the record-high overall poor rating. At 72%, Republicans' poor rating of moral values is at its highest point since the inception of the trend and up sharply since Trump left office.

At the same time, 36% of Democrats say the state of moral values is poor, while a 48% plurality rate it as only fair and 15% as excellent or good. Independents' view of the current state of moral values is relatively stable and closer to Democrats' than Republicans' rating, with 44% saying it is poor, 40% only fair and 16% excellent or good.

Outlook for State of Moral Values Is Equally Bleak

Not only are Americans feeling grim about the current state of moral values in the nation, but they are also mostly pessimistic about the future on the subject, as 78% say morals are getting worse and just 18% getting better. The latest percentage saying moral values are getting worse is roughly in line with the average of 74% since 2002, but it is well above the past two years' 67% and 68% readings.

Wednesday, June 15, 2022

A Constructionist Review of Morality and Emotions: No Evidence for Specific Links Between Moral Content and Discrete Emotions

Cameron, C. D., Lindquist, K. A., & Gray, K. (2015). 
Personality and Social Psychology Review
19(4), 371–394.

Abstract

Morality and emotions are linked, but what is the nature of their correspondence? Many “whole number” accounts posit specific correspondences between moral content and discrete emotions, such that harm is linked to anger, and purity is linked to disgust. A review of the literature provides little support for these specific morality–emotion links. Moreover, any apparent specificity may arise from global features shared between morality and emotion, such as affect and conceptual content. These findings are consistent with a constructionist perspective of the mind, which argues against a whole number of discrete and domain-specific mental mechanisms underlying morality and emotion. Instead, constructionism emphasizes the flexible combination of basic and domain-general ingredients such as core affect and conceptualization in creating the experience of moral judgments and discrete emotions. The implications of constructionism in moral psychology are discussed, and we propose an experimental framework for rigorously testing morality–emotion links.

Conclusion

The tension between whole number and constructionist accounts has existed in psychology since its beginning (e.g., Darwin, 1872/2005 vs. James, 1890; see Gendron & Barrett, 2009; Lindquist, 2013). Commonsense and essentialism suggest the existence of distinct and immutable psychological constructs. The intuitiveness of whole number accounts is reinforced by the communicative usefulness of distinguishing harm from purity (Graham et al., 2009), and anger from disgust (Barrett, 2006; Lindquist, Gendron, et al., 2013), but utility does not equal ontology. As decades of psychological research have demonstrated, intuitive experiences are poor guides to the structure of the mind (Barrett, 2009; Davies, 2009; James, 1890; Nisbett & Wilson, 1977; Roser & Gazzaniga, 2004; Ross & Ward, 1996; Wegner, 2003).  Although initially less intuitive, we suggest that constructionist approaches are actually better at capturing the nature of the powerful subjective phenomena long treasured by social psychologists (Gray & Wegner, 2013; Wegner & Gilbert, 2000). Whereas whole number theories impose taxonomies onto human experience and treat variability as noise or error, constructionist theories allow that experience is complex and messy. Rather than assuming that human experience is “wrong” when it fails to conform to a preferred taxonomy, constructionist theories appreciate this diversity and use domain-general mechanisms to explain it. Returning to our opening example, Jack and Diane may be soul-mates with a love that is unique, unchanging, and eternal, or they may just be two similar American kids who feel the rush of youth and the heat of a summer’s day. The first may be more romantic, but the second is more likely to be true.

Tuesday, June 14, 2022

Minority salience and the overestimation of individuals from minority groups in perception and memory

R. Kadosh, A. Y. Sklar, et al. (2022).
PNAS, 119(12), 1-10.

Abstract

Our cognitive system is tuned toward spotting the uncommon and unexpected. We propose that individuals coming from minority groups are, by definition, just that—uncommon and often unexpected. Consequently, they are psychologically salient in perception, memory, and visual awareness. This minority salience creates a tendency to overestimate the prevalence of minorities, leading to an erroneous picture of our social environments—an illusion of diversity. In 12 experiments with 942 participants, we found evidence that the presence of minority group members is indeed overestimated in memory and perception and that masked images of minority group members are prioritized for visual awareness. These findings were consistent when participants were members of both the majority group and the minority group. Moreover, this overestimated prevalence of minorities led to decreased support for diversity-promoting policies. We discuss the theoretical implications of the illusion of diversity and how it may inform more equitable and inclusive decision-making.

Significance

Our minds are tuned to the uncommon or unexpected in our environment. In most environments, members of minority groups are just that—uncommon. Therefore, the cognitive system is tuned to spotting their presence. Our results indicate that individuals from minority groups are salient in perception, memory, and visual awareness. As a result, we consistently overestimate their presence—leading to an illusion of diversity: the environment seems to be more diverse than it actually is, decreasing our support for diversity-promoting measures. As we try to make equitable decisions, it is important that private individuals and decision-makers alike become aware of this biased perception. While these sorts of biases can be counteracted, one must first be aware of the bias.

Discussion

Taken together, our results from 12 experiments and 942 participants indicate that minority salience and overestimation are robust phenomena. We consistently overestimate the prevalence of individuals from minority groups and underestimate the prevalence of members from the majority group, thus perceiving our social environments as more diverse than they truly are. Our experiments also indicate that this effect may be found at the level of priority for visual awareness and that it is social in nature: our social knowledge, our representation of the overall composition of our social environment, shapes this effect. Importantly, this illusion of diversity is consequential in that it leads to less support for measures to increase diversity.

Monday, June 13, 2022

San Diego doctor who smuggled hydroxychloroquine into US, sold medication as a COVID-19 cure sentenced

Hope Sloop
KSWB-TV San Diego
Originally posted 29 MAY 22

A San Diego doctor was sentenced Friday to 30 days of custody and one year of house arrest for attempting to smuggle hydroxychloroquine into the U.S. and sell COVID-19 "treatment kits" at the beginning of the pandemic.  

According to officials with the U.S. Department of Justice, Jennings Ryan Staley attempted to sell what he described as a "medical cure" for the coronavirus, which was really hydroxychloroquine powder that the physician had imported from China by mislabeling the shipping container as "yam extract." Staley had attempted to replicate this process with another seller at one point as well, but the importer told the San Diego doctor that they "must do it legally." 

Following the arrival of his shipment of the hydroxychloroquine powder, Staley solicited investors to help fund his operation to sell the filled capsules as a "medical cure" for COVID-19. The SoCal doctor told potential investors that he could triple their money within 90 days.  

Staley also told investigators via his plea agreement that he had written false prescriptions for hydroxychloroquine, using his associate's name and personal details without the employee's consent or knowledge.  

During an undercover operation, an agent purchased six of Staley's "treatment kits" for $4,000 and, during a recorded phone call, the doctor bragged about the efficacy of the kits and said, "I got the last tank of . . . hydroxychloroquine, smuggled out of China."  

Sunday, June 12, 2022

You Were Right About COVID, and Then You Weren’t

Olga Khazan
The Atlantic
Originally posted 3 MAY 22

Here are two excerpts:

Tenelle Porter, a psychologist at UC Davis, studies so-called intellectual humility, or the recognition that we have imperfect information and thus our beliefs might be wrong. Practicing intellectual humility, she says, is harder when you’re very active on the internet, or when you’re operating in a cutthroat culture. That might be why it pains me—a very online person working in the very competitive culture of journalism—to say that I was incredibly wrong about COVID at first. In late February 2020, when Smith was sounding the alarm among his co-workers, I had drinks with a colleague who asked me if I was worried about “this new coronavirus thing.”

“No!” I said. After all, I had covered swine flu, which blew over quickly and wasn’t very deadly.

A few days later, my mom called and asked me the same question. “People in Italy are staying inside their houses,” she pointed out.

“Yeah,” I said. “But SARS and MERS both stayed pretty localized to the regions they originally struck.”

Then, a few weeks later, when we were already working from home and buying dried beans, a friend asked me if she should be worried about her wedding, which was scheduled for October 2020.

“Are you kidding?” I said. “They will have figured out a vaccine or something by then.” Her wedding finally took place this month.

(cut)

Thinking like a scientist, or a scout, means “recognizing that every single one of your opinions is a hypothesis waiting to be tested. And every decision you make is an experiment where you forgot to have a control group,” Grant said. The best way to hold opinions or make predictions is to determine what you think given the state of the evidence—and then decide what it would take for you to change your mind. Not only are you committing to staying open-minded; you’re committing to the possibility that you might be wrong.

Because the coronavirus has proved volatile and unpredictable, we should evaluate it as a scientist would. We can’t hold so tightly to prior beliefs that we allow them to guide our behavior when the facts on the ground change. This might mean that we lose our masks one month and don them again the next, or reschedule an indoor party until after case numbers decrease. It might mean supporting strict lockdowns in the spring of 2020 but not in the spring of 2022. It might even mean closing schools again, if a new variant seems to attack children. We should think of masks and other COVID precautions not as shibboleths but like rain boots and umbrellas, as Ashish Jha, the White House coronavirus-response coordinator, has put it. There’s no sense in being pro- or anti-umbrella. You just take it out when it’s raining.

Saturday, June 11, 2022

No convincing evidence outgroups are denied uniquely human characteristics: Distinguishing intergroup preference from trait-based dehumanization

F. E. Enock, J. C. Flavell, et al. (2021).
Cognition
Volume 212, July 2021, 104682

Abstract

According to the dual model, outgroup members can be dehumanized by being thought to possess uniquely and characteristically human traits to a lesser extent than ingroup members. However, previous research on this topic has tended to investigate the attribution of human traits that are socially desirable in nature such as warmth, civility and rationality. As a result, it has not yet been possible to determine whether this form of dehumanization is distinct from intergroup preference and stereotyping. We first establish that participants associate undesirable (e.g., corrupt, jealous) as well as desirable (e.g., open-minded, generous) traits with humans. We then go on to show that participants tend to attribute desirable human traits more strongly to ingroup members but undesirable human traits more strongly to outgroup members. This pattern holds across three different intergroup contexts for which dehumanization effects have previously been reported: political opponents, immigrants and criminals. Taken together, these studies cast doubt on the claim that a trait-based account of representing others as ‘less human’ holds value in the study of intergroup bias.

Highlights

•  The dual model predicts outgroups are attributed human traits to a lesser extent.

•  To date, predominantly desirable traits have been investigated, creating a confound.

•  We test attributions of desirable and undesirable human traits to social groups.

•  Attributions of undesirable human traits were stronger for outgroups than ingroups.

•  We find no support for the predictions of the dual model of dehumanization.


From the General Discussion

The dual model argues that there are two senses of humanness: human uniqueness and human nature. Uniquely human traits can be summarised as civility, refinement, moral sensibility, rationality, and maturity. Human nature traits can be summarised as emotional responsiveness, interpersonal warmth, cognitive openness, agency, and depth (Haslam, 2006). However, the traits that supposedly characterise ‘humanness’ within this model are broadly socially desirable (Over, 2020a; Over, 2020b). We showed that people also associate some undesirable traits with the concept ‘human’. As well as considering humans to be refined and cultured, people also consider humans to be corrupt, selfish and cruel.

Results from our pretest provided us with grounds for re-examining predictions made by the dual model of dehumanization about the nature of intergroup bias in trait attributions. The dual model account holds that lesser attribution of human specific traits to outgroup members represents a psychological process of dehumanization that is separable from ingroup preference. However, as the human specific attributes summarised by the model are positive and socially desirable, it is possible that previous findings are better explained in terms of ingroup preference, the process of attributing positive qualities to ingroup members to a greater extent than to outgroup members.

Friday, June 10, 2022

Disrupting the System Constructively: Testing the Effectiveness of Nonnormative Nonviolent Collective Action

Shuman, E. (2020, June 21). 
PsyArXiv
https://doi.org/10.31234/osf.io/rvgup

Abstract

Collective action research tends to focus on motivations of the disadvantaged group, rather than on which tactics are effective at driving the advantaged group to make concessions to the disadvantaged. We focused on the potential of nonnormative nonviolent action as a tactic to generate support for concessions among advantaged group members who are resistant to social change. We propose that this tactic, relative to normative nonviolent and to violent action, is particularly effective because it reflects constructive disruption: a delicate balance between disruption (which can put pressure on the advantaged group to respond), and perceived constructive intentions (which can help ensure that the response to action is a conciliatory one). We test these hypotheses across four contexts (total N = 3650). Studies 1-3 demonstrate that nonnormative nonviolent action (compared to inaction, normative nonviolent action, and violent action) is uniquely effective at increasing support for concessions to the disadvantaged among resistant advantaged group members (compared to advantaged group members more open to social change). Study 3 shows that constructive disruption mediates this effect. Study 4 shows that perceiving a real-world ongoing protest as constructively disruptive predicts support for the disadvantaged, while Study 5 examines these processes longitudinally over 2 months in the context of an ongoing social movement. Taken together, we show that nonnormative nonviolent action can be an effective tactic for generating support for concessions to the disadvantaged among those who are most resistant because it generates constructive disruption.

From the General Discussion

Practical Implications

Based on this research, which collective action tactic should disadvantaged groups choose to advance their status? While a simple reading of these findings might suggest that nonnormative nonviolent action is the “most effective” form of action, a closer reading of these findings and other research (Saguy & Szekeres, 2018; Teixeira et al., 2020; Thomas & Louis, 2014) would suggest that which type of action is most effective depends on the goal. We demonstrate that nonnormative nonviolent action is effective at generating support, among those who were more resistant, for concessions that would advance the protest’s policy goals. On the other hand, other prior research has found that normative nonviolent action was more effective at turning sympathizers into active supporters (Teixeira et al., 2020; Thomas & Louis, 2014). Thus, which action tactic will be most useful to the disadvantaged may depend on the goal: If they are facing resistance from the advantaged blocking the achievement of their goals, nonnormative nonviolent action may be more effective. However, if the disadvantaged are seeking to build a movement that includes members of the advantaged group, then normative nonviolent action will likely be more effective. The question is thus not which tactic is “most effective”, but which tactic is most effective to achieve which goal for what audience.