Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, January 9, 2022

Through the Looking Glass: A Lens-Based Account of Intersectional Stereotyping

Petsko, C. D., Rosette, A. S., &
Bodenhausen, G. V. (2022)
Preprint
Journal of Personality & Social Psychology

Abstract

A growing body of scholarship documents the intersectional nature of social stereotyping, with stereotype content being shaped by a target person’s multiple social identities. However, conflicting findings in this literature highlight the need for a broader theoretical integration. For example, although there are contexts in which perceivers stereotype gay Black men and heterosexual Black men in very different ways, so too are there contexts in which perceivers stereotype these men in very similar ways. We develop and test an explanation for contradictory findings of this sort. In particular, we argue that perceivers have a repertoire of lenses in their minds—identity-specific schemas for categorizing others—and that characteristics of the perceiver and the social context determine which one of these lenses will be used to organize social perception. Perceivers who are using the lens of race, for example, are expected to attend to targets’ racial identities so strongly that they barely attend, in these moments, to targets’ other identities (e.g., their sexual orientations). Across six experiments, we show (1) that perceivers tend to use just one lens at a time when thinking about others, (2) that the lenses perceivers use can be singular and simplistic (e.g., the lens of gender by itself) or intersectional and complex (e.g., a race-by-gender lens, specifically), and (3) that different lenses can prescribe categorically distinct sets of stereotypes that perceivers use as frameworks for thinking about others. This lens-based account can resolve apparent contradictions in the literature on intersectional stereotyping, and it can likewise be used to generate novel hypotheses.

Lens Socialization and Acquisition

We have argued that perceivers use lenses primarily for epistemic purposes. Without lenses, the social world is perceptually ambiguous. With lenses, the social world is made perceptually clear. But how do people acquire lenses in the first place? And why are some lenses more frequently employed within a given culture than others? Reasonable answers to these questions come from developmental intergroup theory (Bigler & Liben, 2006, 2007). According to this perspective, children are motivated to understand their social worlds, and as a result, they actively seek to determine which bases for classifying people are important. One way in which children learn which bases of classification—or in our parlance, which lenses—are important is through their socialization experiences (Bigler et al., 2001; Gelman & Heyman, 1999). For example, educators in the U.S. often use language that explicitly references students’ gender groups (e.g., as when teachers say “good morning, boys and girls”), which reinforces children’s belief that the lens of gender is relevant toward the end of understanding who’s who (Bem, 1983). Another way in which people acquire lenses is through interaction with norms, laws, and institutions that, even if not explicitly referencing group divisions, nevertheless suggest that certain group divisions matter more than others (Allport, 1954; Bigler & Liben, 2007). For example, most neighborhoods in the United States are heavily segregated according to race and social class (e.g., Lichter et al., 2015; 2017). Such de facto segregation sends the message to children (and adults) that race and social class—and perhaps even their intersection—are relevant lenses for the purposes of understanding and making predictions about other people (e.g., Bonam et al., 2017). These processes, a broad mixture of socialization experiences and inductive reasoning about which group distinctions matter, are thought to give rise to lens acquisition.

Saturday, January 8, 2022

The Conflict Between People’s Urge to Punish AI and Legal Systems

Lima G, Cha M, Jeon C and Park KS
(2021) Front. Robot. AI 8:756242. 
doi: 10.3389/frobt.2021.756242

Abstract

Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which had not yet been examined through the lens of the general public. We present two studies (N = 3,559) to obtain people’s views of electronic legal personhood vis-à-vis existing liability models. Our study reveals people’s desire to punish automated agents even though these entities are not recognized as having any mental state. Furthermore, people did not believe automated agents’ punishment would fulfill the goals of deterrence or retribution, and they were unwilling to grant automated agents the preconditions of legal punishment, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and the perceived impracticability of doing so. We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents’ wrongdoings.

From Concluding Remarks

By no means does this research propose that robots and AI should be the sole entities to hold liability for their actions. On the contrary, responsibility, awareness, and punishment were assigned to all associated entities. We thus posit that distributing liability among all entities involved in deploying these systems would follow the public perception of the issue. Such a model could take joint and several liability models as a starting point by enforcing the proposal that various entities should be held jointly liable for damages.

Our work also raises the question of whether people wish to punish AI and robots for reasons other than retribution, deterrence, and reform. For instance, the public may punish electronic agents for general or indirect deterrence (Twardawski et al., 2020). Punishing an AI could educate humans that a specific action is wrong without the negative consequences of human punishment. Recent literature in moral psychology also proposes that humans might strive for a morally coherent world, where seemingly contradictory judgments arise so that the public perception of agents’ moral qualities match the moral qualities of their actions’ outcomes (Clark et al., 2015). We highlight that legal punishment is not only directed at the wrongdoer but also fulfills other functions in society that future work should inquire about when dealing with automated agents. Finally, our work poses the question of whether proactive actions towards holding existing legal persons liable for harms caused by automated agents would compensate for people’s desire to punish them. For instance, future work might examine whether punishing a system’s manufacturer may decrease the extent to which people punish AI and robots. Even if the responsibility gap can be easily solved, conflicts between the public and legal institutions might continue to pose challenges to the successful governance of these new technologies.

We selected scenarios from active areas of AI and robotics (i.e., medicine and war; see SI). People’s moral judgments might change depending on the scenario or background. The proposed scenarios did not introduce, for the sake of feasibility and brevity, much of the background usually considered when judging someone’s actions legally. We did not control for any previous attitudes towards AI and robots or knowledge of related areas, such as law and computer science, which could result in different judgments among the participants.

Friday, January 7, 2022

Moral Appraisals Guide Intuitive Legal Determinations

B. Flanagan, G.F.C.F. de Almeida, et al.
researchgate.net

Abstract 

Socialization demands the capacity to observe a plethora of private, legal, and institutional rules. To accomplish this, individuals must grasp rules’ meaning and infer the class of conduct each proscribes. Yet this basic account neglects important nuance in the way we reason about complex cases in which a rule’s literal or textualist interpretation conflicts with deeper values. In six studies (total N = 2,541), we examined legal determinations through the lens of these cases. We found that moral appraisals—of the rule’s value (Study 1) and the agent’s character (Studies 2-3)—shaped people’s application of rules, driving counter-literal legal determinations. These effects were stronger under time pressure and were weakened by the opportunity to reflect (Study 4). Our final studies explored the role of theory of mind: Textualist judgments arose when agents were described as cognizant of the rule’s text yet ignorant of its deeper purpose (Study 5). Meanwhile, the intuitive tendency toward counter-literal determinations was strongest when the rule’s purpose could be inferred from its text—pointing toward an influence of spontaneous mental state ascriptions (Studies 6a-6b). Together, our results elucidate the cognitive basis of legal reasoning: Intuitive legal determinations build on core competencies in moral cognition, including mental state and character inferences. In turn, cognitive control dampens these effects, promoting a broadly textualist response pattern.

General Discussion 

Our present studies suggest that moral appraisals shape people’s determinations of whether various rules have been violated. Counter-literal judgments emerge when agents violate a rule’s morally laudable purpose, but not when they violate a rule’s evil purpose (Study 1). An impact of moral appraisals is observed even when manipulating the transgressor’s broader moral character—such that blameworthy agents are deemed to violate rules to a greater extent than praiseworthy agents, even when both behaviors fall within the literal scope of the rule (Study 2). These effects persist when applying two further robustness checks: (i) when encouraging participants to concurrently and independently evaluate the morality as well as the legality of the target behaviors, and (ii) when explicitly denying any constitutional constraints on the moral propriety of legal or private rules (Study 3). Turning our attention to the underlying cognitive mechanisms, we found that applying time pressure promoted counter-literal judgments (Study 4), suggesting that such decisions are driven by automatic cognitive processes. We then examined how representations of the agent’s knowledge impacted rule application: Stipulating the agent’s ignorance of the rule’s underlying purpose helped to explain the default tendency toward textualist determinations (Study 5). Finally, we uncovered an effect of spontaneous mental state inferences on judgments of whether rules had been violated: Participants appeared to automatically represent the likelihood of inferring the rule’s true purpose from its text, and the inferability of a rule’s purpose yielded greater counter-literal tendencies (Studies 6a-6b)—regardless of the agent’s actual knowledge status.


In essence, an individual's moral judgments affect their interpretation of laws and bias the decision-making process.

Thursday, January 6, 2022

Lizard people, deadly orgies and JFK: How QAnon hijacked Hollywood to spread conspiracies

Anita Chabria
The Los Angeles Times
Originally posted 7 DEC 21

Here is an excerpt:

Bloom, the extremist researcher, said the familiarity of recycled Hollywood plots may be part of what eases followers into QAnon’s depths: Although the claims are outlandish, they tickle at recollections, whether from fiction or reality.

That sense of recognition gives them a level of believability, said Bloom — an “I’ve heard that before” effect. Part of the QAnon logic, she said, is that films and television shows that contain the conspiratorial story lines are viewed by believers as kernels of truth dropped by the elites — a sort of taunting acknowledgment of their misdeeds.

“Part of the idea is that Hollywood has been doing this for ages and ages, and they have been hiding in plain sight by putting it in film,” Bloom said.

That idea of using art to hide life is sometimes reinforced by actual events, she said. She uses the example of the 1999 Stanley Kubrick film “Eyes Wide Shut,” about a New York doctor, played by Tom Cruise, who stumbles into a deadly orgy attended by society’s power players. Some QAnon adherents believe that there is a cabal of influential pedophiles who murder children during ritualistic events to harvest a hormone that provides eternal youth and that the film was a nod to that activity.

She points to the Jeffrey Epstein case, and the current trial of his confidante, Ghislaine Maxwell, as factual instances of high-profile sex-trafficking allegations that seem pulled from those story lines, and have now been folded into the QAnon narratives.

“That’s one of the reasons some of the more outlandish things resonate, is because it sort of seems plausible,” Bloom said.

She also points out that the fear of sacrifices fits with the antisemitic trajectory of QAnon: it ties into the centuries-old conspiracy of “blood libel,” the false belief that Jewish people kill Christians for their blood, which in turn can be tied to myths of European vampires.

Wednesday, January 5, 2022

Outrage Fatigue? Cognitive Costs and Decisions to Blame

Bambrah, V., Cameron, D., & Inzlicht, M.
(2021, November 30).

Abstract

Across nine studies (N=1,672), we assessed the link between cognitive costs and the choice to express outrage by blaming. We developed the Blame Selection Task, a binary free-choice paradigm that examines the propensity to blame transgressors (versus an alternative choice)—either before or after reading vignettes and viewing images of moral transgressions. We hypothesized that participants’ choice to blame wrongdoers would negatively relate to how cognitively inefficacious, effortful, and aversive blaming feels (compared to the alternative choice). With vignettes, participants approached blaming and reported that blaming felt more efficacious. With images, participants avoided blaming and reported that blaming felt more inefficacious, effortful, and aversive. Blame choice was greater for vignette-based transgressions than image-based transgressions. Blame choice was positively related to moral personality constructs, blame-related social-norms, and perceived efficacy of blaming, and inversely related to perceived effort and aversiveness of blaming. The BST is a valid behavioral index of blame propensity, and choosing to blame is linked to its cognitive costs.

Discussion

Moral norm violations cause people to experience moral outrage and to express it in various ways (Crockett, 2017), such as shaming/dehumanizing, punishing, or blaming. These forms of expressing outrage are less than moderately related to one another (r’s < .30; see Bastian et al., 2013 for more information), which suggests that a considerable amount of variance between shaming/dehumanizing, punishing, and blaming remains unexplained and that these are sufficiently distinct demonstrations of outrage in response to norm violations. Yet, despite its moralistic implications (see Crockett, 2017), there is still little empirical work not only on the phenomenon of outrage fatigue but also on the role of motivated cognition in expressing outrage via blame. Social costs alter blame judgments, even when people’s cognitive resources are depleted (Monroe & Malle, 2019). But how do the inherent cognitive costs of blaming relate to people’s decisions towards moral outrage and blame? Here, we examined how felt cognitive costs associate with the choice to express outrage through blame.

Tuesday, January 4, 2022

Changing impressions in competence-oriented domains: The primacy of morality endures

A. Luttrell, S. Sacchi, & M. Brambilla
Journal of Experimental Social Psychology
Volume 98, January 2022, 104246

Abstract

The Moral Primacy Model proposes that throughout the multiple stages of developing impressions of others, information about the target's morality is more influential than information about their competence or sociability. Would morality continue to exert outsized influence on impressions in the context of a decision for which people view competence as the most important attribute? In three experiments, we used an impression updating paradigm to test how much information about a target's morality versus competence changed perceivers' impressions of a job candidate. Despite several pilot studies in which people said they would prioritize competence over morality when deciding to hire a potential employee, results of the main studies reveal that impressions changed more when people received new information about a target's immorality than about his incompetence. This moral primacy effect held both for global impressions and willingness to hire the target, but direct effects on evaluations of the target as an employee did not consistently emerge. When the new information about the target was positive, we did not reliably observe a moral primacy effect. These findings provide important insight on the generalizability of moral primacy in impression updating.

Highlights

• People reported that hiring decisions should favor competence over morality.

• Impressions of a job candidate changed more based on his morality (vs. competence).

• Moral primacy in this context emerged only when the new information was negative.

• Moral primacy occurred for general impressions more than hiring-specific judgments.

Conclusion

In sum, we tested the boundaries of moral primacy and found that even in a context where other dimensions could dominate, information about a job candidate's immorality continued to have disproportionate influence on general impressions of him and evaluations of his suitability as an employee. However, our findings further show that the relative effect of negative moral versus competence information on domain-specific judgments tended to be smaller than effects on general impressions. In addition, unlike prior research on impression updating (Brambilla et al., 2019), we observed no evidence for moral primacy in this context when the new information was positive (although this pattern may be indicative of a more general valence asymmetry in the effects of morally relevant information). Together, these findings provide an important extension of the Moral Primacy Model but also provide useful insight on the generalizability of the effect.

Monday, January 3, 2022

Systemic Considerations in Child Development and the Pursuit of Racial Equality in the United States

Perry, S., Skinner-Dorkenoo, A. L., 
Wages, J., & Abaied, J. L. (2021, October 8). 

Abstract

In this commentary on Lewis’ (2021) article in Psychological Inquiry, we expand on ways that both systemic and interpersonal contexts contribute to and uphold racial inequalities, with a particular focus on research on child development and socialization. We also discuss the potential roadblocks that may undermine the effectiveness of Lewis’ (2021) recommended strategy of relying on experts as a driving force for change. We conclude by proposing additional strategies for pursuing racial equality that may increase the impact of experts, such as starting anti-racist socialization early in development, family-level interventions, and teaching people about racial injustices and their connections to systemic racism.

From the Conclusion

Ultimately, the expert (Myrdal) concluded that the problem was White people and how they think about and structure society. Despite the immense popularity of his book among the American public and the fact that it did motivate some policy change (Brown v. Board of Education; Warren & Supreme Court of the United States, 1953), many of the same issues persist to this day. As such, we argue that, although relying on experts may be an appealing recommendation, history suggests that our efforts to reduce racial inequality in the U.S. will require substantial, widespread investment from White U.S. residents in order for real change to occur. Based on the literature reviewed here, significant barriers to such investment remain, many of which begin in early childhood. Beyond pursuing policies that promote structural equality on the advice of experts in ways that do not trigger backlash, we should support policies that educate the public—with a special emphasis on childhood socialization—on the history of systemic racism and the past and continued intentional efforts to create and maintain racial inequalities.

Building upon recommendations offered by Lewis, we also argue that we need to move the societal bar from simply being non-racist to being actively anti-racist. As a society, we need to recalibrate our norms, such that passively going along with systemic racism will no longer be acceptable (Tatum, 2017). In the summer of 2020, after the police killings of George Floyd and Breonna Taylor, many organizations released statements in support of the Black Lives Matter movement, confronting systemic racism, and increasing social justice (Nguyen, 2020). But one question that many posed was whether these organizations and institutions were genuinely committed to tackling systemic racism, or whether their acts were performative (Duarte, 2020). If groups, organizations, and institutions want to claim that they are committed to anti-racism, then they should be held accountable for these claims and provide concrete evidence of their efforts to dismantle the pervasive system of racial oppression. In addition, we recommend a greater investment in educating the public on the history of systemic racism (particularly children, as with the Ethnic Studies Model Curriculum implemented in the state of California), prompting White parents to actively be anti-racist and teach their children to do the same, and equitable structural policies that facilitate residential and school racial integration to increase quality interracial contact.

Sunday, January 2, 2022

Towards a Theory of Justice for Artificial Intelligence

Iason Gabriel
Forthcoming in Daedalus, vol. 151,
no. 2, Spring 2022

Abstract 

This paper explores the relationship between artificial intelligence and principles of distributive justice. Drawing upon the political philosophy of John Rawls, it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens’ rights, and promote substantively fair outcomes, something that requires specific attention be paid to the impact they have on the worst-off members of society.

Here is the conclusion:

Second, the demand for public justification in the context of AI deployment may well extend beyond the basic structure. As Langdon Winner argues, when the impact of a technology is sufficiently great, this fact is, by itself, sufficient to generate a free-standing requirement that citizens be consulted and given an opportunity to influence decisions. Absent such a right, citizens would cede too much control over the future to private actors, something that sits in tension with the idea that they are free and equal. Against this claim, it might be objected that it extends the domain of political justification too far, in a way that risks crowding out room for private experimentation, exploration, and the development of projects by citizens and organizations. However, the objection rests upon the mistaken view that autonomy is promoted by restricting the scope of justificatory practices to as narrow a subject matter as possible. In reality this is not the case: what matters for individual liberty is that practices that have the potential to interfere with this freedom are appropriately regulated so that infractions do not come about. Understood in this way, the demand for public justification stands in opposition not to personal freedom but to forms of unjust imposition.

The call for justice in the context of AI is well-founded. Looked at through the lens of distributive justice, key principles that govern the fair organization of our social, political and economic institutions, also apply to AI systems that are embedded in these practices. One major consequence of this is that liberal and egalitarian norms of justice apply to AI tools and services across a range of contexts. When they are integrated into society’s basic structure, these technologies should support citizens’ basic liberties, promote fair equality of opportunity, and provide the greatest benefit to those who are worst-off. Moreover, deployments of AI outside of the basic structure must still be compatible with the institutions and values that justice requires. There will always be valid reasons, therefore, to consider the relationship of technology to justice when it comes to the deployment of AI systems.

Saturday, January 1, 2022

New billing disclosure requirements take effect in 2022

American Psychological Association
Originally published 10 December 21

All mental health providers will need to provide estimated costs of services before starting treatment.

Beginning January 1, 2022, psychologists and other health care providers will be required by law to give uninsured and self-pay patients a good faith estimate of costs for services that they offer, when scheduling care or when the patient requests an estimate.

This new requirement was finalized in regulations issued October 7, 2021. The regulations implement part of the “No Surprises Act,” enacted in December 2020 as part of a broad package of COVID- and spending-related legislation. The act aims to reduce the likelihood that patients may receive a “surprise” medical bill by requiring that providers inform patients of an expected charge for a service before the service is provided. The government will also soon issue regulations requiring psychologists to give good faith estimates to commercial or government insurers, when the patient has insurance and plans to use it.

Psychologists working in group practices or larger organizational settings and facilities will likely receive direction from their compliance department or lawyers on how to satisfy this new requirement.

Read on for answers to FAQs that apply to practicing psychologists who treat uninsured or self-pay patients.

What providers and what services are subject to this rule?
“Provider” is defined broadly to include any health care provider who is acting within the scope of the provider’s license or certification under applicable state law. Psychologists meet that definition. 

The definition of “items and services” for which the good faith estimate must be provided is also broad, encompassing “all encounters, procedures, medical tests, … provided or assessed in connection with the provision of health care.” Services related to mental health and substance use disorders are specifically included.

What steps do I need to take and when?

Psychologists are ethically obligated to discuss fees with patients upfront. This new requirement builds on that by adding more structure and specific timeframes for action.


Note: Compliance is not optional. This is a new consumer-protection health care law in the United States.