Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, January 6, 2022

Lizard people, deadly orgies and JFK: How QAnon hijacked Hollywood to spread conspiracies

Anita Chabria
The Los Angeles Times
Originally posted 7 DEC 21

Here is an excerpt:

Bloom, the extremism researcher, said the familiarity of recycled Hollywood plots may be part of what eases followers into QAnon’s depths: Although the claims are outlandish, they tickle at recollections, whether from fiction or reality.

That sense of recognition gives them a level of believability, said Bloom — an “I’ve heard that before” effect. Part of the QAnon logic, she said, is that films and television shows that contain the conspiratorial story lines are viewed by believers as kernels of truth dropped by the elites — a sort of taunting acknowledgment of their misdeeds.

“Part of the idea is that Hollywood has been doing this for ages and ages, and they have been hiding in plain sight by putting it in film,” Bloom said.

That idea of using art to hide life is sometimes reinforced by actual events, she said. She uses the example of the 1999 Stanley Kubrick film “Eyes Wide Shut,” about a New York doctor, played by Tom Cruise, who stumbles into a deadly orgy attended by society’s power players. Some QAnon adherents believe that there is a cabal of influential pedophiles who murder children during ritualistic events to harvest a hormone that provides eternal youth and that the film was a nod to that activity.

She points to the Jeffrey Epstein case, and the current trial of his confidante, Ghislaine Maxwell, as factual instances of high-profile sex-trafficking allegations that seem pulled from those story lines, and have now been folded into the QAnon narratives.

“That’s one of the reasons some of the more outlandish things resonate, is because it sort of seems plausible,” Bloom said.

She also points out that the fear of sacrifices fits with the antisemitic trajectory of QAnon: it ties into the centuries-old "blood libel" conspiracy, the false belief that Jewish people kill Christians for their blood, which in turn can be tied to myths of European vampires.

Wednesday, January 5, 2022

Outrage Fatigue? Cognitive Costs and Decisions to Blame

Bambrah, V., Cameron, D., & Inzlicht, M.
(2021, November 30).

Abstract

Across nine studies (N=1,672), we assessed the link between cognitive costs and the choice to express outrage by blaming. We developed the Blame Selection Task, a binary free-choice paradigm that examines the propensity to blame transgressors (versus an alternative choice)—either before or after reading vignettes and viewing images of moral transgressions. We hypothesized that participants’ choice to blame wrongdoers would negatively relate to how cognitively inefficacious, effortful, and aversive blaming feels (compared to the alternative choice). With vignettes, participants approached blaming and reported that blaming felt more efficacious. With images, participants avoided blaming and reported that blaming felt more inefficacious, effortful, and aversive. Blame choice was greater for vignette-based transgressions than image-based transgressions. Blame choice was positively related to moral personality constructs, blame-related social norms, and perceived efficacy of blaming, and inversely related to perceived effort and aversiveness of blaming. The BST is a valid behavioral index of blame propensity, and choosing to blame is linked to its cognitive costs.

Discussion

Moral norm violations cause people to experience moral outrage and to express it in various ways (Crockett, 2017), such as shaming/dehumanizing, punishing, or blaming. These forms of expressing outrage are less than moderately related to one another (r’s < .30; see Bastian et al., 2013 for more information), which suggests that a considerable amount of variance between shaming/dehumanizing, punishing, and blaming remains unexplained and that these are sufficiently distinct expressions of outrage in response to norm violations. Yet, despite its moralistic implications (see Crockett, 2017), there is still little empirical work not only on the phenomenon of outrage fatigue but also on the role of motivated cognition in expressing outrage via blame. Social costs alter blame judgments, even when people’s cognitive resources are depleted (Monroe & Malle, 2019). But how do the inherent cognitive costs of blaming relate to people’s decisions towards moral outrage and blame? Here, we examined how felt cognitive costs associate with the choice to express outrage through blame.

Tuesday, January 4, 2022

Changing impressions in competence-oriented domains: The primacy of morality endures

A. Luttrell, S. Sacchi, & M. Brambilla
Journal of Experimental Social Psychology
Volume 98, January 2022, 104246

Abstract

The Moral Primacy Model proposes that throughout the multiple stages of developing impressions of others, information about the target's morality is more influential than information about their competence or sociability. Would morality continue to exert outsized influence on impressions in the context of a decision for which people view competence as the most important attribute? In three experiments, we used an impression updating paradigm to test how much information about a target's morality versus competence changed perceivers' impressions of a job candidate. Despite several pilot studies in which people said they would prioritize competence over morality when deciding to hire a potential employee, results of the main studies reveal that impressions changed more when people received new information about a target's immorality than about his incompetence. This moral primacy effect held both for global impressions and willingness to hire the target, but direct effects on evaluations of the target as an employee did not consistently emerge. When the new information about the target was positive, we did not reliably observe a moral primacy effect. These findings provide important insight on the generalizability of moral primacy in impression updating.

Highlights

• People reported that hiring decisions should favor competence over morality.

• Impressions of a job candidate changed more based on his morality (vs. competence).

• Moral primacy in this context emerged only when the new information was negative.

• Moral primacy occurred for general impressions more than hiring-specific judgments.

Conclusion

In sum, we tested the boundaries of moral primacy and found that even in a context where other dimensions could dominate, information about a job candidate's immorality continued to have disproportionate influence on general impressions of him and evaluations of his suitability as an employee. However, our findings further show that the relative effect of negative moral versus competence information on domain-specific judgments tended to be smaller than effects on general impressions. In addition, unlike prior research on impression updating (Brambilla et al., 2019), we observed no evidence for moral primacy in this context when the new information was positive (although this pattern may be indicative of a more general valence asymmetry in the effects of morally relevant information). Together, these findings provide an important extension of the Moral Primacy Model but also provide useful insight on the generalizability of the effect.

Monday, January 3, 2022

Systemic Considerations in Child Development and the Pursuit of Racial Equality in the United States

Perry, S., Skinner-Dorkenoo, A. L., 
Wages, J., & Abaied, J. L. (2021, October 8). 

Abstract

In this commentary on Lewis’ (2021) article in Psychological Inquiry, we expand on ways that both systemic and interpersonal contexts contribute to and uphold racial inequalities, with a particular focus on research on child development and socialization. We also discuss the potential roadblocks that may undermine the effectiveness of Lewis’ (2021) recommended strategy of relying on experts as a driving force for change. We conclude by proposing additional strategies for pursuing racial equality that may increase the impact of experts, such as starting anti-racist socialization early in development, family-level interventions, and teaching people about racial injustices and their connections to systemic racism.

From the Conclusion

Ultimately, the expert (Myrdal) concluded that the problem was White people and how they think about and structure society. Despite the immense popularity of his book among the American public and the fact that it did motivate some policy change (Brown v. Board of Education, Warren & Supreme Court of the United States, 1953), many of the same issues persist to this day. As such, we argue that, although relying on experts may be an appealing recommendation, history suggests that our efforts to reduce racial inequality in the U.S. will require substantial, widespread investment from White U.S. residents in order for real change to occur. Based on the literature reviewed here, significant barriers to such investment remain, many of which begin in early childhood. Beyond pursuing policies that promote structural equality on the advice of experts in ways that do not trigger backlash, we should support policies that educate the public—with a special emphasis on childhood socialization—on the history of systemic racism and the past and continued intentional efforts to create and maintain racial inequalities.

Building upon recommendations offered by Lewis, we also argue that we need to move the societal bar from simply being non-racist to being actively anti-racist. As a society, we need to recalibrate our norms, such that passively going along with systemic racism will no longer be acceptable (Tatum, 2017). In the summer of 2020, after the police killings of George Floyd and Breonna Taylor, many organizations released statements in support of the Black Lives Matter movement, confronting systemic racism, and increasing social justice (Nguyen, 2020). But one question that many posed was whether these organizations and institutions were genuinely committed to tackling systemic racism, or if their acts were performative (Duarte, 2020). If groups, organizations, and institutions want to claim that they are committed to anti-racism, then they should be held accountable for these claims and provide concrete evidence of their efforts to dismantle the pervasive system of racial oppression. In addition, we recommend a greater investment in educating the public on the history of systemic racism (particularly with children, such as through the Ethnic Studies Model Curriculum implemented in the state of California), prompting White parents to be actively anti-racist and to teach their children to do the same, and equitable structural policies that facilitate residential and school racial integration to increase quality interracial contact.

Sunday, January 2, 2022

Towards a Theory of Justice for Artificial Intelligence

Iason Gabriel
Forthcoming in Daedalus vol. 151, 
no. 2, Spring 2022

Abstract 

This paper explores the relationship between artificial intelligence and principles of distributive justice. Drawing upon the political philosophy of John Rawls, it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens' rights, and promote substantively fair outcomes, which requires that specific attention be paid to the impact they have on the worst-off members of society.

Here is the conclusion:

Second, the demand for public justification in the context of AI deployment may well extend beyond the basic structure. As Langdon Winner argues, when the impact of a technology is sufficiently great, this fact is, by itself, sufficient to generate a free-standing requirement that citizens be consulted and given an opportunity to influence decisions.  Absent such a right, citizens would cede too much control over the future to private actors – something that sits in tension with the idea that they are free and equal. Against this claim, it might be objected that it extends the domain of political justification too far – in a way that risks crowding out room for private experimentation, exploration, and the development of projects by citizens and organizations. However, the objection rests upon the mistaken view that autonomy is promoted by restricting the scope of justificatory practices to as narrow a subject matter as possible. In reality this is not the case: what matters for individual liberty is that practices that have the potential to interfere with this freedom are appropriately regulated so that infractions do not come about. Understood in this way, the demand for public justification stands in opposition not to personal freedom but to forms of unjust imposition.

The call for justice in the context of AI is well-founded. Looked at through the lens of distributive justice, key principles that govern the fair organization of our social, political, and economic institutions also apply to AI systems that are embedded in these practices. One major consequence of this is that liberal and egalitarian norms of justice apply to AI tools and services across a range of contexts. When they are integrated into society’s basic structure, these technologies should support citizens’ basic liberties, promote fair equality of opportunity, and provide the greatest benefit to those who are worst-off. Moreover, deployments of AI outside of the basic structure must still be compatible with the institutions and values that justice requires. There will always be valid reasons, therefore, to consider the relationship of technology to justice when it comes to the deployment of AI systems.

Saturday, January 1, 2022

New billing disclosure requirements take effect in 2022

American Psychological Association
Originally published 10 December 21

All mental health providers will need to provide estimated costs of services before starting treatment.

Beginning January 1, 2022, psychologists and other health care providers will be required by law to give uninsured and self-pay patients a good faith estimate of costs for services that they offer, when scheduling care or when the patient requests an estimate.

This new requirement was finalized in regulations issued October 7, 2021. The regulations implement part of the “No Surprises Act,” enacted in December 2020 as part of a broad package of COVID- and spending-related legislation. The act aims to reduce the likelihood that patients may receive a “surprise” medical bill by requiring that providers inform patients of an expected charge for a service before the service is provided. The government will also soon issue regulations requiring psychologists to give good faith estimates to commercial or government insurers, when the patient has insurance and plans to use it.

Psychologists working in group practices or larger organizational settings and facilities will likely receive direction from their compliance department or lawyers on how to satisfy this new requirement.

Read on for answers to FAQs that apply to practicing psychologists who treat uninsured or self-pay patients.

What providers and what services are subject to this rule?
“Provider” is defined broadly to include any health care provider who is acting within the scope of the provider’s license or certification under applicable state law. Psychologists meet that definition. 

The definition of “items and services” for which the good faith estimate must be provided is also broad, encompassing “all encounters, procedures, medical tests, … provided or assessed in connection with the provision of health care.” Services related to mental health and substance use disorders are specifically included.

What steps do I need to take and when?

Psychologists are ethically obligated to discuss fees with patients upfront. This new requirement builds on that by adding more structure and specific timeframes for action.


Note: Compliance is not optional. This is a new consumer-protection health care law in the United States.

Friday, December 31, 2021

Dear White People: Here Are 5 Uncomfortable Truths Black Colleagues Need You To Know

Dana Brownlee
Forbes.com
Originally posted 16 June 2020

While no one has a precise prescription for how to eradicate racial injustice in the workplace, I firmly believe that a critical first step is embracing the difficult conversations and uncomfortable truths that we’ve become too accustomed to avoiding. The baseline uncomfortable truth is that blacks and whites in corporate America often maintain their own subcultures, including very different informal conversations in the workplace, with surprisingly little overlap at times. To be perfectly honest, as a black woman who has worked in and around corporate America for nearly 30 years, I’ve typically only been privy to the black side of the conversation. But I think in this moment, where everyone is looking for opportunities to either teach, learn or grow, it’s instructive if not necessary to break down the traditional siloes and speak the unspeakable. So in this vein I’m sharing five critical “truths” that I feel many black people in corporate settings would vehemently discuss in “private” but not necessarily assert in “public.”

Here are the 5, plus a bonus.

Truth #1 - Racism doesn’t just show up in its most extreme form. There is indeed a continuum (of racist thoughts and behaviors), and you may be on it.

Truth #2 – Even if you personally haven’t offended anyone (that you know of), you may indeed be part of the problem.

Truth #3 – Every black person on your team is not your “friend.”

Truth #4 – Gender and race discrimination are not “essentially the same.”

Truth #5 – Even though there may be one or two black faces in leadership, your organization may indeed have a rampant racial injustice problem.

Bonus Truth #6: You can absolutely be part of the solution.

As workplaces tackle racism with a renewed sense of urgency amidst the worldwide Black Lives Matter protests, it’s imperative that they approach the problem of racism as they would any other serious business problem – methodically, intensely and with a sense of urgency and conviction.

Thursday, December 30, 2021

When Helping Is Risky: The Behavioral and Neurobiological Trade-off of Social and Risk Preferences

Gross, J., Faber, N. S., et al.  (2021).
Psychological Science, 32(11), 1842–1855.
https://doi.org/10.1177/09567976211015942

Abstract

Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual’s valuation of others’ well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences). We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.

From the Discussion

From a practical perspective, both methylphenidate (sold under the trade name Ritalin) and atomoxetine (sold under the trade name Strattera) are prescription drugs used to treat attention-deficit/hyperactivity disorder and are regularly used off-label by people who aim to enhance their cognitive performance (Maier et al., 2018). Thus, our results have implications for the ethics of and policy for the use of psychostimulants. Indeed, the Global Drug Survey taken in 2015 and 2017 revealed that 3.2% and 6.6% of respondents, respectively, reported using psychostimulants such as methylphenidate for cognitive enhancement (Maier et al., 2018). Both in the professional ethical debate as well as in the general public, concerns about the medical safety and the fairness of such cognitive enhancements are discussed (Faber et al., 2016). However, our finding that methylphenidate alters helping behavior through increased risk seeking demonstrates that substances aimed at changing cognitive functioning can also influence social behavior. Such “social” side effects of cognitive enhancement (whether deemed positive or negative) are currently unknown to both users and administrators and thus do not receive much attention in the societal debate about psychostimulant use (Faulmüller et al., 2013).

Wednesday, December 29, 2021

Delphi: Towards Machine Ethics and Norms

Jiang, L., et al. (2021). 
ArXiv, abs/2110.07574.

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state ("thou shalt not kill"), applying such rules to real-world situations is far more complex. For example, while "helping a friend" is generally a good thing to do, "helping a friend spread fake news" is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

Our paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present Commonsense Norm Bank, a moral textbook customized for machines, which compiles 1.7M examples of people's ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.

From the Conclusion

Delphi’s impressive performance on machine moral reasoning under diverse compositional real-life situations highlights the importance of developing high-quality human-annotated datasets for people’s moral judgments. Finally, we demonstrate through systematic probing that Delphi still struggles with situations dependent on time or diverse cultures, and situations with social and demographic bias implications. We discuss the capabilities and limitations of Delphi throughout this paper and identify key directions in machine ethics for future work. We hope that our work opens up important avenues for future research in the emerging field of machine ethics, and we encourage collective efforts from our research community to tackle these research challenges.