Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, January 4, 2022

Changing impressions in competence-oriented domains: The primacy of morality endures

A. Luttrell, S. Sacchi, & M. Brambilla
Journal of Experimental Social Psychology
Volume 98, January 2022, 104246

Abstract

The Moral Primacy Model proposes that throughout the multiple stages of developing impressions of others, information about the target's morality is more influential than information about their competence or sociability. Would morality continue to exert outsized influence on impressions in the context of a decision for which people view competence as the most important attribute? In three experiments, we used an impression updating paradigm to test how much information about a target's morality versus competence changed perceivers' impressions of a job candidate. Despite several pilot studies in which people said they would prioritize competence over morality when deciding to hire a potential employee, results of the main studies reveal that impressions changed more when people received new information about a target's immorality than about his incompetence. This moral primacy effect held both for global impressions and willingness to hire the target, but direct effects on evaluations of the target as an employee did not consistently emerge. When the new information about the target was positive, we did not reliably observe a moral primacy effect. These findings provide important insight on the generalizability of moral primacy in impression updating.

Highlights

• People reported that hiring decisions should favor competence over morality.

• Impressions of a job candidate changed more based on his morality (vs. competence).

• Moral primacy in this context emerged only when the new information was negative.

• Moral primacy occurred for general impressions more than hiring-specific judgments.

Conclusion

In sum, we tested the boundaries of moral primacy and found that even in a context where other dimensions could dominate, information about a job candidate's immorality continued to have disproportionate influence on general impressions of him and evaluations of his suitability as an employee. However, our findings further show that the relative effect of negative moral versus competence information on domain-specific judgments tended to be smaller than effects on general impressions. In addition, unlike prior research on impression updating (Brambilla et al., 2019), we observed no evidence for moral primacy in this context when the new information was positive (although this pattern may be indicative of a more general valence asymmetry in the effects of morally relevant information). Together, these findings provide an important extension of the Moral Primacy Model but also provide useful insight on the generalizability of the effect.

Monday, January 3, 2022

Systemic Considerations in Child Development and the Pursuit of Racial Equality in the United States

Perry, S., Skinner-Dorkenoo, A. L., Wages, J., & Abaied, J. L. (2021, October 8).

Abstract

In this commentary on Lewis’ (2021) article in Psychological Inquiry, we expand on ways that both systemic and interpersonal contexts contribute to and uphold racial inequalities, with a particular focus on research on child development and socialization. We also discuss the potential roadblocks that may undermine the effectiveness of Lewis’ (2021) recommended strategy of relying on experts as a driving force for change. We conclude by proposing additional strategies for pursuing racial equality that may increase the impact of experts, such as starting anti-racist socialization early in development, family-level interventions, and teaching people about racial injustices and their connections to systemic racism.

From the Conclusion

Ultimately, the expert (Myrdal) concluded that the problem was White people and how they think about and structure society. Despite the immense popularity of his book among the American public and the fact that it did motivate some policy change (Brown v. Board of Education, Warren & Supreme Court of the United States, 1953), many of the same issues persist to this day. As such, we argue that, although relying on experts may be an appealing recommendation, history suggests that our efforts to reduce racial inequality in the U.S. will require substantial, widespread investment from White U.S. residents in order for real change to occur. Based on the literature reviewed here, significant barriers to such investment remain, many of which begin in early childhood. Beyond pursuing policies that promote structural equality on the advice of experts in ways that do not trigger backlash, we should support policies that educate the public—with a special emphasis on childhood socialization—on the history of systemic racism and the past and continued intentional efforts to create and maintain racial inequalities.

Building upon recommendations offered by Lewis, we also argue that we need to move the societal bar from simply being non-racist, to being actively anti-racist. As a society, we need to recalibrate our norms, such that passively going along with systemic racism will no longer be acceptable (Tatum, 2017). In the summer of 2020, after the police killings of George Floyd and Breonna Taylor, many organizations released statements in support of the Black Lives Movement, confronting systemic racism, and increasing social justice (Nguyen, 2020). But one question that many posed was whether these organizations and institutions were genuinely committed to tackling systemic racism, or if their acts were performative (Duarte, 2020). If groups, organizations, and institutions want to claim that they are committed to anti-racism, then they should be held accountable for these claims and provide concrete evidence of their efforts to dismantle the pervasive system of racial oppression. In addition to this, we recommend a greater investment in educating the public on the history of systemic racism (particularly with children; such as the Ethnic Studies Model Curriculum implemented in the state of California), prompting White parents to actively be anti-racist and teach their children to do the same, and equitable structural policies that facilitate residential and school racial integration to increase quality interracial contact.

Sunday, January 2, 2022

Towards a Theory of Justice for Artificial Intelligence

Iason Gabriel
Forthcoming in Daedalus, vol. 151, no. 2, Spring 2022

Abstract 

This paper explores the relationship between artificial intelligence and principles of distributive justice. Drawing upon the political philosophy of John Rawls, it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens' rights, and promote substantively fair outcomes, which requires specific attention to the impact they have on the worst-off members of society.

From the Conclusion

Second, the demand for public justification in the context of AI deployment may well extend beyond the basic structure. As Langdon Winner argues, when the impact of a technology is sufficiently great, this fact is, by itself, sufficient to generate a free-standing requirement that citizens be consulted and given an opportunity to influence decisions.  Absent such a right, citizens would cede too much control over the future to private actors – something that sits in tension with the idea that they are free and equal. Against this claim, it might be objected that it extends the domain of political justification too far – in a way that risks crowding out room for private experimentation, exploration, and the development of projects by citizens and organizations. However, the objection rests upon the mistaken view that autonomy is promoted by restricting the scope of justificatory practices to as narrow a subject matter as possible. In reality this is not the case: what matters for individual liberty is that practices that have the potential to interfere with this freedom are appropriately regulated so that infractions do not come about. Understood in this way, the demand for public justification stands in opposition not to personal freedom but to forms of unjust imposition.

The call for justice in the context of AI is well-founded. Looked at through the lens of distributive justice, key principles that govern the fair organization of our social, political, and economic institutions also apply to AI systems that are embedded in these practices. One major consequence of this is that liberal and egalitarian norms of justice apply to AI tools and services across a range of contexts. When they are integrated into society’s basic structure, these technologies should support citizens’ basic liberties, promote fair equality of opportunity, and provide the greatest benefit to those who are worst-off. Moreover, deployments of AI outside of the basic structure must still be compatible with the institutions and values that justice requires. There will always be valid reasons, therefore, to consider the relationship of technology to justice when it comes to the deployment of AI systems.

Saturday, January 1, 2022

New billing disclosure requirements take effect in 2022

American Psychological Association
Originally published 10 December 2021

All mental health providers will need to provide estimated costs of services before starting treatment.

Beginning January 1, 2022, psychologists and other health care providers will be required by law to give uninsured and self-pay patients a good faith estimate of costs for services that they offer, when scheduling care or when the patient requests an estimate.

This new requirement was finalized in regulations issued October 7, 2021. The regulations implement part of the “No Surprises Act,” enacted in December 2020 as part of a broad package of COVID- and spending-related legislation. The act aims to reduce the likelihood that patients may receive a “surprise” medical bill by requiring that providers inform patients of an expected charge for a service before the service is provided. The government will also soon issue regulations requiring psychologists to give good faith estimates to commercial or government insurers, when the patient has insurance and plans to use it.

Psychologists working in group practices or larger organizational settings and facilities will likely receive direction from their compliance department or lawyers on how to satisfy this new requirement.

Read on for answers to FAQs that apply to practicing psychologists who treat uninsured or self-pay patients.

What providers and what services are subject to this rule?
“Provider” is defined broadly to include any health care provider who is acting within the scope of the provider’s license or certification under applicable state law. Psychologists meet that definition. 

The definition of “items and services” for which the good faith estimate must be provided is also broad, encompassing “all encounters, procedures, medical tests, … provided or assessed in connection with the provision of health care.” Services related to mental health and substance use disorders are specifically included.

What steps do I need to take and when?

Psychologists are ethically obligated to discuss fees with patients upfront. This new requirement builds on that by adding more structure and specific timeframes for action.


Note: Compliance is not optional. This is a new consumer-protection health care law in the United States.

Friday, December 31, 2021

Dear White People: Here Are 5 Uncomfortable Truths Black Colleagues Need You To Know

Dana Brownlee
Forbes.com
Originally posted 16 June 2020

While no one has a precise prescription for how to eradicate racial injustice in the workplace, I firmly believe that a critical first step is embracing the difficult conversations and uncomfortable truths that we’ve become too accustomed to avoiding. The baseline uncomfortable truth is that blacks and whites in corporate America often maintain their own subcultures, including very different informal conversations in the workplace, with surprisingly little overlap at times. To be perfectly honest, as a black woman who has worked in and around corporate America for nearly 30 years, I’ve typically only been privy to the black side of the conversation, but I think in this moment where everyone is looking for opportunities to either teach, learn or grow, it’s instructive if not necessary to break down the traditional silos and speak the unspeakable. So in this vein I’m sharing five critical “truths” that I feel many black people in corporate settings would vehemently discuss in “private” but not necessarily assert in “public.”

Here are the five truths, plus a bonus.

Truth #1 – Racism doesn’t just show up in its most extreme form. There is indeed a continuum (of racist thoughts and behaviors), and you may be on it.

Truth #2 – Even if you personally haven’t offended anyone (that you know of), you may indeed be part of the problem.

Truth #3 – Every black person on your team is not your “friend.”

Truth #4 – Gender and race discrimination are not “essentially the same.”

Truth #5 – Even though there may be one or two black faces in leadership, your organization may indeed have a rampant racial injustice problem.

Bonus Truth #6 – You can absolutely be part of the solution.

As workplaces tackle racism with a renewed sense of urgency amidst the worldwide Black Lives Matter protests, it’s imperative that they approach the problem of racism as they would any other serious business problem – methodically, intensely and with a sense of urgency and conviction.

Thursday, December 30, 2021

When Helping Is Risky: The Behavioral and Neurobiological Trade-off of Social and Risk Preferences

Gross, J., Faber, N. S., et al. (2021).
Psychological Science, 32(11), 1842–1855.
https://doi.org/10.1177/09567976211015942

Abstract

Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual’s valuation of others’ well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences). We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.

From the Discussion

From a practical perspective, both methylphenidate (sold under the trade name Ritalin) and atomoxetine (sold under the trade name Strattera) are prescription drugs used to treat attention-deficit/hyperactivity disorder and are regularly used off-label by people who aim to enhance their cognitive performance (Maier et al., 2018). Thus, our results have implications for the ethics of and policy for the use of psychostimulants. Indeed, the Global Drug Survey taken in 2015 and 2017 revealed that 3.2% and 6.6% of respondents, respectively, reported using psychostimulants such as methylphenidate for cognitive enhancement (Maier et al., 2018). Both in the professional ethical debate as well as in the general public, concerns about the medical safety and the fairness of such cognitive enhancements are discussed (Faber et al., 2016). However, our finding that methylphenidate alters helping behavior through increased risk seeking demonstrates that substances aimed at changing cognitive functioning can also influence social behavior. Such “social” side effects of cognitive enhancement (whether deemed positive or negative) are currently unknown to both users and administrators and thus do not receive much attention in the societal debate about psychostimulant use (Faulmüller et al., 2013).

Wednesday, December 29, 2021

Delphi: Towards Machine Ethics and Norms

Jiang, L., et al. (2021). 
ArXiv, abs/2110.07574.

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state ("thou shalt not kill"), applying such rules to real-world situations is far more complex. For example, while "helping a friend" is generally a good thing to do, "helping a friend spread fake news" is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

Our paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present Commonsense Norm Bank, a moral textbook customized for machines, which compiles 1.7M examples of people's ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.

From the Conclusion

Delphi’s impressive performance on machine moral reasoning under diverse compositional real-life situations highlights the importance of developing high-quality human-annotated datasets for people’s moral judgments. Finally, we demonstrate through systematic probing that Delphi still struggles with situations dependent on time or diverse cultures, and situations with social and demographic bias implications. We discuss the capabilities and limitations of Delphi throughout this paper and identify key directions in machine ethics for future work. We hope that our work opens up important avenues for future research in the emerging field of machine ethics, and we encourage collective efforts from our research community to tackle these research challenges.

Tuesday, December 28, 2021

Opt-out choice framing attenuates gender differences in the decision to compete in the laboratory and in the field

J. C. He, S. K. Kang, N. Lacetera
Proceedings of the National Academy of Sciences 
Oct 2021, 118 (42) e2108337118

Abstract

Research shows that women are less likely to enter competitions than men. This disparity may translate into a gender imbalance in holding leadership positions or ascending in organizations. We provide both laboratory and field experimental evidence that this difference can be attenuated with a default nudge—changing the choice to enter a competitive task from a default in which applicants must actively choose to compete to a default in which applicants are automatically enrolled in competition but can choose to opt out. Changing the default affects the perception of prevailing social norms about gender and competition as well as perceptions of the performance or ability threshold at which to apply. We do not find associated negative effects for performance or wellbeing. These results suggest that organizations could make use of opt-out promotion schemes to reduce the gender gap in competition and support the ascension of women to leadership positions.

Significance

How can we close the gender gap in high-level positions in organizations? Interventions such as unconscious bias training or the “lean in” approach have been largely ineffective. This article suggests, and experimentally tests, a “nudge” intervention, altering the choice architecture around the decision to apply for top positions from an “opt in” to an “opt out” default. Evidence from the laboratory and the field shows that a choice architecture in which applicants must opt out from competition reduces gender differences in competition. Opt-out framing thus seems to remove some of the bias inherent in current promotion systems, which favor those who are overconfident or like to compete. Importantly, we show that such an intervention is feasible and effective in the field.

From the Discussion

A practical implication of our studies is that organizations could attenuate the gender gap in competitions by moving from a default, in which applicants must opt in to apply, to a default whereby those who pass a performance and qualification threshold are automatically considered but can choose to opt out. Examples include promotions in organizations, participation in start-up pitch competitions, and innovation or creativity contests. Future work could examine similar interventions that circumvent the self-nomination aspect of opt-in schemes for competitive selection processes. For instance, rather than self-nomination, peer-nomination could attenuate the gender gap. The results of Study 2 also suggest that manipulating or nudging social norms could result in a similar effect.

Monday, December 27, 2021

An interaction effect of norm violations on causal judgment

Gill, M., Kominsky, J. F., Icard, T., & Knobe, J. (2021, October 19).

Abstract

Existing research has shown that norm violations influence causal judgments, and a number of different models have been developed to explain these effects. One such model, the necessity/sufficiency model, predicts an interaction pattern in people's judgments. Specifically, it predicts that when people are judging the degree to which a particular factor is a cause, there should be an interaction between (a) the degree to which that factor violates a norm and (b) the degree to which another factor in the situation violates norms. A study of moral norms (N = 1000) and norms of proper functioning (N = 3000) revealed robust evidence for the predicted interaction effect. The implications of these patterns for existing theories of causal judgment are discussed.

General discussion

Two experiments revealed a novel interaction effect of norm violations on causal judgment. First, the experiments replicated two basic phenomena: a focal event is rated as more causal when it is bad (“inflation”) and a focal event is rated as less causal when the alternative event is bad (“supersession”). Critically, the experiments showed that (1) the difference in causal ratings of the focal event when it is good vs. bad increases when the alternative event is bad (“inflation increase”) and (2) the difference in causal ratings of the focal event when the alternative event is bad vs. good decreases when the focal event is bad (“supersession decrease”).

Experiment 1 yielded this novel interaction effect in the context of moral norm violations (e.g., stealing a book from the library). Experiment 2 showed that the effect generalized to violations of norms of proper functioning (e.g., a part of a machine working incorrectly).

This interaction pattern is predicted by the necessity/sufficiency model (Icard et al., 2017). The success of this prediction is especially striking, in that the necessity/sufficiency model was not created with this interaction in mind. Rather, the model was originally created to explain inflation and supersession, and it was only noticed later that this model predicts an interaction in cases of this type.