Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, October 1, 2021

The prisoner’s dilemma: The role of medical professionals in executions

Elisabeth Armstrong
Journal of Medical Ethics
Originally posted 7 Sept 21

Here is an excerpt:

Clinician Participation in Executions is Either Wrong or Misguided

Clinicians might participate in executions out of an inappropriate commitment to capital punishment; this position of leveraging medical education and credentials to punish or harm has no grounding in ethical conversation. It is entirely inappropriate to undermine trust in the medical profession in service of one’s political or philosophical beliefs – those ought to be relegated to the voting booths.

However, some practitioners might be present at an execution out of a well-intentioned but misguided commitment to preventing suffering. Their reasoning runs along the lines of: “If states are proceeding with an execution, shouldn’t a clinician be present to ensure there is no undue harm or suffering?” Writing on lethal injections in the New York Times, Dr. Sandeep Jauhar argues, “Barring physicians from executions will only increase the risk that prisoners will unduly suffer,” in violation of the Hippocratic Oath and the 8th Amendment of the US Constitution. He points out that no ethics board would allow the testing of execution drugs on human participants; therefore, in the absence of a “controlled investigation,” it is important that a doctor be present to assist when things go awry.

Dr. Jauhar adds that when doctors (or other clinicians) do not assist, people with less experience are called upon to insert catheters, place and assess IVs, mix and administer the drugs, monitor the patient’s vital signs, and confirm death; and, of course, to step in if anything goes wrong. Dr. Atul Gawande agrees that, without a physician, lethal injections cannot reliably be performed free of the occasional tragic mistake. As recently as October 2014, a lack of clinician involvement led to an incorrect drug being administered to an inmate, who writhed and groaned for forty-three minutes before he died.

The Case for Ending Practitioner Participation

There is no denying that these cases of suffering are disturbing and compelling. Ultimately, however, the bioethical case for participation is grossly outweighed by the case against it: medical involvement on any level intrinsically violates the ethical principles of autonomy, beneficence, non-maleficence, and justice – compromising the foundations of the medical system.

Thursday, September 30, 2021

Generosity pays: Selfish people have fewer children and earn less money

Eriksson, K., Vartanova, I., et al. (2020)
Journal of Personality and Social Psychology, 
118(3), 532–544. 
https://doi.org/10.1037/pspp0000213

Abstract

Does selfishness pay in the long term? Previous research has indicated that being prosocial (or otherish) rather than selfish has positive consequences for psychological well-being, physical health, and relationships. Here we instead examine the consequences for individuals’ incomes and number of children, as these are the currencies that matter most in theories that emphasize the power of self-interest, namely economics and evolutionary thinking. Drawing on both cross-sectional (Studies 1 and 2) and panel data (Studies 3 and 4), we find that prosocial individuals tend to have more children and higher income than selfish individuals. An additional survey (Study 5) of lay beliefs about how self-interest impacts income and fertility suggests one reason selfish people may persist in their behavior even though it leads to poorer outcomes: people generally expect selfish individuals to have higher incomes. Our findings have implications for lay decisions about the allocation of scarce resources, as well as for economic and evolutionary theories of human behavior.

From the General Discussion

Our findings also speak to theories of the evolutionary history of otherishness in humans. It is often assumed that evolution promotes selfishness unless group selection acts as a counter-force (Sober & Wilson, 1999), possibly combined with a punishment mechanism to offset the advantage of being selfish (Henrich & Boyd, 2001). The finding that otherishness is associated with greater fertility within populations indicates that selfishness is not necessarily advantageous in the first place. Our datasets are limited to Europe and the United States, but if the mechanisms we sketched above are correct then we should also expect a similarly positive effect of otherishness on fertility in other parts of the world.

Our results paint a more complex picture for income, compared to fertility. Whereas otherish people tended to show the largest increases in incomes over time, the majority of our studies indicated that the highest absolute levels of income were associated with moderate otherishness. There are several ways in which otherishness may influence income levels and income trajectories. As noted earlier, otherish people tend to have stronger relations and social networks, and social networks are a key source of information about job opportunities (Granovetter, 1995).

Wednesday, September 29, 2021

A new framework for the psychology of norms

Westra, E., & Andrews, K. (2021, July 9).

Abstract

Social norms – rules that dictate which behaviors are appropriate, permissible, or obligatory in different situations for members of a given community – permeate all aspects of human life. Many researchers have sought to explain the ubiquity of social norms in human life in terms of the psychological mechanisms underlying their acquisition, conformity, and enforcement. Existing theories of the psychology of social norms appeal to a variety of constructs, from prediction-error minimization, to reinforcement learning, to shared intentionality, to evolved psychological adaptations. However, most of these accounts share what we call the psychological unity assumption, which holds that there is something psychologically distinctive about social norms, and that social norm adherence is driven by a single system or process. We argue that this assumption is mistaken. In this paper, we propose a methodological and conceptual framework for the cognitive science of social norms that we call normative pluralism. According to this framework, we should treat norms first and foremost as a community-level pattern of social behavior that might be realized by a variety of different cognitive, motivational, and ecological mechanisms. Norm psychologists should not presuppose that social norms are underpinned by a unified set of processes, nor that there is anything particularly distinctive about normative cognition as such. We argue that this pluralistic approach offers a methodologically sound point of departure for a fruitful and rigorous science of norms.

Conclusion

The central thesis of this paper – what we’ve called normative pluralism – is that we should not take the psychological unity of social norms for granted. Social norms might be underpinned by a domain-specific norm system or by a single type of cognitive process, but they might also be the product of many different processes. In our methodological proposal, we outlined a novel, non-psychological conception of social norms – what we’ve called normative regularities – and defined the core components of a psychology of norms in light of this construct. In our empirical proposal, we argued that, thus defined, social norms emerge from a heterogeneous set of cognitive, affective, and ecological mechanisms.

Thinking about social norms in this way will undoubtedly make the cognitive science of norms more complex and messy. If we are correct, however, then this will simply be a reflection of the complexity and messiness of social norms themselves. Taking a pluralistic approach to social norms allows us to explore the potential variability inherent to norm-governed behavior, which can help us to better understand how social norms shape our lives, and how they manifest themselves throughout the natural world.

Tuesday, September 28, 2021

Moral Injury During the COVID-19 Pandemic

Borges LM, Barnes SM, et al.
Psychol Trauma. 2020 Aug;12(S1):S138-S140. 
doi: 10.1037/tra0000698. Epub 2020 Jun 4. PMID: 32496101.

Here is an excerpt:

Moral injury in COVID-19 may be related to, but is distinct from: 1) burnout, 2) adjustment disorders, 3) depression, 4) traumatic stress/PTSD, 5) moral injury in the military, and 6) moral distress. Moral injury may be a contributing factor to burnout, adjustment disorders, or depression, but they are not equivalent. The diagnosis of PTSD requires a qualifying exposure to a traumatic stressor, whereas experiencing a moral injury does not. Moral injury in the military has been addressed in a different population and particularly after deployment, and its lessons may not be generalizable to moral injury during COVID-19, which we are seeing acutely among healthcare workers. Finally, moral distress may be a precursor to moral injury, but the terms are not interchangeable. Previous literature has noted that moral distress signals a need for systemic change because it is generated by systemic issues. Thus, moral distress can serve as a guide for healthcare improvement, and rapid systemic interventions to address moral distress may help to prevent and mitigate the impact of moral injury.

While not a mental disorder itself, moral injury undermines core capacities for well-being, including a sense of ongoing value-laden actions, competence to face and meet challenges, and feelings of belonging and meaning. Moral injury is associated with strong feelings of shame and guilt and with intense self-condemnation and a shattered core sense of self. Clinical observations suggest that uncertainty in decision-making may increase the likelihood or intensity of moral injury.

In the context of a public health disaster such as the COVID-19 pandemic, acknowledgement of the need to transition from ordinary standards of care to crisis standards of care can be both necessary and helpful to 1) provide a framework upon which to make difficult and ethically fraught decisions and 2) alleviate some of the moral distress and indeed moral injury that may otherwise be experienced in the absence of such guidance. The pandemic forces us to confront challenging questions for which there are no clear answers, and to make “lose-lose” choices in which no one involved ends up feeling satisfied or even comfortable.

Monday, September 27, 2021

An African Theory of Moral Status: A Relational Alternative to Individualism and Holism.

Metz, T. (2012).
Ethic Theory Moral Prac 15, 387–402. 
https://doi.org/10.1007/s10677-011-9302-y

Abstract

The dominant conceptions of moral status in the English-speaking literature are either holist or individualist, neither of which accounts well for widespread judgments that: animals and humans both have moral status that is of the same kind but different in degree; even a severely mentally incapacitated human being has a greater moral status than an animal with identical internal properties; and a newborn infant has a greater moral status than a mid-to-late stage foetus. Holists accord no moral status to any of these beings, assigning it only to groups to which they belong, while individualists such as welfarists grant an equal moral status to humans and many animals, and Kantians accord no moral status either to animals or severely mentally incapacitated humans. I argue that an underexplored, modal-relational perspective does a better job of accounting for degrees of moral status. According to modal-relationalism, something has moral status insofar as it is capable of having a certain causal or intensional connection with another being. I articulate a novel instance of modal-relationalism grounded in salient sub-Saharan moral views, roughly according to which the greater a being's capacity to be part of a communal relationship with us, the greater its moral status. I then demonstrate that this new, African-based theory entails and plausibly explains the above judgments, among others, in a unified way.

From the end of the article:

Those deeply committed to holism and individualism, or even a combination of them, may well not be convinced by this discussion. Diehard holists will reject the idea that anything other than a group can ground moral status, while pure individualists will reject the recurrent suggestion that two beings that are internally identical (foetus v neonate, severely mentally incapacitated human v animal) could differ in their moral status. However, my aim has not been to convince anyone to change her mind, or even to provide a complete justification for doing so. My goals have instead been the more limited ones of articulating a new, modal-relational account of moral status grounded in sub-Saharan moral philosophy, demonstrating that it avoids the severe parochialism facing existing relational accounts, and showing that it accounts better than standard Western theories for a variety of widely shared intuitions about what has moral status and to what degree. Many of these intuitions are captured by neither holism nor individualism and have lacked a firm philosophical foundation up to now. Of importance here is the African theory’s promise to underwrite the ideas that humans and animals have a moral status grounded in the same property that differs in degree, that severely mentally incapacitated humans have a greater moral status than animals with the same internal properties, and that a human’s moral status increases as it develops from the embryonic to foetal to neo-natal stages.

Sunday, September 26, 2021

Better the Two Devils You Know, Than the One You Don’t: Predictability Influences Moral Judgments of Immoral Actors

Walker, A. C., et al. (2020, March 24).

Abstract

Across six studies (N = 2,646), we demonstrate the role that perceptions of predictability play in judgments of moral character, finding that people demonstrate a moral preference for more predictable immoral actors. Participants judged agents performing an immoral action (e.g., assault) for an unintelligible reason as less predictable and less moral than agents performing the same immoral action, along with an additional immoral action (e.g., theft), for a well-understood immoral reason (Studies 1-4). Additionally, agents performing an immoral action for an unintelligible reason were judged as less predictable and less moral compared to agents performing the same immoral act for an unstated reason (Studies 3-5). This moral preference persisted when participants viewed video footage of each agent’s immoral action (Study 5). Finally, agents performing immoral actions in an unusual way were judged as less predictable and less moral than those performing the same actions in a more common manner (Study 6). The present research demonstrates how immoral actions performed without a clear motive or in an unpredictable way are perceived to be especially indicative of poor moral character. In revealing people’s moral preference for predictable immoral actors, we propose that perceptions of predictability play an important, yet overlooked, role in judgments of moral character. Furthermore, we propose that predictability influences judgments of moral character for its ultimate role in reducing social uncertainty and facilitating cooperation with trustworthy individuals and discuss how these findings may be accommodated by person-centered theories of moral judgment and theories of morality-as-cooperation.

From the Discussion

From traditional act-based perspectives (e.g., deontology and utilitarianism; Kant, 1785/1959; Mill, 1861/1998) this moral preference may appear puzzling, as participants judged actors causing more harm and violating more moral rules as more moral. Nevertheless, recent work suggests that people view actions not as the endpoint of moral evaluation, but as a source of information for assessing the moral character of those who perform them (Tannenbaum et al., 2011; Uhlmann et al., 2013). From this person-centered perspective (Pizarro & Tannenbaum, 2011; Uhlmann et al., 2015), a moral preference for more predictable immoral actors can be understood as participants judging the same immoral action (e.g., assault) as more indicative of negative character traits (e.g., a lack of empathy) when performed without an intelligible motive. That is, a person assaulting a stranger seemingly without reason or in an unusual manner (e.g., with a frozen fish) may be viewed as a more inherently unstable, violent, and immoral person compared to an individual performing an identical assault for a well-understood reason (e.g., to escape punishment for a crime in progress). Such negative character assessments may lead unpredictable immoral actors to be considered a greater risk for causing future harms of uncertain severity to potentially random victims. Consistent with these claims, past work has shown that people judge those performing harmless-but-offensive acts (e.g., masturbating inside a dead chicken) as not only possessing more negative character traits compared to others performing more harmful acts (e.g., theft), but also as likely to engage in more harmful actions in the future (Chakroff et al., 2017; Uhlmann & Zhu, 2014).

Saturday, September 25, 2021

The prefrontal cortex and (uniquely) human cooperation: a comparative perspective

Zoh, Y., Chang, S.W.C. & Crockett, M.J.
Neuropsychopharmacol. (2021). 

Abstract

Humans have an exceptional ability to cooperate relative to many other species. We review the neural mechanisms supporting human cooperation, focusing on the prefrontal cortex. One key feature of human social life is the prevalence of cooperative norms that guide social behavior and prescribe punishment for noncompliance. Taking a comparative approach, we consider shared and unique aspects of cooperative behaviors in humans relative to nonhuman primates, as well as divergences in brain structure that might support uniquely human aspects of cooperation. We highlight a medial prefrontal network common to nonhuman primates and humans supporting a foundational process in cooperative decision-making: valuing outcomes for oneself and others. This medial prefrontal network interacts with lateral prefrontal areas that are thought to represent cooperative norms and modulate value representations to guide behavior appropriate to the local social context. Finally, we propose that more recently evolved anterior regions of prefrontal cortex play a role in arbitrating between cooperative norms across social contexts, and suggest how future research might fruitfully examine the neural basis of norm arbitration.

Conclusion

The prefrontal cortex, in particular its more anterior regions, has expanded dramatically over the course of human evolution. In tandem, the scale and scope of human cooperation have dramatically outpaced their counterparts in nonhuman primate species, manifesting as complex systems of moral codes that guide normative behaviors even in the absence of punishment or repeated interactions. Here, we provided a selective review of the neural basis of human cooperation, taking a comparative approach to identify the brain systems and social behaviors that are thought to be unique to humans. Humans and nonhuman primates alike cooperate on the basis of kinship and reciprocity, but humans are unique in their abilities to represent shared goals and self-regulate to comply with and enforce cooperative norms on a broad scale. We highlight three prefrontal networks that contribute to cooperative behavior in humans: a medial prefrontal network, common to humans and nonhuman primates, that values outcomes for self and others; a lateral prefrontal network that guides cooperative goal pursuit by modulating value representations in the context of local norms; and an anterior prefrontal network that we propose serves uniquely human abilities to reflect on one’s own behavior, commit to shared social contracts, and arbitrate between cooperative norms across diverse social contexts. We suggest future avenues for investigating cooperative norm arbitration and how it is implemented in prefrontal networks.

Friday, September 24, 2021

Hanlon’s Razor

N. Ballantyne and P. H. Ditto
Midwest Studies in Philosophy
August 2021

Abstract

“Never attribute to malice that which is adequately explained by stupidity” – so says Hanlon’s Razor. This principle is designed to curb the human tendency toward explaining other people’s behavior by moralizing it. In this article, we ask whether Hanlon’s Razor is good or bad advice. After offering a nuanced interpretation of the principle, we critically evaluate two strategies purporting to show it is good advice. Our discussion highlights important, unsettled questions about an idea that has the potential to infuse greater humility and civility into discourse and debate.

From the Conclusion

Is Hanlon’s Razor good or bad advice? In this essay, we criticized two proposals in favor of the Razor.  One sees the benefits of the principle in terms of making us more accurate. The other sees benefits in terms of making us more charitable. Our discussion has been preliminary, but we hope careful empirical investigation can illuminate when and why the Razor is beneficial, if it is. For the time being, what else can we say about the Razor?

The Razor attempts to address the problem of detecting facts that explain opponents’ mistakes. Why do our opponents screw up? For hypermoralists, detecting stupidity in the noise of malice can be difficult: we are too eager to attribute bad motives and unsavory character to people who disagree with us. When we try to explain their mistakes, we are subject to two distinct errors:

Misidentifying-stupidity error: attributing an error to malice that is due to stupidity

Misidentifying-malice error: attributing an error to stupidity that is due to malice 

The idea driving the Razor is simple enough. People make misidentifying-stupidity errors too frequently and they should minimize those errors by risking misidentifying-malice errors. The Razor attempts to adjust our criterion for detecting the source of opponents’ mistakes. People should see stupidity more often in their opponents, even if that means they sometimes see stupidity where there is in fact malice. 
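The criterion language here invites a signal-detection reading. The sketch below is our own illustration, not anything from Ballantyne and Ditto’s paper: it treats each mistake by an opponent as emitting a noisy “evidence of malice” score and attributes malice only when that score clears a threshold. The base rates, score distributions, and threshold values are all hypothetical.

```python
import random

random.seed(0)  # reproducible illustration

def classify(evidence, threshold):
    """Attribute malice only if the evidence clears the criterion."""
    return "malice" if evidence >= threshold else "stupidity"

def error_rates(cases, threshold):
    """Tally the paper's two error types for a given criterion."""
    misidentified_stupidity = 0  # blamed malice; it was really stupidity
    misidentified_malice = 0     # blamed stupidity; it was really malice
    for truth, evidence in cases:
        verdict = classify(evidence, threshold)
        if verdict == "malice" and truth == "stupidity":
            misidentified_stupidity += 1
        elif verdict == "stupidity" and truth == "malice":
            misidentified_malice += 1
    return misidentified_stupidity, misidentified_malice

# Hypothetical population of opponents' mistakes: most stem from
# stupidity, and the "evidence of malice" each mistake emits is noisy.
cases = (
    [("stupidity", random.gauss(0.35, 0.15)) for _ in range(800)]
    + [("malice", random.gauss(0.65, 0.15)) for _ in range(200)]
)

# Raising the threshold is the Razor's advice: demand more evidence
# before attributing malice.
for threshold in (0.4, 0.5, 0.7):
    ms, mm = error_rates(cases, threshold)
    print(f"threshold {threshold}: "
          f"misidentifying-stupidity errors = {ms}, "
          f"misidentifying-malice errors = {mm}")
```

Running it shows the trade-off the Razor asks us to accept: as the threshold rises, misidentifying-stupidity errors fall while misidentifying-malice errors climb, and whether that exchange is worth making is precisely the empirical question the authors leave open.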

Thursday, September 23, 2021

The Execution Hypothesis for the Evolution of a Morality of Fairness

R. Wrangham
Ethics & Politics
XXIII, 2021, 261-282

Abstract

Humans are both the only species known to have a morality of fairness, and the only species in which the social hierarchy is headed by an alliance (a ‘reverse dominance hierarchy’). I present evidence in support of the argument by Boehm (1999, 2012) that these two features are causally linked. The reverse dominance hierarchy is detectable in the fossil record around 300,000 years ago with the origin of Homo sapiens. From then onwards, according to the execution hypothesis, an alliance of adult males held the power of life and death over all members of the social group, and they used this power to advance their interests. The result was an intense selective pressure against antisocial behaviour and in favour of prosociality, cooperation and conformity to group norms, whether the norms were beneficial for the group as a whole or merely for the male alliance. The execution hypothesis thus argues that group dynamics have operated for at least 12,000 generations to favour the evolution of moral emotions, many of which are designed to protect individuals from the threat of severe punishment or death at the hands of a dominant alliance of males.

(cut)

The Persistent Importance of Moral Enforcement

Ever since Durkheim (1902), hunter-gatherers and others living in small-scale, acephalous bands have been known to live by a set of norms that categorize numerous behaviours as right or wrong. Morally circumscribed behaviours concern food, sharing, sexuality, marriage partners, emotional expression, disrespect, secret societies and much else, and are the topic of much daily conversation. To judge from one detailed study of Ju/’hoansi Bushmen hunter-gatherers, moral enforcement comes more from punishment than reward, with males being sanctioned more than females (Wiessner, 2005).