Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, February 14, 2023

Helping the ingroup versus harming the outgroup: Evidence from morality-based groups

Grigoryan, L., Seo, S., Simunovic, D., & Hofmann, W.
Journal of Experimental Social Psychology
Volume 105, March 2023, 104436

Abstract

The discrepancy between ingroup favoritism and outgroup hostility is well established in social psychology. Under which conditions does “ingroup love” turn into “outgroup hate”? Studies with natural groups suggest that when group membership is based on (dis)similarity of moral beliefs, people are willing to not only help the ingroup, but also harm the outgroup. The key limitation of these studies is that the use of natural groups confounds the effects of shared morality with the history of intergroup relations. We tested the effect of morality-based group membership on intergroup behavior using artificial groups that help disentangle these effects. We used the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game which differentiates between behavioral options of weak parochialism (helping the ingroup), strong parochialism (harming the outgroup), universal cooperation (helping both groups), and egoism (profiting individually). In three preregistered experiments, we find that morality-based groups exhibit less egoism and more universal cooperation than non-morality-based groups. We also find some evidence of stronger ingroup favoritism in morality-based groups, but no evidence of stronger outgroup hostility. Stronger ingroup favoritism in morality-based groups is driven by expectations from the ingroup, but not the outgroup. These findings contradict earlier evidence from natural groups and suggest that (dis)similarity of moral beliefs is not sufficient to cross the boundary between “ingroup love” and “outgroup hate”.

General discussion

When does “ingroup love” turn into “outgroup hate”? Previous studies conducted on natural groups suggest that centrality of morality to the group’s identity is one such condition: morality-based groups showed more hostility towards outgroups than non-morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). We set out to test this hypothesis in a minimal group setting, using the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game. Across three preregistered studies, we found no evidence that morality-based groups show more hostility towards outgroups than non-morality-based groups. Instead, morality-based groups exhibited less egoism and more universal cooperation (helping both the ingroup and the outgroup) than non-morality-based groups. This finding is consistent with earlier research showing that salience of morality makes people more cooperative (Capraro et al., 2019). Importantly, our morality manipulation was not specific to any pro-cooperation moral norm. Simply asking participants to think about the criteria they use to judge what is right and what is wrong was enough to increase universal cooperation.
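To make the four IPUC behavioral options concrete, here is a minimal sketch of the game's choice structure in Python. The payoff numbers are hypothetical placeholders invented for illustration; the excerpt does not report the game's actual stakes.

```python
# Minimal sketch of the four IPUC behavioral options. All payoff values
# are hypothetical; the real IPUC parameters are specified in the paper.

from dataclasses import dataclass

@dataclass
class Allocation:
    self_payoff: int   # points kept by the decision maker
    ingroup: int       # points given to (or taken from) ingroup members
    outgroup: int      # points given to (or taken from) outgroup members

# Hypothetical endowment splits for each behavioral option.
IPUC_OPTIONS = {
    "egoism":                Allocation(self_payoff=10, ingroup=0,  outgroup=0),
    "weak_parochialism":     Allocation(self_payoff=5,  ingroup=10, outgroup=0),
    "strong_parochialism":   Allocation(self_payoff=5,  ingroup=10, outgroup=-10),
    "universal_cooperation": Allocation(self_payoff=5,  ingroup=10, outgroup=10),
}

for name, alloc in IPUC_OPTIONS.items():
    print(f"{name:>22}: self={alloc.self_payoff:+d}, "
          f"ingroup={alloc.ingroup:+d}, outgroup={alloc.outgroup:+d}")
```

The key contrast the studies exploit is visible in the last column: only strong parochialism imposes a cost on the outgroup, so it isolates "outgroup hate" from mere "ingroup love."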

Our findings are inconsistent with research showing stronger outgroup hostility in morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). The key difference between the set of studies presented here and the earlier studies that find outgroup hostility in morality-based groups is the use of natural groups in the latter. What potential confounding variables might account for the emergence of outgroup hostility in natural groups?

Monday, February 13, 2023

Belief in Persistent Moral Decline

West, B., & Pizarro, D. A. (2022, June 27).
https://doi.org/10.31234/osf.io/9swjb

Abstract

Across four studies (3 experimental, total n = 199; 1 archival, n = 186,000) we provide evidence that people hold the belief that the world is growing morally worse, and that this belief is consistent across generational, political, and religious lines. When asked directly about which aspects of society are getting better and which are getting worse, people are more likely to list the moral (compared to non-moral) aspects as getting worse (Studies 1-2). When provided with a list of items that are either moral or non-moral, people are more likely to report that moral (compared to non-moral) items are worsening (Study 3). Finally, when asked the question “What is the most important problem facing America today?” participants in a nationally representative survey (Heffington et al., 2019) disproportionately listed problems that fall within the moral domain (Study 4).

General Discussion

We found consistent and strong evidence that people think of social decline in more moral terms than they do social improvement (see Figure 1). Participants in our studies consistently listed more morally relevant items (Studies 1-2) when asked what they thought has gotten worse in society compared to what has gotten better. Participants also categorized items pre-coded for moral relevance as declining more frequently than improving (Study 3). Study 4 provided further evidence for our hypothesis that those things people think are problems in society tend to be morally relevant. The majority of the “most important problem[s]” facing America from 1939-2015 were issues relevant to moral values.

These findings provide evidence that in general, people tend to believe that our moral values are getting worse over time. We propose that this moral pessimism may serve a functional purpose. Moral values help bind us together and facilitate social cohesion (Graham et al., 2009), cooperation, and the strengthening of ingroup bonds (Curry, 2016; Curry et al., 2019). Concern about declining morality (believing that morally relevant things have gotten worse in society over time) could be viewed as concern for maintaining those values that help keep society intact and functioning healthily. To “rest on our laurels” when it comes to being vigilant for moral decline may be unappealing, and people who try to claim that we are doing great, morally speaking, may be viewed as suspect, or not caring as much about our moral values.

Sunday, February 12, 2023

The scientific study of consciousness cannot, and should not, be morally neutral

Mazor, M., Brown, S., et al. (2021, November 12). 
Perspectives on Psychological Science.
Advance online publication.

Abstract

A target question for the scientific study of consciousness is how dimensions of consciousness, such as the ability to feel pain and pleasure or reflect on one’s own experience, vary in different states and animal species. Considering the tight link between consciousness and moral status, answers to these questions have implications for law and ethics. Here we point out that given this link, the scientific community studying consciousness may face implicit pressure to carry out certain research programmes or interpret results in ways that justify current norms rather than challenge them. We show that since consciousness largely determines moral status, the use of non-human animals in the scientific study of consciousness introduces a direct conflict between scientific relevance and ethics – the more scientifically valuable an animal model is for studying consciousness, the more difficult it becomes to ethically justify compromises to its well-being for consciousness research. Lastly, in light of these considerations, we call for a discussion of the immediate ethical corollaries of the body of knowledge that has accumulated, and for a more explicit consideration of the role of ideology and ethics in the scientific study of consciousness.

Here is how the article ends:

Finally, we believe consciousness researchers, including those working only with consenting humans, should take an active role in the ethical discussion about these issues, including the use of animal models for the study of consciousness. As the field that studies consciousness, it has a responsibility to lead the way on these ethical questions and to make strong statements when such statements are justified by empirical findings. Recent examples include discussions of the ethical ramifications of neuronal signs of fetal consciousness (Lagercrantz, 2014) and a consolidation of evidence for consciousness in vertebrate animals, with a focus on livestock species, ordered by the European Food Safety Authority (Le Neindre et al., 2017). In these cases, the science of consciousness provided empirical evidence to bear on whether a fetus or a livestock animal is conscious. The question of animal models of consciousness is simpler because the presence of consciousness is a prerequisite for the model to be valid. Here, researchers can skip the difficult question of whether the entity is indeed conscious and directly ask, “Do we believe that consciousness, or some specific form or dimension of consciousness, entails moral status?”

It is useful to remind ourselves that ethical beliefs and practices are dynamic: things that were considered acceptable in the past are no longer acceptable today. A relatively recent change concerns the status of nonhuman great apes (gorillas, bonobos, chimpanzees, and orangutans), such that research on great apes is now banned in some countries, including all European Union member states and New Zealand. In these countries, drilling a hole in a chimpanzee’s head, keeping chimpanzees in isolation, or restricting their access to drinking water is forbidden by law. Which differences between animals make some practices acceptable for some species and not for others is a fundamental question of the utmost importance. If consciousness is a determinant of moral status, consciousness researchers have a responsibility to take an active part in this discussion, by providing scientific observations that either justify current ethical standards or induce the scientific and legal communities to revise them.

Saturday, February 11, 2023

Countertransference awareness and treatment outcome

Abargil, M., & Tishby, O. (2022). 
Journal of Counseling Psychology,
69(5), 667–677.
https://doi.org/10.1037/cou0000620

Abstract

Countertransference (CT) is considered a central component in the therapy process. Research has shown that CT management does not reduce the number of CT manifestations in therapy, but it leads to better therapy outcomes. In this study, we examined therapists' awareness of their CT using a structured interview. Our hypotheses were (a) treatments in which therapists were more aware of their CT would have a better outcome and (b) different definitions of CT would be related to different therapy outcomes. Twenty-nine patients were treated by 19 therapists in 16 sessions of short-term psychodynamic therapy. We used the core conflictual relationship theme to measure CT; a special interview was developed to study CT awareness. Results show that awareness of CT defined as the relationship with the patient moderated 10 outcome measures, and awareness of CT defined as the relationship with the patient that repeats therapist conflicts with significant others moderated three outcome measures. We present examples from dyads in this study and discuss how awareness can help the therapist talk to and handle patient challenges.

From the Discussion section

Increased therapist awareness of CT facilitates improvement in patient symptoms, emotion regulation, and affiliation in relationships. Since awareness is an integral part of CT management, these findings are consistent with Hayes' (2018) results regarding the importance of CT management and its contribution to treatment outcome. Moreover, therapists' self-awareness was found to be important in treating minorities (Baker, 1999). This study expands the ecological validity of therapist awareness and shows that therapists' awareness of their own wishes in therapy, as well as their perception of themselves and the patient, is relevant to the general population as well. Thus, therapists of all theoretical orientations are encouraged to attend to their personal conflicts and to monitor their reactions to patients as a routine part of effective clinical practice. Moreover, therapist awareness has been found in the past to lead to less therapist self-confidence, but to better treatment outcomes (Williams, 2008). Our clinical examples illustrate these findings (the therapist who had high awareness showed much more self-doubt), and the results of the multilevel regression analysis demonstrate better improvement for patients whose therapists were highly aware. Interestingly, the IIP control dimension was not found to be related to the therapist's awareness of CT. It may be that since this dimension relates to the patient's need for control, the awareness of transference is more important. Another possibility is that the patient's experience of the therapist as “knowing” may actually increase their control needs. Moreover, regarding the patient's main TC, we found only a trend and not a significant interaction. One reason may be the sample size. Another explanation is that patients do not necessarily link the changes in their lives to the relationship with the therapist and the insights associated with it. Thus, although awareness of CT helps to improve other outcome measures, it is not related to the way patients feel about the reason they sought out treatment.

A recent study of CT found that negative types of CT were correlated with more ruptures and less repair in the alliance. For positive CT the picture is more complex: positive patterns predicted resolution when the therapists repeated positive patterns with parents, but predicted ruptures when they tried to “repair” negative patterns with the parents (Tishby & Wiseman, 2020). The authors suggest that awareness of CT will help therapists pay more attention to ruptures during treatment so that they can address them and initiate resolution processes. Our findings support the authors' suggestion. The clinical example demonstrates that when the therapist was aware of negative CT and was able to talk about it in the awareness interview, he was also able to address the difficult feelings that arose during a session with the patient. Moreover, the outcomes in these treatments were better, which characterizes treatments with proper repair processes.

Friday, February 10, 2023

Individual differences in (dis)honesty are represented in the brain's functional connectivity at rest

Speer, S. P., Smidts, A., & Boksem, M. A. (2022).
NeuroImage, 246, 118761.
https://doi.org/10.1016/j.neuroimage.2021.118761

Abstract

Measurement of the determinants of socially undesirable behaviors, such as dishonesty, is complicated and obscured by social desirability biases. To circumvent these biases, we used connectome-based predictive modeling (CPM) on resting state functional connectivity patterns in combination with a novel task which inconspicuously measures voluntary cheating to gain access to the neurocognitive determinants of (dis)honesty. Specifically, we investigated whether task-independent neural patterns within the brain at rest could be used to predict a propensity for (dis)honest behavior. Our analyses revealed that functional connectivity, especially between brain networks linked to self-referential thinking (vmPFC, temporal poles, and PCC) and reward processing (caudate nucleus), reliably correlates, in an independent sample, with participants’ propensity to cheat. Participants who cheated the most also scored highest on several self-report measures of impulsivity, which underscores the generalizability of our results. Notably, when comparing neural and self-report measures, the neural measures were found to be more important in predicting cheating propensity.

Significance statement

Dishonesty pervades all aspects of life and causes enormous economic losses. However, because the underlying mechanisms of socially undesirable behaviors are difficult to measure, the neurocognitive determinants of individual differences in dishonesty largely remain unknown. Here, we apply machine-learning methods to stable patterns of neural connectivity to investigate how dispositions toward (dis)honesty, measured by an innovative behavioral task, are encoded in the brain. We found that stronger connectivity between brain regions associated with self-referential thinking and reward are predictive of the propensity to be honest. The high predictive accuracy of our machine-learning models, combined with the reliable nature of resting-state functional connectivity, which is uncontaminated by the social-desirability biases to which self-report measures are susceptible, provides an excellent avenue for the development of useful neuroimaging-based biomarkers of socially undesirable behaviors.

Discussion

Employing connectome-based predictive modeling (CPM) in combination with the innovative Spot-The-Differences task, which allows for inconspicuously measuring cheating, we identified a functional connectome that reliably predicts a disposition toward (dis)honesty in an independent sample. We observed a Pearson correlation between out-of-sample predicted and actual cheat counts (r = 0.40) that falls at the higher end of the typical range of correlations (between r = 0.2 and r = 0.5) reported in previous studies employing CPM (Shen et al., 2017). Thus, functional connectivity within the brain at rest predicts whether someone is more honest or more inclined to cheat in our task.

In light of previous research on moral decisions, the regions we identified in our resting state analysis can be associated with two networks frequently found to be involved in moral decision making. First, the vmPFC, the bilateral temporal poles and the PCC have consistently been associated with self-referential thinking. For example, it has been found that functional connectivity between these areas during rest is associated with higher-level metacognitive operations such as self-reflection, introspection and self-awareness (Gusnard et al., 2001; Meffert et al., 2013; Northoff et al., 2006; Vanhaudenhuyse et al., 2011). Secondly, the caudate nucleus, which has been found to be involved in anticipation and valuation of rewards (Ballard and Knutson, 2009; Knutson et al., 2001) can be considered an important node in the reward network (Bartra et al., 2013). Participants with higher levels of activation in the reward network, in anticipation of rewards, have previously been found to indeed be more dishonest (Abe and Greene, 2014).
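For readers unfamiliar with CPM, the following is a schematic sketch of the pipeline the authors reference (Shen et al., 2017): select connectivity edges that correlate with behavior in training data, summarize them per subject, fit a linear model, and evaluate out-of-sample. The function names, the threshold, and the positive-edges-only choice are illustrative assumptions, not the authors' actual code.

```python
# Schematic sketch of connectome-based predictive modeling (CPM), as
# outlined by Shen et al. (2017). Thresholds and names are illustrative.

import numpy as np
from scipy.stats import pearsonr

def cpm_train(conn_train, behavior_train, p_thresh=0.01):
    """Select edges whose strength correlates with behavior, fit a line.

    conn_train: (n_subjects, n_edges) resting-state connectivity values.
    behavior_train: (n_subjects,) behavioral scores, e.g., cheat counts.
    """
    n_edges = conn_train.shape[1]
    r = np.empty(n_edges)
    p = np.empty(n_edges)
    for j in range(n_edges):
        r[j], p[j] = pearsonr(conn_train[:, j], behavior_train)
    pos_mask = (p < p_thresh) & (r > 0)  # keep positively predictive edges
    # Summary score: sum of the selected edge strengths per subject.
    scores = conn_train[:, pos_mask].sum(axis=1)
    slope, intercept = np.polyfit(scores, behavior_train, 1)
    return pos_mask, slope, intercept

def cpm_predict(conn_test, pos_mask, slope, intercept):
    """Apply the trained edge mask and linear model to unseen subjects."""
    scores = conn_test[:, pos_mask].sum(axis=1)
    return slope * scores + intercept

# Out-of-sample evaluation mirrors the r = 0.40 the paper reports:
#   mask, slope, intercept = cpm_train(conn_train, behavior_train)
#   predicted = cpm_predict(conn_holdout, mask, slope, intercept)
#   r, _ = pearsonr(predicted, behavior_holdout)
```

The crucial property is that the edge selection and model fit use only training subjects, so the reported correlation reflects genuine generalization to an independent sample.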

Thursday, February 9, 2023

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Sætra, H.S., Danaher, J. 
Philos. Technol. 35, 93 (2022).
https://doi.org/10.1007/s13347-022-00591-7

Abstract

Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

From the Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics, and argued that there is much overlap, and much confusion relating to the demarcation of different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to be data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not all issues related to the moral status of machines or social AI in general.
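One way to picture the authors' point is as a tree in which lower-level domain ethics nest under higher-level ones, so that questions already answered at a higher level need not be re-litigated below. The nesting below is an illustrative reading of the examples in this excerpt (AI ethics under computer ethics, robot ethics under AI ethics), not the paper's actual hierarchy.

```python
# Illustrative nesting of tech-ethics domains; the structure is a reading
# of the examples in the excerpt, not the paper's published hierarchy.

TECH_ETHICS_HIERARCHY = {
    "ethics": {
        "technology ethics": {
            "computer ethics": {
                "data ethics": {},
                "AI ethics": {
                    "robot ethics": {},   # questions unique to embodied AI
                    "machine ethics": {},
                },
            },
        },
    },
}

def lookup_path(tree, target, path=()):
    """Return the chain of higher-level domains above a given domain."""
    for name, children in tree.items():
        if name == target:
            return path + (name,)
        found = lookup_path(children, target, path + (name,))
        if found:
            return found
    return None

print(lookup_path(TECH_ETHICS_HIERARCHY, "robot ethics"))
# ('ethics', 'technology ethics', 'computer ethics', 'AI ethics', 'robot ethics')
```

On this picture, the authors' two principles amount to routing each ethical question to the highest node that already covers it, leaving only the genuinely domain-exclusive questions to the leaves.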

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.

Wednesday, February 8, 2023

AI in the hands of imperfect users

Kostick-Quenet, K.M., Gerke, S. 
npj Digit. Med. 5, 197 (2022). 
https://doi.org/10.1038/s41746-022-00737-z

Abstract

As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has fallen to addressing potential bias among AI/ML’s human users or factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision making using AI/ML.

(cut)

Impacts of uncertainty and urgency on decision quality

Trust plays a particularly critical role when decisions are made in contexts of uncertainty. Uncertainty, of course, is a central feature of most clinical decision making, particularly for conditions (e.g., COVID-19) or treatments (e.g., deep brain stimulation or gene therapies) that lack a long history of observed outcomes. As Wang and Busemeyer (2021) describe, “uncertain” choice situations can be distinguished from “risky” ones in that risky decisions have a range of outcomes with known odds or probabilities. If we flip a coin, we know we have a 50% chance of landing on heads. However, to bet on heads comes with a high level of risk, specifically, a 50% chance of losing. Uncertain decision-making scenarios, on the other hand, have no well-known or agreed-upon outcome probabilities. This also makes uncertain decision-making contexts risky, but those risks are not sufficiently known to an extent that permits rational decision making. In information-scarce contexts, critical decisions are by necessity made using imperfect reasoning or “gap-filling heuristics” that can lead to several predictable cognitive biases. Individuals might defer to an authority figure (messenger bias, authority bias); they may look to see what others are doing (“bandwagon” and social norm effects); or they may make affective forecasting errors, projecting current emotional states onto their future selves. The perceived or actual urgency of clinical decisions can add further biases, like ambiguity aversion (a preference for known over unknown risks), deferral to the status quo or default, and loss aversion (weighing losses more heavily than gains of the same magnitude). These biases are intended to mitigate the risks of the unknown when fast decisions must be made, but they do not always get us closer to the “best” course of action that would be chosen if all possible information were available.
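The risk-versus-uncertainty distinction can be made concrete with a toy calculation: under risk the outcome probabilities are known, so an expected value is computable; under uncertainty they are not, so at best one can bound the expected value over a range of plausible probabilities. All numbers below are invented for illustration.

```python
# Toy illustration of the risk/uncertainty distinction described above;
# payoffs and probabilities are invented for illustration.

def expected_value(payoff_win, payoff_lose, p_win):
    """Risky choice: outcome odds are known, so the EV is computable."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose

# Risky bet: a fair coin with known 50% odds.
print(expected_value(payoff_win=10, payoff_lose=-10, p_win=0.5))  # -> 0.0

# Uncertain bet: p_win is unknown. The best we can do is bound the EV
# across whatever probabilities we consider plausible.
plausible_evs = [expected_value(10, -10, p) for p in (0.2, 0.5, 0.8)]
print(min(plausible_evs), max(plausible_evs))  # -> -6.0 6.0
```

The gap between the bounds in the uncertain case is exactly the space that gap-filling heuristics, for better or worse, rush in to close.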

(cut)

Conclusion

We echo others’ calls that before AI tools are “released into the wild,” we must better understand their outcomes and impacts in the hands of imperfect human actors by testing at least some of them according to a risk-based approach in clinical trials that reflect their intended use settings. We advance this proposal by drawing attention to the need to empirically identify and test how specific user biases and decision contexts shape how AI tools are used in practice and influence patient outcomes. We propose that value-sensitive design (VSD) can be used to strategize human-machine interfaces in ways that encourage critical reflection, mitigate bias, and reduce overreliance on AI systems in clinical decision making. We believe this approach can help to reduce some of the burdens on physicians to figure out on their own (with only basic training or knowledge about AI) the optimal role of AI tools in decision making by embedding a degree of bias mitigation directly into AI systems and interfaces.

Tuesday, February 7, 2023

UnitedHealthcare Tried to Deny Coverage to a Chronically Ill Patient. He Fought Back, Exposing the Insurer’s Inner Workings.

By D. Armstrong, P. Rucker, & M. Miller
ProPublica.org
Originally published 2 FEB 23

Here is an excerpt:

Insurers have wide discretion in crafting what is covered by their policies, beyond some basic services mandated by federal and state law. They often deny claims for services that they deem not “medically necessary.”

When United refused to pay for McNaughton's treatment for that reason, his family did something unusual. They fought back with a lawsuit, which uncovered a trove of materials, including internal emails and tape-recorded exchanges among company employees. Those records offer an extraordinary behind-the-scenes look at how one of America's leading health care insurers relentlessly fought to reduce spending on care, even as its profits rose to record levels.

As United reviewed McNaughton’s treatment, he and his family were often in the dark about what was happening or their rights. Meanwhile, United employees misrepresented critical findings and ignored warnings from doctors about the risks of altering McNaughton’s drug plan.

At one point, court records show, United inaccurately reported to Penn State and the family that McNaughton’s doctor had agreed to lower the doses of his medication. Another time, a doctor paid by United concluded that denying payments for McNaughton’s treatment could put his health at risk, but the company buried his report and did not consider its findings. The insurer did, however, consider a report submitted by a company doctor who rubber-stamped the recommendation of a United nurse to reject paying for the treatment.

United declined to answer specific questions about the case, even after McNaughton signed a release provided by the insurer to allow it to discuss details of his interactions with the company. United noted that it ultimately paid for all of McNaughton’s treatments. In a written response, United spokesperson Maria Gordon Shydlo wrote that the company’s guiding concern was McNaughton’s well-being.

“Mr. McNaughton’s treatment involves medication dosages that far exceed FDA guidelines,” the statement said. “In cases like this, we review treatment plans based on current clinical guidelines to help ensure patient safety.”

But the records reviewed by ProPublica show that United had another, equally urgent goal in dealing with McNaughton. In emails, officials calculated what McNaughton was costing them to keep his crippling disease at bay and how much they would save if they forced him to undergo a cheaper treatment that had already failed him. As the family pressed the company to back down, first through Penn State and then through a lawsuit, the United officials handling the case bristled.

Monday, February 6, 2023

How Far Is Too Far? Crossing Boundaries in Therapeutic Relationships

Gloria Umali
American Professional Agency
Risk Management Report
January 2023

While there appears to be a clear understanding of what constitutes a boundary violation, defining the boundary remains challenging, as the line can be ambiguous, with often no right or wrong answer. The APA Ethical Principles of Psychologists and Code of Conduct (2017) (“Ethics Code”) provides guidance on boundary and relationship questions to guide psychologists toward an ethical course of action. The Ethics Code states that relationships which give rise to the potential for exploitation or harm to the client, or those that impair objectivity in judgment, must be avoided.

Boundary crossing, if allowed to progress, may hurt both the therapist and the client. The good news is that a consensus exists among professionals in the mental health community that some boundary crossings are unquestionably helpful and therapeutic to clients. However, with no straightforward formula to delineate helpful boundaries from harmful or unhealthy ones, the resulting ‘grey area’ creates challenges for most psychologists. Examining the general public’s perception and understanding of what an unhealthy boundary crossing looks like may provide additional insight into the right ethical course of action, including the impact of boundary crossings on relationships on a case-by-case basis.

(cut)

Conclusion

Attaining and maintaining healthy boundaries is a goal that all psychologists should work toward while providing supportive therapy services to clients. Strong and consistent boundaries build trust and make therapy safe for both the client and the therapist. Building healthy boundaries not only promotes compliance with the Ethics Code, but also lets clients know you have their best interest in mind. In summation, while concerns for a client’s wellbeing can cloud judgement, the use of both the risk considerations above and the APA Ethical Principles of Psychologists and Code of Conduct can assist in clarifying the boundary line and help provide a safe and therapeutic environment for all parties involved.


A good risk management reminder for psychologists.