Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, December 18, 2020

Are Free Will Believers Nicer People? (Four Studies Suggest Not)

Crone, D. L., & Levy, N. L. (2019).
Social Psychological and Personality Science, 10(5), 612-619.
doi:10.1177/1948550618780732

Abstract

Free will is widely considered a foundational component of Western moral and legal codes, and yet current conceptions of free will are widely thought to fit uncomfortably with much research in psychology and neuroscience. Recent research investigating the consequences of laypeople’s free will beliefs (FWBs) for everyday moral behavior suggests that stronger FWBs are associated with various desirable moral characteristics (e.g., greater helpfulness, less dishonesty). These findings have sparked concern regarding the potential for moral degeneration throughout society as science promotes a view of human behavior that is widely perceived to undermine the notion of free will. We report four studies (combined N = 921) originally concerned with possible mediators and/or moderators of the abovementioned associations. Unexpectedly, we found no association between FWBs and moral behavior. Our findings suggest that the FWB–moral behavior association (and accompanying concerns regarding decreases in FWBs causing moral degeneration) may be overstated.


Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems to advance fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience, and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions
For more information on ethics in AI, download the report.

Wednesday, December 16, 2020

If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs

Anand Vaidya
The Conversation
Originally posted 27 Oct 20

Here is an excerpt:

There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.

In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.

Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.

Data is an android from "Star Trek: The Next Generation." How do these distinctions play out with respect to him?

The Data dilemma

The android Data demonstrates that he is self-aware in that he can monitor, for example, whether he is optimally charged or whether there is internal damage to his robotic arm.

Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.

He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.

However, Data most likely lacks phenomenal consciousness - he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness – can grab the pen – but across all his senses he lacks phenomenal consciousness.

Tuesday, December 15, 2020

(How) Do You Regret Killing One to Save Five? Affective and Cognitive Regret Differ After Utilitarian and Deontological Decisions

Goldstein-Greenwood, J., et al. (2020).
Personality and Social Psychology Bulletin, 46(9), 1303-1317.
doi:10.1177/0146167219897662

Abstract

Sacrificial moral dilemmas, in which opting to kill one person will save multiple others, are definitionally suboptimal: Someone dies either way. Decision-makers, then, may experience regret about these decisions. Past research distinguishes affective regret, negative feelings about a decision, from cognitive regret, thoughts about how a decision might have gone differently. Classic dual-process models of moral judgment suggest that affective processing drives characteristically deontological decisions to reject outcome-maximizing harm, whereas cognitive deliberation drives characteristically utilitarian decisions to endorse outcome-maximizing harm. Consistent with this model, we found that people who made or imagined making sacrificial utilitarian judgments reliably expressed relatively more affective regret and sometimes expressed relatively less cognitive regret than those who made or imagined making deontological dilemma judgments. In other words, people who endorsed causing harm to save lives generally felt more distressed about their decision, yet less inclined to change it, than people who rejected outcome-maximizing harm.

General Discussion

Across four studies, we found that different sacrificial moral dilemma decisions elicit different degrees of affective and cognitive regret. We found robust evidence that utilitarian decision-makers who accept outcome-maximizing harm experience far more affective regret than their deontological decision-making counterparts who reject outcome-maximizing harm, and we found somewhat weaker evidence that utilitarian decision-makers experience less cognitive regret than deontological decision-makers. The significant interaction between dilemma decision and regret type predicted in H1 emerged both when participants freely endorsed dilemma decisions (Studies 1, 3, and 4) and when they were randomly assigned to imagine making a decision (Study 2). Hence, the present findings cannot simply be attributed to chronic differences in the types of regret that people who prioritize each decision experience. Moreover, we found tentative evidence for H2: Focusing on the counterfactual world in which they made the alternative decision attenuated utilitarian decision-makers’ heightened affective regret compared with factual reflection, and reduced differences in affective regret between utilitarian and deontological decision-makers (Study 4). Furthermore, our findings do not appear attributable to impression management concerns, as there were no differences between public and private reports of regret.

Conspiracy Theorists May Really Just Be Lonely

Matthew Hutson
Scientific American
Originally posted 1 May 17

Conspiracy theorists are often portrayed as nutjobs, but some may just be lonely, recent studies suggest. Separate research has shown that social exclusion creates a feeling of meaninglessness and that the search for meaning leads people to perceive patterns in randomness. A new study in the March issue of the Journal of Experimental Social Psychology connects the dots, reporting that ostracism enhances superstition and belief in conspiracies.

In one experiment, people wrote about a recent unpleasant interaction with friends, then rated their feelings of exclusion, their search for purpose in life, their belief in two conspiracies (that the government uses subliminal messages and that drug companies withhold cures), and their faith in paranormal activity in the Bermuda Triangle. The more excluded people felt, the greater their desire for meaning and the more likely they were to harbor suspicions.

In a second experiment, college students were made to feel excluded or included by their peers, then read two scenarios suggestive of conspiracies (price-fixing, office sabotage) and one about a made-up good-luck ritual (stomping one's feet before a meeting). Those who were excluded reported a greater connection between behaviors and outcomes in the stories than those who were included did.


Monday, December 14, 2020

The COVID-19 era: How therapists can diminish burnout symptoms through self-care

Rokach, A., & Boulazreg, S. (2020).
Current Psychology, 1–18.
Advance online publication.

Abstract

COVID-19 is frightening, stress-inducing, and uncharted territory for all. It is suggested that stress, loneliness, and the emotional toll of the pandemic will result in increased numbers of those who will seek psychological intervention and need support and guidance on how to cope with a time period that none of us were prepared for. Psychologists, in general, are trained in and know how to help others. They are less effective at taking care of themselves so that they can be at their best in helping others. The article, which aims to heighten clinicians’ awareness of the need for self-care, especially now in the post-pandemic era, describes the demanding nature of psychotherapy and the initial resistance by therapists to engage in self-care, and outlines the consequences of neglecting to care for themselves. We covered the demanding nature of psychotherapy and its grinding trajectory, the loneliness and isolation felt by clinicians in private practice, the professional hazards faced by those caring for others, and the creative and insightful ways that mental health practitioners can care for themselves for the good of their clients, their families, and obviously, themselves.

Here is an excerpt:

Navigating Ethical Dilemmas

An important benefit of competence constellations is the aid they provide to clinicians facing challenging dilemmas in the therapy room. While numerous guidelines and recommendations based on a code of ethics exist, real-life situations often blur the line between what the professional wishes to do and the recommended ethical action that best respects the client's sovereignty. Simply put, “no code of ethics provides a blueprint for resolving all ethical issues, nor does the avoidance of violations always equate with ideal ethical practice, but codes represent the best judgment of one’s peers about common problems and shared professional values” (Welfel, 2015, p. 10).

As the literature asserts, even in the face of colleagues acting unethically or below thresholds of competence, psychologists do not feel comfortable directly approaching their coworkers: they are concerned about harming their colleagues’ reputation, concerned that the regulatory board may punish their colleague too harshly, or concerned that reporting a colleague to the regulatory board will lead to their being ostracized by their colleagues (Barnett, 2008; Bernard, Murphy, & Little, 1987; Johnson et al., 2012; Smith & Moss, 2009).

Thus, a constellation network allows a mental health professional to provide feedback without fear of these potential repercussions. Whether the feedback is couched as friendly advice or given anonymously, these peer networks would allow therapists to exchange information knowingly and allow constructive criticism to be taken non-judgmentally.

Should you save the more useful? The effect of generality on moral judgments about rescue and indirect effects

Caviola, L., Schubert, S., & Mogensen, A. 
(2020, October 23). 

Abstract

Across eight experiments (N = 2,310), we studied whether people would prioritize rescuing individuals who may be thought to contribute more to society. We found that participants were generally dismissive of general rules that prioritize more socially beneficial individuals, such as doctors instead of unemployed people. By contrast, participants were more supportive of one-off decisions to save the life of a more socially beneficial individual, even when such cases were the same as those covered by the rule. This generality effect occurred robustly even when controlling for various factors. It occurred when the decision-maker was the same in both cases, when the pairs of people differing in the extent of their indirect social utility were varied, when the scenarios were varied, when the participant samples came from different countries, and when the general rule only covered cases that are exactly the same as the situation described in the one-off condition. The effect occurred even when the general rule was introduced via a concrete precedent case. Participants’ tendency to be more supportive of the one-off proposal than the general rule was significantly reduced when they evaluated the two proposals jointly as opposed to separately. Finally, the effect also occurred in sacrificial moral dilemmas, suggesting it is a more general phenomenon in certain moral contexts. We discuss possible explanations of the effect, including concerns about negative consequences of the rule and a deontological aversion to making difficult trade-off decisions unless they are absolutely necessary.

General Discussion

Across our studies we found evidence for a generality effect: participants were more supportive of a proposal to prioritize people who are more beneficial to society than others if it applies to a concrete one-off situation than if it describes a general rule. The effect showed up robustly even when controlling for various factors. It occurred even when the decision-maker was the same in both cases (Study 2), when the pairs of people differing in the extent of their indirect social utility were varied (Study 3), when the scenarios were varied (Studies 3 and 6), when the participant samples came from different countries (Study 3), and when the rule only covered cases that are exactly the same as the one-off case (Study 6). The effect also occurred when the general rule was introduced via a concrete precedent case (Studies 4 and 6). The tendency to be more supportive of the one-off proposal than the general rule was significantly reduced when participants evaluated the two proposals jointly as opposed to separately (Study 7). Finally, we found that the effect also occurs in sacrificial moral dilemmas (Study 8), suggesting that it is a more general phenomenon in moral contexts.

Sunday, December 13, 2020

Polarization and extremism emerge from rational choice

Kvam, P. D., & Baldwin, M. 
(2020, October 21).

Abstract

Polarization is often thought to be the product of biased information search, motivated reasoning, or other psychological biases. However, polarization and extremism can still occur in the absence of any bias or irrational thinking. In this paper, we show that polarization occurs among groups of decision makers who are implementing rational choice strategies that maximize decision efficiency. This occurs because extreme information enables decision makers to make up their minds and stop considering new information, whereas moderate information is unlikely to trigger a decision. Furthermore, groups of decision makers will generate extremists -- individuals who hold strong views despite being uninformed and impulsive. In re-analyses of seven previous empirical studies on both perceptual and preferential choice, we show that both polarization and extremism manifest across a wide variety of choice paradigms. We conclude by offering theoretically-motivated interventions that could reduce polarization and extremism by altering the incentives people have when gathering information.

Conclusions

In a decision scenario that incentivizes a trade-off between time and decision quality, a population of rational decision makers will become polarized. In this paper, we have shown this through simulations and a mathematical proof (supplementary materials), and demonstrated it empirically in seven studies. This leads us to an unfortunate but unavoidable conclusion that decision making is a bias-inducing process by which participants gather representative information from their environment and, through the decision rules they implement, distort it toward the extremes. Such a process also generates extremists, who hold extreme views and carry undue influence over cultural discourse (Navarro et al., 2018) despite being relatively uninformed and impulsive (low thresholds; Kim & Lee, 2011). We have suggested several avenues for intervention, foremost among them providing incentives favoring estimation or judgments as opposed to incentives for timely decision making. Our hope is that future work testing and implementing these interventions will reduce the prevalence of polarization and extremism across social domains currently occupied by decision makers.
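The mechanism described above can be illustrated with a small, self-contained simulation: an agent keeps sampling unbiased evidence and commits to a judgment only once the accumulated evidence crosses a threshold, so only extreme runs of evidence end deliberation. This is a hypothetical Python sketch of that general idea, not the authors' model or code; the threshold value, the sampling distribution, and the 0.5 cutoff used to count "extreme" reports are illustrative assumptions.

```python
import random

def simulate_agent(true_mean=0.0, threshold=3.0, max_samples=200):
    """Accumulate noisy evidence until it crosses +/- threshold, then stop
    and report a judgment based only on the evidence seen so far.
    Moderate evidence rarely triggers a stop; extreme runs do."""
    total = 0.0
    for n in range(1, max_samples + 1):
        total += random.gauss(true_mean, 1.0)  # unbiased samples, no biased search
        if abs(total) >= threshold:            # stopping rule: decide at a bound
            return total / n                   # estimate reported at the moment of decision
    return total / max_samples                 # never decided: report the plain average

# A population of identical decision makers sampling from the same unbiased source.
estimates = [simulate_agent() for _ in range(10_000)]

# The evidence is centered on 0, but reported judgments cluster away from 0,
# because only extreme accumulated totals trigger a decision.
share_extreme = sum(abs(e) > 0.5 for e in estimates) / len(estimates)
print(f"Share of agents reporting a relatively extreme view: {share_extreme:.2f}")
```

In a sketch like this, the reported judgments are bimodal even though every sample comes from the same unbiased source, which is the sense in which the stopping rule itself, rather than biased information search, produces polarization; rewarding accuracy over speed (a higher threshold, or incentives for estimation rather than timely decisions) pulls the reports back toward the truth.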

Saturday, December 12, 2020

‘All You Want Is to Be Believed’: The Impacts of Unconscious Bias in Health Care

April Dembosky
KHN.com
Originally published 21 Oct 20

Here is an excerpt:

Research shows how doctors’ unconscious bias affects the care people receive, with Latino and Black patients being less likely to receive pain medications or get referred for advanced care than white patients with the same complaints or symptoms, and more likely to die in childbirth from preventable complications.

In the hospital that day in May, Monterroso was feeling woozy and having trouble communicating, so she had a friend and her friend’s cousin, a cardiac nurse, on the phone to help. They started asking questions: What about Karla’s accelerated heart rate? Her low oxygen levels? Why are her lips blue?

The doctor walked out of the room. He refused to care for Monterroso while her friends were on the phone, she said, and when he came back, the only thing he wanted to talk about was Monterroso’s tone and her friends’ tone.

“The implication was that we were insubordinate,” Monterroso said.

She told the doctor she didn’t want to talk about her tone. She wanted to talk about her health care. She was worried about possible blood clots in her leg and she asked for a CT scan.

“Well, you know, the CT scan is radiation right next to your breast tissue. Do you want to get breast cancer?” Monterroso recalled the doctor saying to her. “I only feel comfortable giving you that test if you say that you’re fine getting breast cancer.”

Monterroso thought to herself, “Swallow it up, Karla. You need to be well.” And so she said to the doctor: “I’m fine getting breast cancer.”

He never ordered the test.

Monterroso asked for a different doctor, for a hospital advocate. No and no, she was told. She began to worry about her safety. She wanted to get out of there. Her friends, all calling every medical professional they knew to confirm that this treatment was not right, came to pick her up and drove her to the University of California-San Francisco. The team there gave her an EKG, a chest X-ray and a CT scan.