Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, February 28, 2023

Transformative experience and the right to revelatory autonomy

Farbod Akhlaghi
Originally published 31 December 2022


Sometimes it is not us but those to whom we stand in special relations that face transformative choices: our friends, family or beloved. A focus upon first-personal rational choice and agency has left crucial ethical questions regarding what we owe to those who face transformative choices largely unexplored. In this paper I ask: under what conditions, if any, is it morally permissible to interfere to try to prevent another from making a transformative choice? Some seemingly plausible answers to this question fail precisely because they concern transformative experiences. I argue that we have a distinctive moral right to revelatory autonomy grounded in the value of autonomous self-making. If this right is outweighed then, I argue, interfering to prevent another making a transformative choice is permissible. This conditional answer lays the groundwork for a promising ethics of transformative experience.


Ethical questions regarding transformative experiences are morally urgent. A complete answer to our question requires ascertaining precisely how strong the right to revelatory autonomy is and what competing considerations can outweigh it. These are questions for another time, where the moral significance of revelation and self-making, the competing weight of moral and non-moral considerations, and the sense in which some transformative choices are more significant to one’s identity and self-making than others must be further explored.

But to identify the right to revelatory autonomy and duty of revelatory non-interference is significant progress. For it provides a framework to address the ethics of transformative experience that avoids complications arising from the epistemic peculiarities of transformative experiences. It also allows us to explain cases where we are permitted to interfere in another’s transformative choice and why interference in some choices is harder to justify than others, whilst recognizing plausible grounds for the right to revelatory autonomy itself in the moral value of autonomous self-making. This framework, moreover, opens novel avenues of engagement with wider ethical issues regarding transformative experience, for example concerning social justice or surrogate transformative choice-making. It is, at the very least, a view worthy of further consideration.

This reasoning applies to psychologists in psychotherapy. Unless significant danger is present, psychologists should avoid intrusive advocacy, that is, pulling autonomy away from the patient. Soft paternalism may be warranted in psychotherapy only when trying to avert significant harm.

Monday, February 27, 2023

Domestic violence hotline calls will soon be invisible on your family phone plan

Ashley Belanger
Ars Technica
Originally published 17 FEB 23

Today, the Federal Communications Commission proposed rules to implement the Safe Connections Act, which President Joe Biden signed into law last December. Advocates consider the law a landmark move to stop tech abuse. Under the law, mobile service providers are required to help survivors of domestic abuse and sexual violence access resources and maintain critical lines of communication with friends, family, and support organizations.

Under the proposed rules, mobile service providers are required to separate a survivor’s line from a shared or family plan within two business days. Service providers must also “omit records of calls or text messages to certain hotlines from consumer-facing call and text message logs,” so that abusers cannot see when survivors are seeking help. Additionally, the FCC plans to launch a “Lifeline” program, providing emergency communications support for up to six months for survivors who can’t afford to pay for mobile services.

“These proposed rules would help survivors obtain separate service lines from shared accounts that include their abusers, protect the privacy of calls made by survivors to domestic abuse hotlines, and provide support for survivors who suffer from financial hardship through our affordability programs,” the FCC’s announcement said.

The FCC has already consulted with tech associations and domestic violence support organizations in forming the proposed rules, but now the public has a chance to comment. An FCC spokesperson confirmed to Ars that comments are open now. Crystal Justice, the National Domestic Violence Hotline’s chief external affairs officer, told Ars that it’s critical for survivors to submit comments to help inform FCC rules with their experiences of tech abuse.

To submit a comment, visit the FCC’s Electronic Comment Filing System and enter “22-238” as the proceeding number. That will auto-populate a field that says “Supporting Survivors of Domestic and Sexual Violence.”

FCC’s spokesperson told Ars that the initial public comment period will be open for 30 days after the rules are published in the Federal Register, and then a reply comment period will be open for 30 days after the initial comment period ends.

Sunday, February 26, 2023

Time pressure reduces misinformation discrimination ability but does not alter response bias

Sultan, M., Tump, A.N., Geers, M. et al. 
Sci Rep 12, 22416 (2022).


Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people’s ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants’ ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.


In this study, we investigated the impact of time pressure on people’s ability to judge the veracity of online misinformation in terms of (a) discrimination ability, (b) response bias, and (c) four key determinants of misinformation susceptibility (i.e., analytical thinking, ideological congruency, motivated reflection, and familiarity). We found that time pressure reduced discrimination ability but did not alter the—already present—negative response bias (i.e., general tendency to evaluate news as false). Moreover, the associations observed for the four determinants of misinformation susceptibility were largely stable across treatments, with the exception that the positive effect of familiarity on response bias (i.e., response tendency to treat familiar news as true) was slightly reduced under time pressure. We discuss each of these findings in more detail next.
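The signal detection decomposition the authors rely on can be sketched numerically. Discrimination ability is d′ = z(H) − z(FA) and response bias is c = −½[z(H) + z(FA)], where H is the hit rate (calling true news "true") and FA is the false-alarm rate (calling false news "true"); c > 0 indicates a conservative tendency to call headlines false. The rates below are illustrative, not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Signal detection measures for a true/false news judgement task.

    hit_rate:          P(respond "true" | headline is true)
    false_alarm_rate:  P(respond "true" | headline is false)
    """
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(hit_rate) - z(false_alarm_rate)             # discrimination ability
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # response bias
    return d_prime, criterion

# Hypothetical rates for a somewhat conservative responder
d, c = sdt_measures(hit_rate=0.60, false_alarm_rate=0.30)
# d > 0: some ability to tell true from false; c > 0: bias toward "false"
```

Note that the two measures are independent: time pressure can lower d′ (hit and false-alarm rates move closer together) while leaving c, the overall tendency to say "false," unchanged, which is exactly the pattern the study reports.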

As predicted, we found that time pressure reduced discrimination ability: Participants under time pressure were less able to distinguish between true and false news. These results corroborate earlier work on the speed–accuracy trade-off, and indicate that fast-paced news consumption on social media is likely leading to people misjudging the veracity of not only false news, as seen in the study by Bago and colleagues, but also true news. Like in their paper, we stress that interventions aimed at mitigating misinformation should target this phenomenon and seek to improve veracity judgements by encouraging deliberation. It will also be important to follow up on these findings by examining whether time pressure has a similar effect in the context of news items that have been subject to interventions such as debunking.

Our results for the response bias showed that participants had a general tendency to evaluate news headlines as false (i.e., a negative response bias); this effect was similarly strong across the two treatments. From the perspective of the individual decision maker, this response bias could reflect a preference to avoid one type of error over another (i.e., avoiding accepting false news as true more than rejecting true news as false) and/or an overall expectation that false news is more prevalent than true news in our experiment. Note that the ratio of true versus false news we used (1:1) is different from the real world, which typically is thought to contain a much smaller fraction of false news. A more ecologically valid experiment with a more representative sample could yield a different response bias. It will, thus, be important for future studies to assess whether participants hold such a bias in the real world, are conscious of this response tendency, and whether it translates into (in)accurate beliefs about the news itself.

Saturday, February 25, 2023

Five Steps to Get Students Thinking About Ethics

Karen Cotter, Laura Bond, & Lauren Fullmer
The Greater Good Science Center
Originally posted 22 FEB 23

Here is an excerpt and the 5 steps:

Five steps for ethical decision-making

Teaching ethical thinking aligns with the mission you may have as an educator to promote global citizenship. “Being a global citizen means understanding that global ideas and solutions must still fit the complexities of local contexts and cultures, and meet each community’s specific needs and capacities,” explains AFS-USA. While investigating real-world problems from many perspectives, students gain an appreciation for many sides of an issue and avoid the pitfall of simply reinforcing their preexisting attitudes.

Ethical thinking also enriches social-emotional learning. According to researchers Michael D. Burroughs and Nikolaus J. Barkauskas, “By focusing on social, emotional, and ethical literacy in schools educators can contribute to the development of persons with greater self-awareness, emotional understanding and, in turn, the capability to act ethically and successfully interact with others in a democratic society.” The five steps below serve as a seamless way to integrate ethical decision making into a science or STEM class.

These steps come from our Prosocial Design Process for Ethical Decision-Making, which itself is a synthesis of three frameworks: prosocial education (which focuses on promoting emotional, social, moral, and civic capacities that express character in students), the Engineering Design Process (an open-ended problem-solving practice that encourages growth from failure), and the IDEA Ethical Decision-Making Framework. This process offers a way for students to come up with creative solutions to a problem and bring ethical consideration to global issues.

1. Ask questions to identify the issue.
2. Consider the perspectives of people impacted to brainstorm solutions. 
3. Analyze research to design and test solutions. 
4. Evaluate and iterate for an ethically justifiable solution.
5. Communicate findings to all relevant stakeholders. 


This ethical framework guides students to think beyond themselves to identify solutions that impact their community. The added SEL (social-emotional learning) benefits of self-reflection, social awareness, relationship skills, and appreciation of the world around them awaken students’ consciousness of core ethical values, equipping them to make decisions for the greater good. Using prosocial science topics like climate change empowers students to engage in relevant, real-world content to create a more equitable, sustainable, and just world where they experience how their humanity can impact the greater good.

Friday, February 24, 2023

What Do We Owe Lab Animals?

Brandon Keim
The New York Times
Originally published 24 Jan 23

Here is an excerpt:

Scientists often point to the so-called Three Rs, a set of principles first articulated in 1959 by William Russell, a sociologist, and Rex Burch, a microbiologist, to guide experimental research on animals. Researchers are encouraged to replace animals when alternatives are available, reduce the number of animals used and refine their use so as to minimize the infliction of pain and suffering.

These are unquestionably noble aims, ethicists note, but may seem insufficient when compared with the benefits derived from animals. Covid vaccines, for example, which were tested on mice and monkeys, and developed so quickly thanks to decades of animal-based work on mRNA vaccine technology, saved an estimated 20 million lives in their first year of use and earned tens of billions of dollars in revenues.

In light of that dynamic — which applies not only to Covid vaccines, but to many other human lifesaving, fortune-generating therapeutics — some wonder if a fourth R might be warranted: repayment.

Inklings of the idea of repayment can already be found in the research community, most visibly in laboratories that make arrangements for animals — primarily monkeys and other nonhuman primates — to be retired to sanctuaries. In the case of dogs and companion species, including rats, they are sometimes adopted as pets.

“It’s kind of karma,” said Laura Conour, the executive director of Laboratory Animal Resources at Princeton University, which has a retirement arrangement with the Peaceable Primate Sanctuary. “I feel like it balances it out a little bit.” The school has also adopted out guinea pigs, anole lizards and sugar gliders as pets to private citizens, and tries to help with their veterinary care.

Adoption is not an option for animals destined to be killed, however, which raises the question of how the debt can be repaid. Lesley Sharp, a medical anthropologist at Barnard College and author of “Animal Ethos: The Morality of Human-Animal Encounters in Experimental Lab Science,” noted that research labs sometimes create memorials for animals: commemorative plaques, bulletin boards with pictures and poems and informal gatherings in remembrance.

“There is this burden the animal has to carry for humans in the context of science,” Dr. Sharp said. “They require, I think, respect, and to be recognized and honored and mourned.”

She acknowledged that honoring sacrificed animals was not quite the same as giving something back to them. To imagine what that might entail, Dr. Sharp pointed to the practice of donating one’s organs after death. Transplant recipients often want to give something in return, “but the donor is dead,” Dr. Sharp said. “Then you need somebody who is a sort of proxy for them, and that proxy is the close surviving kin.”

If someone receives a cornea or a heart from a pig — or funding to study those procedures — then they might pay for the care of another pig at a farmed animal sanctuary, Dr. Sharp proposed: “You’re going to have animals who stand in for the whole.”

Thursday, February 23, 2023

Moral foundations partially explain the association of the Dark Triad traits with homophobia and transphobia

Kay, C. S., & Dimakis, S. M. (2022, June 24). 


People with antagonistic personality traits are reportedly more racist, sexist, and xenophobic than their non-antagonistic counterparts. In the present studies (N1 = 718; N2 = 267), we examined whether people with antagonistic personality traits are also more likely to hold homophobic and transphobic attitudes, and, if they are, whether this can be explained by their moral intuitions. We found that people high in Machiavellianism, narcissism, and psychopathy are more likely to endorse homophobic and transphobic views. The associations of Machiavellianism and psychopathy with homophobia and transphobia were primarily explained by low endorsement of individualizing moral foundations (i.e., care and fairness), while the association of narcissism with these beliefs was primarily explained by high endorsement of the binding moral foundations (i.e., loyalty, authority, and sanctity). These findings provide insight into the types of people who harbour homophobic and transphobic attitudes and how differences in moral dispositions contribute to their LGBTQ+ prejudice.

General discussion

We conducted two studies to test whether those with antagonistic personality traits (e.g., Machiavellianism, grandiose narcissism, and psychopathy) are more likely to express homonegative and transphobic views, and, if so, whether this is because of their moral intuitions. Study 1 used a convenience sample of 718 undergraduate students drawn from a university Human Subjects Pool. It was exploratory, in the sense that we specified no formal hypotheses. That said, we suspected that those with antagonistic personality traits would be more likely to hold homonegative and transphobic attitudes and that they may do so because they dismiss individualizing morals concerns (e.g., do no harm; treat others fairly). At the same time, we suspected that those with antagonistic personality traits would also deemphasize the binding moral foundations (e.g., be loyal to your ingroup; respect authority; avoid contaminants, even those that are metaphysical), weakening any observed associations between the antagonistic personality traits and LGBTQ+ prejudice. The purpose of Study 2 was to examine whether the findings identified in Study 1 would generalize beyond a sample of undergraduate students. Since we had no reason to suspect the results would differ between Study 1 and Study 2, our preregistered hypotheses for Study 2 were that we would observe the same pattern of results identified in Study 1.

There was clear evidence across both studies that those high in the three antagonistic personality traits were more likely to endorse statements that were reflective of traditional homonegativity, modern homonegativity, general genderism/transphobia, and gender-bashing. All of these associations were moderate-to-large in magnitude (Funder & Ozer, 2019), save for the association between narcissism and traditional homonegativity in Study 1. These results indicate that, on top of harbouring racist (Jones, 2013), xenophobic (Hodson et al., 2009), and sexist (Gluck et al., 2020) attitudes, those high in antagonistic personality traits also harbour homonegative and transphobic attitudes.
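The abstract's claim that an association is "primarily explained by" a moral foundation refers to statistical mediation. A minimal sketch of how an indirect effect is computed, on synthetic data (the variable names and coefficients below are invented for illustration and are not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data loosely mimicking the design:
#   x = antagonistic trait score (e.g. psychopathy)
#   m = mediator (e.g. endorsement of individualizing foundations)
#   y = outcome (e.g. transphobia score)
x = rng.normal(size=n)
m = -0.5 * x + rng.normal(size=n)           # antagonism -> lower care/fairness
y = 0.1 * x - 0.6 * m + rng.normal(size=n)  # lower care/fairness -> more prejudice

def ols_slopes(X, y):
    """Least-squares slope coefficients for y ~ X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

a = ols_slopes(x, m)[0]                        # path X -> M
b = ols_slopes(np.column_stack([x, m]), y)[1]  # path M -> Y, controlling for X
indirect = a * b  # the mediated ("explained by") portion; ~0.3 by construction
```

Because a and b are both negative here, the indirect effect is positive: the trait predicts prejudice partly through lowered endorsement of care and fairness, which mirrors the pattern the authors describe for Machiavellianism and psychopathy.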

Wednesday, February 22, 2023

How and Why People Want to Be More Moral

Sun, J., Wilt, J. A., et al. (2022, October 13).


What types of moral improvements do people wish to make? Do they hope to become more good, or less bad? Do they wish to be more caring? More honest? More loyal? And why exactly do they want to become more moral? Presumably, most people want to improve their morality because this would benefit others, but is this in fact their primary motivation? Here, we begin to investigate these questions. Across two large, preregistered studies (N = 1,818), participants provided open-ended descriptions of one change they could make in order to become more moral; they then reported their beliefs about and motives for this change. In both studies, people most frequently expressed desires to improve their compassion and more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being. Together, these studies provide rich descriptive insights into how ordinary people want to be more moral, and show that they are particularly motivated to do so for their own sake.

From the General Discussion section

Self-Interest Is a Key Motivation for Moral Improvement

What motivates people to be more moral? From the perspective that the function of morality is to suppress selfishness for the benefit of others (Haidt & Kesebir, 2010; Wolf, 1982), we might expect people to believe that moral improvements would primarily benefit others (rather than themselves). By a similar logic, people should also primarily want to be more moral for the sake of others (rather than for their own sake).

Surprisingly, however, this was not overwhelmingly the case. Instead, across both studies, participants were approximately equally split between those who believed that others would benefit the most and those who believed that they themselves would benefit the most (with the exception of compassion; see Figure S2). The finding that people perceive some personal benefits to becoming more moral has been demonstrated in recent research (Sun & Berman, in prep). In light of evidence that moral people tend to be happier (Sun et al., in prep) and that the presence of moral struggles predicts symptoms of depression and anxiety (Exline et al., 2014), such beliefs might also be somewhat accurate. However, it is unclear why people believe that becoming more moral would benefit themselves more than it would others. Speculatively, one possibility is that people can more vividly imagine the impacts of their own actions on their own well-being, whereas they are much more uncertain about how their actions would affect others—especially when the impacts might be spread across many beneficiaries.

However, it is also possible that this finding only applies to self-selected moral improvements, rather than the universe of all possible moral improvements. That is, when asked what they could do to become more moral, people might more readily think of improvements that would improve their own well-being to a greater extent than the well-being of others. But, if we were to ask people to predict who would benefit the most from various moral improvements that were selected by researchers, people may be less likely to believe that it would be themselves. Future research should systematically study people’s evaluations of how various moral improvements would impact their own and others’ well-being.

Similarly, when explicitly asked for whose sake they were most motivated to make their moral improvement, almost half of the participants admitted that they were most motivated to change for their own sake (rather than for the sake of others). However, when predicting motivation from both the expected well-being consequences for the self and the well-being consequences for others, we found that people’s perceptions of personal well-being consequences was a significantly stronger predictor in both studies. In other words, if anything, people are relatively more motivated to make moral improvements for their own sake than for the sake of others. This is consistent with the findings of another study which examined people’s interest in changing a variety of moral and nonmoral traits, and showed that people are particularly interested in improving the traits that they believed would make them relatively happier (Sun & Berman, in prep). Here, it is striking that personal fulfilment remains the most important motivator of personal improvement even exclusively in the moral domain.

Tuesday, February 21, 2023

Motonormativity: How Social Norms Hide a Major Public Health Hazard

Walker, I., Tapp, A., & Davis, A.
(2022, December 14).


Decisions about motor transport, by individuals and policy-makers, show unconscious biases due to cultural assumptions about the role of private cars - a phenomenon we term motonormativity. To explore this claim, a national sample of 2157 UK adults rated, at random, a set of statements about driving (“People shouldn't drive in highly populated areas where other people have to breathe in the car fumes”) or a parallel set of statements with key words changed to shift context ("People shouldn't smoke in highly populated areas where other people have to breathe in the cigarette fumes"). Such context changes could radically alter responses (75% agreed with "People shouldn't smoke... " but only 17% agreed with "People shouldn't drive... "). We discuss how these biases systematically distort medical and policy decisions and give recommendations for how public policy and health professionals might begin to recognise and address these unconscious biases in their work.


Our survey showed that people can go from agreeing with a health or risk-related proposition to disagreeing with it simply depending on whether it is couched as a driving or non-driving issue. In the most dramatic case, survey respondents felt that obliging people to breathe toxic fumes went from being unacceptable to acceptable depending on whether the fumes came from cigarettes or motor vehicles. It is, objectively, nonsensical that the ethical and public health issues involved in forcing non-consenting people to inhale air-borne toxins should be judged differently depending on their source, but that is what happened here. It seems that normal judgement criteria can indeed be suspended in the specific context of motoring, as we suggested.

Obviously, we used questions in this study that we felt would stand a good chance of demonstrating a difference between how motoring and non-motoring issues were viewed. But choosing questions likely to reveal differences is not the same thing as stacking the deck. We gave the social bias every chance to reveal itself, but that could only happen because it was out there to be revealed. Prentice and Miller (1992) argue that the ease with which a behavioural phenomenon can be triggered is an index of its true magnitude. The ease with which effects appeared in this study was striking: in the final question the UK public went from 17% agreement to 75% agreement just by changing two words in the question whilst leaving its underlying principle unchanged.
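To get a sense of how decisive a swing from 17% to 75% agreement is in a sample of 2,157, a two-proportion z-test can be sketched. The even split between question framings below is an assumption for illustration (the paper says statements were assigned at random but does not report exact subgroup sizes):

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions
    (pooled standard error)."""
    p_pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed (hypothetical) roughly even random split of the 2157 respondents
z = two_prop_z(0.75, 1078, 0.17, 1079)
# z far beyond any conventional significance threshold (~1.96 at p < .05)
```

Even halving the subgroup sizes would leave the statistic enormous, consistent with the authors' point that the framing effect revealed itself easily rather than being coaxed out of the data.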

Another example of a culturally acceptable (or ingrained) bias for harm. Call it "car blindness" or "motonormativity."

Monday, February 20, 2023

Definition drives design: Disability models and mechanisms of bias in AI technologies

Newman-Griffis, D., et al. (2023).
First Monday, 28(1).


The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms of how bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision-making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use the AI technology is put to, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are further amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions to facilitate disability-led design and participatory development to produce more fair and equitable AI technologies in disability-related contexts.


The proliferation of artificial intelligence (AI) technologies as behind-the-scenes tools to support decision-making processes presents significant risks of harm for disabled people. The unspoken assumptions and unquestioned preconceptions that inform AI technology development can serve as mechanisms of bias, building the base problem formulation that guides a technology on reductive and harmful conceptualisations of disability. As we have shown, even when developing AI technologies to address the same overall goal, different definitions of disability can yield highly distinct analytic technologies that reflect contrasting, frequently incompatible decisions in the information to analyse, what analytic process to use, and what the end product of analysis will be. Here we have presented an initial framework to support critical examination of specific design elements in the formulation of AI technologies for data analytics, as a tool to examine the definitions of disability used in their design and the resulting impacts on the technology. We drew on three important historical models of disability that form common foundations for policy, practice, and personal experience today—the medical, social, and relational models—and two use cases in healthcare and government benefits to illustrate how different ways of conceiving of disability can yield technologies that contrast and conflict with one another, creating distinct risks for harm.

Sunday, February 19, 2023

Organs in exchange for freedom? Bill raises ethical concerns

Steve LeBlanc
Associated Press
Originally published 8 FEB 23

BOSTON (AP) — A proposal to let Massachusetts prisoners donate organs and bone marrow to shave time off their sentence is raising profound ethical and legal questions about putting undue pressure on inmates desperate for freedom.

The bill — which faces a steep climb in the Massachusetts Statehouse — may run afoul of federal law, which bars the sale of human organs or acquiring one for “valuable consideration.”

It also raises questions about whether and how prisons would be able to appropriately care for the health of inmates who go under the knife to give up organs. Critics are calling the idea coercive and dehumanizing even as one of the bill’s sponsors is framing the measure as a response to the over-incarceration of Hispanic and Black people and the need for matching donors in those communities.

“The bill reads like something from a dystopian novel,” said Kevin Ring, president of Families Against Mandatory Minimums, a Washington, D.C.-based criminal justice reform advocacy group. “Promoting organ donation is good. Reducing excessive prison terms is also good. Tying the two together is perverse.”


Offering reduced sentences in exchange for organs is not only unethical, but also violates federal law, according to George Annas, director of the Center for Health Law, Ethics & Human Rights at the Boston University School of Public Health. Reducing a prison sentence is the equivalent of a payment, he said.

“You can’t buy an organ. That should end the discussion,” Annas said. “It’s compensation for services. We don’t exploit prisoners enough?”

Democratic state Rep. Carlos Gonzalez, another co-sponsor of the bill, defended the proposal, calling it a voluntary program. He also said he’s open to establishing a policy that would allow inmates to donate organs and bone marrow without the lure of a reduced sentence. There is currently no law against prisoner organ donation in Massachusetts, he said.

“It’s not quid pro quo. We are open to setting policy without incentives,” Gonzalez said, adding that it is “crucial to respect prisoners’ human dignity and agency by respecting their choice to donate bone marrow or an organ.”

Saturday, February 18, 2023

More Physicians Are Experiencing Burnout and Depression

Christine Lehmann
Originally posted 1 FEB 23

More than half of physicians reported feeling burned out this year and nearly 1 in 4 doctors reported feeling depressed — the highest percentages in 5 years, according to the 'I Cry but No One Cares': Physician Burnout & Depression Report 2023.

"Burnout leaves you feeling like someone you're not," said Amaryllis Sánchez, MD, a board-certified family physician and certified physician coach.

"When someone is burned out, they experience extreme exhaustion in the workplace, depersonalization, and a sense that their best is no longer good enough. Over time, this may spill into the rest of their lives, affecting their relationships as well as their general health and well-being," said Sánchez.

When feelings of burnout continue without effective interventions, they can lead to depression, anxiety, and more, she said.

Burnout can persist for months to even years — nearly two thirds of doctors surveyed said their burnout lasted for at least 13 months, and another 30% said it lasted for more than 2 years.

The majority of doctors attributed their burnout to too many bureaucratic tasks, although more than one third said it was because their co-workers treated them with a lack of respect.

"This disrespect can take many forms, from demeaning comments toward physicians in training to the undermining of a physician's decade-long education and training to instances of rudeness or incivility in the exam room. Unfortunately, medical professionals can be the source of bad behavior and disrespect. They may be burned out too, and doing their best to work in a broken healthcare system during an extremely difficult time," said Sánchez.

Friday, February 17, 2023

Free Will Is Only an Illusion if You Are, Too

Alessandra Buccella and Tomáš Dominik
Scientific American
Originally posted January 16, 2023

Here is an excerpt:

In 2019 neuroscientists Uri Maoz, Liad Mudrik and their colleagues investigated that idea. They presented participants with a choice of two nonprofit organizations to which they could donate $1,000. People could indicate their preferred organization by pressing the left or right button. In some cases, participants knew that their choice mattered because the button would determine which organization would receive the full $1,000. In other cases, people knowingly made meaningless choices because they were told that both organizations would receive $500 regardless of their selection. The results were somewhat surprising. Meaningless choices were preceded by a readiness potential, just as in previous experiments. Meaningful choices were not, however. When we care about a decision and its outcome, our brain appears to behave differently than when a decision is arbitrary.

Even more interesting is the fact that ordinary people’s intuitions about free will and decision-making do not seem consistent with these findings. Some of our colleagues, including Maoz and neuroscientist Jake Gavenas, recently published the results of a large survey, with more than 600 respondents, in which they asked people to rate how “free” various choices made by others seemed. Their ratings suggested that people do not recognize that the brain may handle meaningful choices in a different way from more arbitrary or meaningless ones. People tend, in other words, to imagine all their choices—from which sock to put on first to where to spend a vacation—as equally “free,” even though neuroscience suggests otherwise.

What this tells us is that free will may exist, but it may not operate in the way we intuitively imagine. In the same vein, there is a second intuition that must be addressed to understand studies of volition. When experiments have found that brain activity, such as the readiness potential, precedes the conscious intention to act, some people have jumped to the conclusion that they are “not in charge.” They do not have free will, they reason, because they are somehow subject to their brain activity.

But that assumption misses a broader lesson from neuroscience. “We” are our brain. The combined research makes clear that human beings do have the power to make conscious choices. But that agency and accompanying sense of personal responsibility are not supernatural. They happen in the brain, regardless of whether scientists observe them as clearly as they do a readiness potential.

So there is no “ghost” inside the cerebral machine. But as researchers, we argue that this machinery is so complex, inscrutable and mysterious that popular concepts of “free will” or the “self” remain incredibly useful. They help us think through and imagine—albeit imperfectly—the workings of the mind and brain. As such, they can guide and inspire our investigations in profound ways—provided we continue to question and test these assumptions along the way.

Thursday, February 16, 2023

Telehealth Providers Prepare for the Future

Phoebe Kolbert & Charlotte Engrav
Originally posted 9 FEB 23

Here is an excerpt:

Telehealth Abortion Care

The Guttmacher Institute reports that, in 2017, medication abortions accounted for 39 percent of all abortions performed. By 2020, medication abortion usage accounted for 53 percent.

Coplon attributes the rise in telehealth medication abortions to COVID, but the continued use of it, she says, “is due to people’s understanding and acceptance, and also providers being more comfortable with providing pills without having the testing that we prior thought we needed.” 

She would know. Since 2016, Coplon has been part of a coalition of researchers, lawyers and other clinicians looking at telehealth medication abortion and ways to increase access to telehealth services. She now serves as the director of clinical operations at Abortion on Demand. 

In 2018, state policies enacted to support reproductive health were almost triple the number restricting reproductive healthcare. It was the first year in at least two decades where protections outpaced restrictions. 

Restrictions were eased even more when the COVID-19 pandemic made social distancing necessary, and lawmakers loosened restrictions, allowing more healthcare to be practiced online via telehealth. However, the landscape completely changed again in June of this year when the Supreme Court overturned the longstanding precedent of Roe in their Dobbs decision. Now, 18 states have abortion bans, 14 of which are total or near total. Eight other states have abortion bans on the books that are currently blocked, and there has been a push from anti-abortion groups to rescind access to telehealth medication abortions altogether. 

Telemedicine abortion has many benefits beyond preventing the spread of COVID-19—which may be why anti-abortion groups have been so quick to target it. Telehealth can make abortions more accessible for those who want and need them, and they tend to be cheaper and easier to schedule quickly. Even before Roe’s fall, patients would sometimes have to travel out of state or drive hours to the only abortion clinic in their state. Now, people living in states with bans must travel an average of 276 miles each way. States without bans have seen a swell of out-of-state patients seeking legal abortions. Bloomberg News estimated Illinois could face an 8,000 percent increase in abortion seekers. Planned Parenthood of Illinois estimated an increase of 20,000-30,000 out-of-state patients. Some clinics are struggling to keep up. For these clinics and patients, Coplon notes, telehealth can make a huge difference in the post-Roe era.

Not only can telehealth provide appointments within just a day or two of scheduling, as opposed to the potentially weeks-long waits at clinics in some overburdened states, it can also help reduce the overall burden on those in-person clinics—freeing up space for their own clients. 

Wednesday, February 15, 2023

Moralized language predicts hate speech on social media

Kirill Solovev and Nicolas Pröllochs
PNAS Nexus, Volume 2, Issue 1, 
January 2023


Hate speech on social media threatens the mental health of its victims and poses severe safety risks to modern societies. Yet, the mechanisms underlying its proliferation, though critical, have remained largely unresolved. In this work, we hypothesize that moralized language predicts the proliferation of hate speech on social media. To test this hypothesis, we collected three datasets consisting of N = 691,234 social media posts and ∼35.5 million corresponding replies from Twitter that have been authored by societal leaders across three domains (politics, news media, and activism). Subsequently, we used textual analysis and machine learning to analyze whether moralized language carried in source tweets is linked to differences in the prevalence of hate speech in the corresponding replies. Across all three datasets, we consistently observed that higher frequencies of moral and moral-emotional words predict a higher likelihood of receiving hate speech. On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%. Furthermore, moralized language was a robust out-of-sample predictor of hate speech. These results shed new light on the antecedents of hate speech and may help to inform measures to curb its spread on social media.

Significance Statement

This study provides large-scale observational evidence that moralized language fosters the proliferation of hate speech on social media. Specifically, we analyzed three datasets from Twitter covering three domains (politics, news media, and activism) and found that the presence of moralized language in source posts was a robust and meaningful predictor of hate speech in the corresponding replies. These findings offer new insights into the mechanisms underlying the proliferation of hate speech on social media and may help to inform educational applications, counterspeech strategies, and automated methods for hate speech detection.


This study provides observational evidence that moralized language in social media posts is associated with more hate speech in the corresponding replies. We uncovered this link for posts from a diverse set of societal leaders across three domains (politics, news media, and activism). On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%. Across the three domains, the effect sizes were most pronounced for activists. A possible reason is that the activists in our data were affiliated with politically left-leaning subjects (climate, animal rights, and LGBTQIA+) that may have been particularly likely to trigger hate speech from right-wing groups. In contrast, our data for politicians and newspeople were fairly balanced and encompassed users from both sides of the political spectrum. Overall, the comparatively large effect sizes underscore the salient role of moralized language on social media. While earlier research has demonstrated that moralized language is associated with greater virality, our work implies that it fosters the proliferation of hate speech.
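As a rough illustration of what per-word odds ratios like these imply (this is an illustrative calculation, not the authors' analysis), a "10.76% higher odds per moral word" corresponds to an odds ratio of about 1.1076 per word, and such ratios compound multiplicatively:

```python
# Illustrative only: how per-word odds ratios from a logistic model compound.
# The 10.76%-16.48% figures correspond to odds ratios of roughly
# 1.1076-1.1648 per additional moral word.

def compounded_odds_increase(per_word_odds_ratio: float, n_words: int) -> float:
    """Return the total percent increase in odds after n additional words."""
    return (per_word_odds_ratio ** n_words - 1) * 100

# At the lower-bound ratio of ~1.1076, five extra moral words multiply
# the odds by 1.1076**5, i.e. roughly a 67% increase in the odds of
# receiving hate speech (odds, not probability).
increase = compounded_odds_increase(1.1076, 5)
```

Note that these are changes in odds, which only approximate changes in probability when the baseline rate of hate speech is low.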

Tuesday, February 14, 2023

Helping the ingroup versus harming the outgroup: Evidence from morality-based groups

Grigoryan, L, Seo, S, Simunovic, D, & Hoffman, W.
Journal of Experimental Social Psychology
Volume 105, March 2023, 104436


The discrepancy between ingroup favoritism and outgroup hostility is well established in social psychology. Under which conditions does “ingroup love” turn into “outgroup hate”? Studies with natural groups suggest that when group membership is based on (dis)similarity of moral beliefs, people are willing to not only help the ingroup, but also harm the outgroup. The key limitation of these studies is that the use of natural groups confounds the effects of shared morality with the history of intergroup relations. We tested the effect of morality-based group membership on intergroup behavior using artificial groups that help disentangle these effects. We used the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game, which differentiates between behavioral options of weak parochialism (helping the ingroup), strong parochialism (harming the outgroup), universal cooperation (helping both groups), and egoism (profiting individually). In three preregistered experiments, we find that morality-based groups exhibit less egoism and more universal cooperation than non-morality-based groups. We also find some evidence of stronger ingroup favoritism in morality-based groups, but no evidence of stronger outgroup hostility. Stronger ingroup favoritism in morality-based groups is driven by expectations from the ingroup, but not the outgroup. These findings contradict earlier evidence from natural groups and suggest that (dis)similarity of moral beliefs is not sufficient to cross the boundary between “ingroup love” and “outgroup hate”.

General discussion

When does “ingroup love” turn into “outgroup hate”? Previous studies conducted on natural groups suggest that centrality of morality to the group’s identity is one such condition: morality-based groups showed more hostility towards outgroups than non-morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). We set out to test this hypothesis in a minimal group setting, using the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game. Across three pre-registered studies, we found no evidence that morality-based groups show more hostility towards outgroups than non-morality-based groups. Instead, morality-based groups exhibited less egoism and more universal cooperation (helping both the ingroup and the outgroup) than non-morality-based groups. This finding is consistent with earlier research showing that salience of morality makes people more cooperative (Capraro et al., 2019). Importantly, our morality manipulation was not specific to any pro-cooperation moral norm. Simply asking participants to think about the criteria they use to judge what is right and what is wrong was enough to increase universal cooperation.

Our findings are inconsistent with research showing stronger outgroup hostility in morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). The key difference between the set of studies presented here and the earlier studies that find outgroup hostility in morality-based groups is the use of natural groups in the latter. What potential confounding variables might account for the emergence of outgroup hostility in natural groups?

Monday, February 13, 2023

Belief in Persistent Moral Decline

West, B., & Pizarro, D. A. (2022, June 27).


Across four studies (3 experimental, total n = 199; 1 archival, n = 186,000) we provide evidence that people hold the belief that the world is growing morally worse, and that this belief is consistent across generational, political, and religious lines. When asked directly about which aspects of society are getting better and which are getting worse, people are more likely to list the moral (compared to non-moral) aspects as getting worse (Studies 1-2). When provided with a list of items that are either moral or non-moral, people are more likely to report that moral (compared to non-moral) items are worsening (Study 3). Finally, when asked the question “What is the most important problem facing America today?” participants in a nationally representative survey (Heffington et al., 2019), disproportionately listed problems that fall within the moral domain (Study 4).

General Discussion

We found consistent and strong evidence that people think of social decline in more moral terms than they do social improvement (see Figure 1). Participants in our studies consistently listed more morally relevant items (Studies 1-2) when asked what they thought has gotten worse in society compared to what has gotten better. Participants also categorized items pre-coded for moral relevance as declining more frequently than improving (Study 3). Study 4 provided further evidence for our hypothesis that those things people think are problems in society tend to be morally relevant. The majority of the “most important problem[s]” facing America from 1939-2015 were issues relevant to moral values.

These findings provide evidence that in general, people tend to believe that our moral values are getting worse over time. We propose that this moral pessimism may serve a functional purpose. Moral values help bind us together and facilitate social cohesion (Graham et al., 2009), cooperation, and the strengthening of ingroup bonds (Curry, 2016; Curry et al., 2019). Concern about declining morality (believing that morally relevant things have gotten worse in society over time) could be viewed as concern for maintaining those values that help keep society intact and functioning healthily. To “rest on our laurels” when it comes to being vigilant for moral decline may be unappealing, and people who try to claim that we are doing great, morally speaking, may be viewed as suspect, or not caring as much about our moral values.

Sunday, February 12, 2023

The scientific study of consciousness cannot, and should not, be morally neutral

Mazor, M., Brown, S., et al. (2021, November 12). 
Perspectives on psychological science.
Advance online publication.


A target question for the scientific study of consciousness is how dimensions of consciousness, such as the ability to feel pain and pleasure or reflect on one’s own experience, vary in different states and animal species. Considering the tight link between consciousness and moral status, answers to these questions have implications for law and ethics. Here we point out that given this link, the scientific community studying consciousness may face implicit pressure to carry out certain research programmes or interpret results in ways that justify current norms rather than challenge them. We show that since consciousness largely determines moral status, the use of non-human animals in the scientific study of consciousness introduces a direct conflict between scientific relevance and ethics – the more scientifically valuable an animal model is for studying consciousness, the more difficult it becomes to ethically justify compromises to its well-being for consciousness research. Lastly, in light of these considerations, we call for a discussion of the immediate ethical corollaries of the body of knowledge that has accumulated, and for a more explicit consideration of the role of ideology and ethics in the scientific study of consciousness.

Here is how the article ends:

Finally, we believe consciousness researchers, including those working only with consenting humans, should take an active role in the ethical discussion about these issues, including the use of animal models for the study of consciousness. Studying consciousness, the field has the responsibility of leading the way on these ethical questions and of making strong statements when such statements are justified by empirical findings. Recent examples include discussions of ethical ramifications of neuronal signs of fetal consciousness (Lagercrantz, 2014) and a consolidation of evidence for consciousness in vertebrate animals, with a focus on livestock species, ordered by the European Food and Safety Authority (Le Neindre et al., 2017). In these cases, the science of consciousness provided empirical evidence to weigh on whether a fetus or a livestock animal is conscious. The question of animal models of consciousness is simpler because the presence of consciousness is a prerequisite for the model to be valid. Here, researchers can skip the difficult question of whether the entity is indeed conscious and directly ask, “Do we believe that consciousness, or some specific form or dimension of consciousness, entails moral status?”

It is useful to remind ourselves that ethical beliefs and practices are dynamic: things that were considered acceptable in the past are no longer acceptable today. A relatively recent change concerns the status of nonhuman great apes (gorillas, bonobos, chimpanzees, and orangutans), such that research on great apes is banned in some countries today, including all European Union member states and New Zealand. In these countries, drilling a hole in chimpanzees’ heads, keeping them in isolation, or restricting their access to drinking water are forbidden by law. Which differences between animals make some practices acceptable with respect to some animals and not others is a fundamental question of the utmost importance. If consciousness is a determinant of moral status, consciousness researchers have a responsibility to take an active part in this discussion, by providing scientific observations that either justify current ethical standards or induce the scientific and legal communities to revise these standards.

Saturday, February 11, 2023

Countertransference awareness and treatment outcome

Abargil, M., & Tishby, O. (2022). 
Journal of counseling psychology,
69(5), 667–677.


Countertransference (CT) is considered a central component in the therapy process. Research has shown that CT management does not reduce the number of CT manifestations in therapy, but it leads to better therapy outcomes. In this study, we examined therapists' awareness of their CT using a structured interview. Our hypotheses were (a) treatments in which therapists were more aware of their CT would have a better outcome and (b) different definitions of CT would be related to different therapy outcomes. Twenty-nine patients were treated by 19 therapists in 16 sessions of short-term psychodynamic therapy. We used the core conflictual relationship theme to measure CT; a special interview was developed to study CT awareness. Results show that awareness of CT defined as the relationship with the patient moderated 10 outcome measures, and awareness of CT defined as the relationship with the patient that repeats therapist conflicts with significant others moderated three outcome measures. We present examples from dyads in this study and discuss how awareness can help the therapist talk to and handle patient challenges.

From the Discussion section

Increased therapist awareness of CT facilitates improvement in patient symptoms, emotion regulation, and affiliation in relationships. Since awareness is an integral part of CT management, these findings are consistent with Hayes’ results from 2018 regarding the importance of CT management and its contribution to treatment outcome. Moreover, therapists' self-awareness was found to be important in treating minorities (Baker, 1999). This study expands the ecological validity of therapist awareness and shows that therapists’ awareness of their own wishes in therapy, as well as their perception of themselves and the patient, is relevant to the general population as well. Thus, therapists of all theoretical orientations are encouraged to attend to their personal conflicts and to monitor their reactions to patients as a routine part of effective clinical practice. Moreover, therapist awareness has been found in the past to lead to less therapist self-confidence, but to better treatment outcomes (Williams, 2008). Our clinical examples illustrate these findings (the therapist who had high awareness showed much more self-doubt), and the results of the multilevel regression analysis demonstrate better improvement for patients whose therapists were highly aware. Interestingly, the IIP control dimension was not found to be related to the therapist’s awareness of CT. It may be that since this dimension relates to the patient’s need for control, the awareness of transference is more important. Another possibility is that the patient’s experience of the therapist as “knowing” may actually increase their control needs. Moreover, regarding the patient's main TC, we only found a trend and not a significant interaction. One reason may be the sample size. Another explanation is that patients do not necessarily link the changes in their lives to the relationship with the therapist and the insights associated with it. Thus, although awareness of CT helps to improve other outcome measures, it is not related to the way patients feel about the reason they sought out treatment.

A recent study of CT found that negative types of CT were correlated with more ruptures and less repair in the alliance. For positive CT the picture is more complex: positive patterns predicted resolution when the therapists repeated positive patterns with parents, but predicted ruptures when they tried to “repair” negative patterns with the parents (Tishby & Wiseman, 2020). The authors suggest that awareness of CT will help the therapist pay more attention to ruptures during treatment so they can address them and initiate resolution processes. Our findings support the authors’ suggestion. The clinical example demonstrates that when the therapist was aware of negative CT and was able to talk about it in the awareness interview, he was also able to address the difficult feelings that arose during a session with the patient. Moreover, the treatment outcomes in these treatments were better, which characterizes treatments with proper repair processes.

Friday, February 10, 2023

Individual differences in (dis)honesty are represented in the brain's functional connectivity at rest

Speer, S. P., Smidts, A., & Boksem, M. A. (2022).
NeuroImage, 246, 118761.


Measurement of the determinants of socially undesirable behaviors, such as dishonesty, is complicated and obscured by social desirability biases. To circumvent these biases, we used connectome-based predictive modeling (CPM) on resting state functional connectivity patterns in combination with a novel task which inconspicuously measures voluntary cheating to gain access to the neurocognitive determinants of (dis)honesty. Specifically, we investigated whether task-independent neural patterns within the brain at rest could be used to predict a propensity for (dis)honest behavior. Our analyses revealed that functional connectivity, especially between brain networks linked to self-referential thinking (vmPFC, temporal poles, and PCC) and reward processing (caudate nucleus), reliably correlates, in an independent sample, with participants’ propensity to cheat. Participants who cheated the most also scored highest on several self-report measures of impulsivity, which underscores the generalizability of our results. Notably, when comparing neural and self-report measures, the neural measures were found to be more important in predicting cheating propensity.

Significance statement

Dishonesty pervades all aspects of life and causes enormous economic losses. However, because the underlying mechanisms of socially undesirable behaviors are difficult to measure, the neurocognitive determinants of individual differences in dishonesty largely remain unknown. Here, we apply machine-learning methods to stable patterns of neural connectivity to investigate how dispositions toward (dis)honesty, measured by an innovative behavioral task, are encoded in the brain. We found that stronger connectivity between brain regions associated with self-referential thinking and reward are predictive of the propensity to be honest. The high predictive accuracy of our machine-learning models, combined with the reliable nature of resting-state functional connectivity, which is uncontaminated by the social-desirability biases to which self-report measures are susceptible, provides an excellent avenue for the development of useful neuroimaging-based biomarkers of socially undesirable behaviors.


Employing connectome-based predictive modeling (CPM) in combination with the innovative Spot-The-Differences task, which allows for inconspicuously measuring cheating, we identified a functional connectome that reliably predicts a disposition toward (dis)honesty in an independent sample. We observed a Pearson correlation between out-of-sample predicted and actual cheatcount (r = 0.40) that resides on the higher side of the typical range of correlations (between r = 0.2 and r = 0.5) reported in previous studies employing CPM (Shen et al., 2017). Thus, functional connectivity within the brain at rest predicts whether someone is more honest or more inclined to cheat in our task.

In light of previous research on moral decisions, the regions we identified in our resting state analysis can be associated with two networks frequently found to be involved in moral decision making. First, the vmPFC, the bilateral temporal poles, and the PCC have consistently been associated with self-referential thinking. For example, it has been found that functional connectivity between these areas during rest is associated with higher-level metacognitive operations such as self-reflection, introspection and self-awareness (Gusnard et al., 2001; Meffert et al., 2013; Northoff et al., 2006; Vanhaudenhuyse et al., 2011). Secondly, the caudate nucleus, which has been found to be involved in anticipation and valuation of rewards (Ballard and Knutson, 2009; Knutson et al., 2001), can be considered an important node in the reward network (Bartra et al., 2013). Participants with higher levels of activation in the reward network, in anticipation of rewards, have previously been found to indeed be more dishonest (Abe and Greene, 2014).
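For readers unfamiliar with connectome-based predictive modeling, the general procedure described by Shen et al. (2017) can be sketched as follows. This is a minimal illustrative implementation of the generic CPM recipe, not the authors' exact pipeline; the data shapes, correlation threshold, and variable names are assumptions for the sketch:

```python
import numpy as np

# Minimal sketch of connectome-based predictive modeling (CPM), in the
# spirit of Shen et al. (2017). Illustrative only: real pipelines use
# cross-validation, p-value thresholds, and preprocessed connectomes.

def cpm_predict(conn_train, y_train, conn_test, r_thresh=0.2):
    """Predict a behavioral score (e.g., cheating propensity) from
    functional connectivity.

    conn_train: (n_subjects, n_edges) training connectivity values
    y_train:    (n_subjects,) behavioral scores
    conn_test:  (m_subjects, n_edges) held-out connectivity values
    """
    # 1. Correlate every connectivity edge with behavior across subjects.
    xc = conn_train - conn_train.mean(axis=0)
    yc = y_train - y_train.mean()
    denom = np.sqrt((xc ** 2).sum(axis=0) * (yc ** 2).sum())
    r = (xc * yc[:, None]).sum(axis=0) / np.where(denom == 0, 1, denom)

    # 2. Feature selection: keep edges whose correlation passes the threshold.
    pos, neg = r > r_thresh, r < -r_thresh

    # 3. Summarize each subject by summed strength in the selected edges.
    def summary(conn):
        return conn[:, pos].sum(axis=1) - conn[:, neg].sum(axis=1)

    # 4. Fit a single-predictor linear model on the summary score.
    slope, intercept = np.polyfit(summary(conn_train), y_train, 1)

    # 5. Apply the fitted model to the independent (out-of-sample) subjects.
    return slope * summary(conn_test) + intercept
```

The out-of-sample Pearson correlation between such predictions and the actual behavioral scores is the statistic the authors report (r = 0.40).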

Thursday, February 9, 2023

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Sætra, H.S., Danaher, J. 
Philos. Technol. 35, 93 (2022).


Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

From the Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics, and argued that there is much overlap, and much confusion relating to the demarcation of different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to be data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not all issues related to the moral status of machines or social AI in general.

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.

Wednesday, February 8, 2023

AI in the hands of imperfect users

Kostick-Quenet, K.M., Gerke, S. 
npj Digit. Med. 5, 197 (2022). 


As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has fallen to addressing potential bias among AI/ML’s human users or factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision making using AI/ML.


Impacts of uncertainty and urgency on decision quality

Trust plays a particularly critical role when decisions are made in contexts of uncertainty. Uncertainty, of course, is a central feature of most clinical decision making, particularly for conditions (e.g., COVID-19) or treatments (e.g., deep brain stimulation or gene therapies) that lack a long history of observed outcomes. As Wang and Busemeyer (2021) describe, “uncertain” choice situations can be distinguished from “risky” ones in that risky decisions have a range of outcomes with known odds or probabilities. If you flip a coin, you know you have a 50% chance of landing on heads. To bet on heads, however, still carries a high level of risk, specifically, a 50% chance of losing. Uncertain decision-making scenarios, on the other hand, have no well-known or agreed-upon outcome probabilities. This makes uncertain decision-making contexts risky as well, but the risks are not known well enough to permit rational decision making. In information-scarce contexts, critical decisions are by necessity made using imperfect reasoning or “gap-filling heuristics” that can lead to several predictable cognitive biases. Individuals might defer to an authority figure (messenger bias, authority bias); they may look to see what others are doing (“bandwagon” and social norm effects); or they may make affective forecasting errors, projecting current emotional states onto their future selves. The perceived or actual urgency of clinical decisions can add further biases, like ambiguity aversion (a preference for known versus unknown risks), deferral to the status quo or default, and loss aversion (weighing losses more heavily than gains of the same magnitude). These biases are intended to mitigate the risks of the unknown when fast decisions must be made, but they do not always get us closer to the “best” course of action we would arrive at if all possible information were available.
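The risk-versus-uncertainty distinction can be made concrete with a small sketch (the numbers and the simple weighting rule below are illustrative assumptions, not drawn from Wang and Busemeyer's model): under risk, known probabilities let a decision-maker compute an expected value directly; under uncertainty, only a range of plausible outcomes is available, and an ambiguity-averse agent discounts the unknown option by weighting the worst case more heavily.

```python
def expected_value(outcomes):
    """Risky choice: outcome probabilities are known, so an
    expected value can be computed directly.
    `outcomes` is a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

def ambiguity_averse_value(worst_ev, best_ev, aversion=0.7):
    """Uncertain choice: only a range of plausible expected values
    is known.  An ambiguity-averse agent weights the worst case
    more heavily; `aversion` closer to 1 models a stronger
    preference for known risks over unknown ones."""
    return aversion * worst_ev + (1 - aversion) * best_ev

# Risky bet: fair coin, win 10 on heads, lose 10 on tails.
coin_bet = [(0.5, 10), (0.5, -10)]
print(expected_value(coin_bet))         # 0.0 -- known odds, neutral in expectation

# Uncertain bet: the same payoffs, but the odds are unknown.
# The ambiguity-averse valuation comes out negative, so the
# agent prefers the known-odds coin flip over the unknown bet.
print(ambiguity_averse_value(-10, 10))
```

This mirrors the excerpt's point: the two bets have identical payoffs, yet the unknown-odds option is valued lower purely because its probabilities are unavailable, which is the pattern described as ambiguity aversion.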



We echo others’ calls that before AI tools are “released into the wild,” we must better understand their outcomes and impacts in the hands of imperfect human actors by testing at least some of them according to a risk-based approach in clinical trials that reflect their intended use settings. We advance this proposal by drawing attention to the need to empirically identify and test how specific user biases and decision contexts shape how AI tools are used in practice and influence patient outcomes. We propose that value sensitive design (VSD) can be used to strategize human-machine interfaces in ways that encourage critical reflection, mitigate bias, and reduce overreliance on AI systems in clinical decision making. We believe this approach can help to reduce some of the burdens on physicians to figure out on their own (with only basic training or knowledge about AI) the optimal role of AI tools in decision making by embedding a degree of bias mitigation directly into AI systems and interfaces.

Tuesday, February 7, 2023

UnitedHealthcare Tried to Deny Coverage to a Chronically Ill Patient. He Fought Back, Exposing the Insurer’s Inner Workings.

By D. Armstrong, R. Rucker, & M. Miller
Originally published 2 FEB 23

Here is an excerpt:

Insurers have wide discretion in crafting what is covered by their policies, beyond some basic services mandated by federal and state law. They often deny claims for services that they deem not “medically necessary.”

When United refused to pay for McNaughton's treatment for that reason, his family did something unusual. They fought back with a lawsuit, which uncovered a trove of materials, including internal emails and tape-recorded exchanges among company employees. Those records offer an extraordinary behind-the-scenes look at how one of America's leading health care insurers relentlessly fought to reduce spending on care, even as its profits rose to record levels.

As United reviewed McNaughton’s treatment, he and his family were often in the dark about what was happening or their rights. Meanwhile, United employees misrepresented critical findings and ignored warnings from doctors about the risks of altering McNaughton’s drug plan.

At one point, court records show, United inaccurately reported to Penn State and the family that McNaughton’s doctor had agreed to lower the doses of his medication. Another time, a doctor paid by United concluded that denying payments for McNaughton’s treatment could put his health at risk, but the company buried his report and did not consider its findings. The insurer did, however, consider a report submitted by a company doctor who rubber-stamped the recommendation of a United nurse to reject paying for the treatment.

United declined to answer specific questions about the case, even after McNaughton signed a release provided by the insurer to allow it to discuss details of his interactions with the company. United noted that it ultimately paid for all of McNaughton’s treatments. In a written response, United spokesperson Maria Gordon Shydlo wrote that the company’s guiding concern was McNaughton’s well-being.

“Mr. McNaughton’s treatment involves medication dosages that far exceed FDA guidelines,” the statement said. “In cases like this, we review treatment plans based on current clinical guidelines to help ensure patient safety.”

But the records reviewed by ProPublica show that United had another, equally urgent goal in dealing with McNaughton. In emails, officials calculated what McNaughton was costing them to keep his crippling disease at bay and how much they would save if they forced him to undergo a cheaper treatment that had already failed him. As the family pressed the company to back down, first through Penn State and then through a lawsuit, the United officials handling the case bristled.

Monday, February 6, 2023

How Far Is Too Far? Crossing Boundaries in Therapeutic Relationships

Gloria Umali
American Professional Agency
Risk Management Report
January 2023

While there appears to be a clear understanding of what constitutes a boundary violation, defining the boundary itself remains challenging, as the line can be ambiguous, often with no clear right or wrong answer. The APA Ethical Principles of Psychologists and Code of Conduct (2017) (“Ethics Code”) provides guidance on boundary and relationship questions to guide psychologists toward an ethical course of action. The Ethics Code states that relationships which give rise to the potential for exploitation or harm to the client, or those that impair objectivity in judgment, must be avoided.

Boundary crossing, if allowed to progress, may hurt both the therapist and the client. The good news is that a consensus exists among professionals in the mental health community that there are boundary crossings which are unquestionably considered helpful and therapeutic to clients. However, with no straightforward formula to delineate between helpful boundaries and harmful or unhealthy ones, the resulting ‘grey area’ creates challenges for most psychologists. Examining the general public’s perception and understanding of what an unhealthy boundary crossing looks like may provide additional insight into the right ethical course of action, including the impact of boundary crossings on relationships on a case-by-case basis.



Attaining and maintaining healthy boundaries is a goal that all psychologists should work toward while providing supportive therapy services to clients. Strong and consistent boundaries build trust and make therapy safe for both the client and the therapist. Building healthy boundaries not only promotes compliance with the Ethics Code, but also lets clients know you have their best interest in mind. In summation, while concern for a client’s wellbeing can cloud judgement, the use of both the risk considerations above and the APA Ethical Principles of Psychologists and Code of Conduct can assist in clarifying the boundary line and help provide a safe and therapeutic environment for all parties involved.

A good risk management reminder for psychologists.