Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, February 9, 2023

To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

Sætra, H.S., Danaher, J. 
Philos. Technol. 35, 93 (2022).
https://doi.org/10.1007/s13347-022-00591-7

Abstract

Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.

From the Conclusion

The ethics of technology is garnering attention for a reason. Just about everything in modern society is the result of, and often even infused with, some kind of technology. The ethical implications are plentiful, but how should the study of applied tech ethics be organised? We have reviewed a number of specific tech ethics, and argued that there is much overlap, and much confusion relating to the demarcation of different domain ethics. For example, many issues covered by AI ethics are arguably already covered by computer ethics, and many issues argued to be data ethics, particularly issues related to privacy and surveillance, have been studied by other tech ethicists and non-tech ethicists for a long time.

We have proposed two simple principles that should help guide more ethical research to the higher levels of tech ethics, while still allowing for the existence of lower-level domain specific ethics. If this is achieved, we avoid confusion and a lack of navigability in tech ethics, ethicists avoid reinventing the wheel, and we will be better able to make use of existing insight from higher-level ethics. At the same time, the work done in lower-level ethics will be both valid and highly important, because it will be focused on issues exclusive to that domain. For example, robot ethics will be about those questions that only arise when AI is embodied in a particular sense, and not all issues related to the moral status of machines or social AI in general.

While our argument might initially be taken as a call to arms against more than one fundamental applied ethics, we hope to have allayed such fears. There are valid arguments for the existence of different types of applied ethics, and we merely argue that an exaggerated proliferation of tech ethics is occurring, and that it has negative consequences. Furthermore, we must emphasise that there is nothing preventing anyone from making specific guidelines for, for example, AI professionals, based on insight from computer ethics. The domains of ethics and the needs of practitioners are not the same, and our argument is consequently that ethical research should be more concentrated than professional practice.

Wednesday, February 8, 2023

AI in the hands of imperfect users

Kostick-Quenet, K.M., Gerke, S. 
npj Digit. Med. 5, 197 (2022). 
https://doi.org/10.1038/s41746-022-00737-z

Abstract

As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has fallen to addressing potential bias among AI/ML’s human users or factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision making using AI/ML.

(cut)

Impacts of uncertainty and urgency on decision quality

Trust plays a particularly critical role when decisions are made in contexts of uncertainty. Uncertainty, of course, is a central feature of most clinical decision making, particularly for conditions (e.g., COVID-19) or treatments (e.g., deep brain stimulation or gene therapies) that lack a long history of observed outcomes. As Wang and Busemeyer (2021) describe, “uncertain” choice situations can be distinguished from “risky” ones in that risky decisions have a range of outcomes with known odds or probabilities. If we flip a coin, we know there is a 50% chance it lands on heads. To bet on heads therefore carries a high level of risk, specifically, a 50% chance of losing. Uncertain decision-making scenarios, on the other hand, have no well-known or agreed-upon outcome probabilities. This makes uncertain decision-making contexts risky as well, but the risks are not known well enough to permit fully rational decision making. In information-scarce contexts, critical decisions are by necessity made using imperfect reasoning or “gap-filling heuristics” that can lead to several predictable cognitive biases. Individuals might defer to an authority figure (messenger bias, authority bias); they may look to see what others are doing (“bandwagon” and social norm effects); or they may make affective forecasting errors, projecting current emotional states onto their future selves. The perceived or actual urgency of clinical decisions can add further biases, such as ambiguity aversion (a preference for known over unknown risks), deferral to the status quo or default, and loss aversion (weighing losses more heavily than gains of the same magnitude). These biases are intended to mitigate the risks of the unknown when fast decisions must be made, but they do not always bring us closer to the course of action we would judge “best” if all possible information were available.
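
To make the loss-aversion point concrete, here is a minimal illustrative sketch in Python (the loss-aversion coefficient of 2.25 is a commonly cited prospect-theory estimate, not a figure from this paper) showing why an objectively fair coin-flip bet can feel unattractive once losses are weighted more heavily than gains:

    # Sketch of loss aversion: losses weigh more than equal-sized gains.
    # LAMBDA = 2.25 is an illustrative, commonly cited estimate.
    LAMBDA = 2.25

    def subjective_value(outcome: float) -> float:
        """Weight losses LAMBDA times more heavily than gains."""
        return outcome if outcome >= 0 else LAMBDA * outcome

    # A fair coin flip: win $100 or lose $100, each with probability 0.5.
    gamble = [(0.5, 100.0), (0.5, -100.0)]

    expected_value = sum(p * x for p, x in gamble)                # 0.0
    felt_value = sum(p * subjective_value(x) for p, x in gamble)  # -62.5

    print(expected_value, felt_value)

On these illustrative numbers, a bet with an expected value of zero carries a felt value of -62.5, one way of seeing why urgent decisions under uncertainty drift toward the status quo rather than toward the objectively “best” option.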

(cut)

Conclusion

We echo others’ calls that before AI tools are “released into the wild,” we must better understand their outcomes and impacts in the hands of imperfect human actors by testing at least some of them according to a risk-based approach in clinical trials that reflect their intended use settings. We advance this proposal by drawing attention to the need to empirically identify and test how specific user biases and decision contexts shape how AI tools are used in practice and influence patient outcomes. We propose that VSD (value sensitive design) can be used to strategize human-machine interfaces in ways that encourage critical reflection, mitigate bias, and reduce overreliance on AI systems in clinical decision making. We believe this approach can help to reduce some of the burdens on physicians to figure out on their own (with only basic training or knowledge about AI) the optimal role of AI tools in decision making by embedding a degree of bias mitigation directly into AI systems and interfaces.

Tuesday, February 7, 2023

UnitedHealthcare Tried to Deny Coverage to a Chronically Ill Patient. He Fought Back, Exposing the Insurer’s Inner Workings.

By D. Armstrong, P. Rucker, & M. Miller
ProPublica.org
Originally published 2 FEB 23

Here is an excerpt:

Insurers have wide discretion in crafting what is covered by their policies, beyond some basic services mandated by federal and state law. They often deny claims for services that they deem not “medically necessary.”

When United refused to pay for McNaughton's treatment for that reason, his family did something unusual. They fought back with a lawsuit, which uncovered a trove of materials, including internal emails and tape-recorded exchanges among company employees. Those records offer an extraordinary behind-the-scenes look at how one of America's leading health care insurers relentlessly fought to reduce spending on care, even as its profits rose to record levels.

As United reviewed McNaughton’s treatment, he and his family were often in the dark about what was happening or their rights. Meanwhile, United employees misrepresented critical findings and ignored warnings from doctors about the risks of altering McNaughton’s drug plan.

At one point, court records show, United inaccurately reported to Penn State and the family that McNaughton’s doctor had agreed to lower the doses of his medication. Another time, a doctor paid by United concluded that denying payments for McNaughton’s treatment could put his health at risk, but the company buried his report and did not consider its findings. The insurer did, however, consider a report submitted by a company doctor who rubber-stamped the recommendation of a United nurse to reject paying for the treatment.

United declined to answer specific questions about the case, even after McNaughton signed a release provided by the insurer to allow it to discuss details of his interactions with the company. United noted that it ultimately paid for all of McNaughton’s treatments. In a written response, United spokesperson Maria Gordon Shydlo wrote that the company’s guiding concern was McNaughton’s well-being.

“Mr. McNaughton’s treatment involves medication dosages that far exceed FDA guidelines,” the statement said. “In cases like this, we review treatment plans based on current clinical guidelines to help ensure patient safety.”

But the records reviewed by ProPublica show that United had another, equally urgent goal in dealing with McNaughton. In emails, officials calculated what McNaughton was costing them to keep his crippling disease at bay and how much they would save if they forced him to undergo a cheaper treatment that had already failed him. As the family pressed the company to back down, first through Penn State and then through a lawsuit, the United officials handling the case bristled.

Monday, February 6, 2023

How Far Is Too Far? Crossing Boundaries in Therapeutic Relationships

Gloria Umali
American Professional Agency
Risk Management Report
January 2023

While there appears to be a clear understanding of what constitutes a boundary violation, defining the boundary itself remains challenging: the line can be ambiguous, often with no right or wrong answer. The APA Ethical Principles of Psychologists and Code of Conduct (2017) (“Ethics Code”) offers guidance on boundary and relationship questions, steering psychologists toward an ethical course of action. The Ethics Code states that relationships which give rise to the potential for exploitation or harm to the client, or that impair objectivity in judgment, must be avoided.

Boundary crossing, if allowed to progress, may hurt both the therapist and the client. The good news is that a consensus exists among mental health professionals that some boundary crossings are unquestionably helpful and therapeutic for clients. However, with no straightforward formula for distinguishing helpful boundaries from harmful or unhealthy ones, the resulting ‘grey area’ creates challenges for most psychologists. Examining the general public’s perception and understanding of what an unhealthy boundary crossing looks like may provide additional insight into the right ethical course of action, including the impact of boundary crossings on relationships on a case-by-case basis.

(cut)

Conclusion

Attaining and maintaining healthy boundaries is a goal that all psychologists should work toward while providing supportive therapy services to clients. Strong and consistent boundaries build trust and make therapy safe for both the client and the therapist. Building healthy boundaries not only promotes compliance with the Ethics Code but also lets clients know you have their best interests in mind. In sum, while concern for a client’s wellbeing can cloud judgment, using both the risk considerations above and the APA Ethical Principles of Psychologists and Code of Conduct can assist in clarifying the boundary line and help provide a safe and therapeutic environment for all parties involved.


A good risk management reminder for psychologists.

Sunday, February 5, 2023

I’m a psychology expert in Finland, the No. 1 happiest country in the world—here are 3 things we never do

Frank Martela
CNBC.com
Originally posted 5 Jan 23

For five years in a row, Finland has ranked No. 1 as the happiest country in the world, according to the World Happiness Report. 

In the 2022 report, people in 156 countries were asked to “value their lives today on a 0 to 10 scale, with the worst possible life as a 0.” The report also considers factors such as social support, life expectancy, generosity, and absence of corruption.

As a Finnish philosopher and psychology researcher who studies the fundamentals of happiness, I’m often asked: What exactly makes people in Finland so exceptionally satisfied with their lives?

To maintain a high quality of life, here are three things we never do:

1. We don’t compare ourselves to our neighbors.

Focus more on what makes you happy and less on looking successful. The first step to true happiness is to set your own standards, instead of comparing yourself to others.

2. We don’t overlook the benefits of nature.

Spending time in nature increases our vitality and well-being and gives us a sense of personal growth. Find ways to add some greenery to your life, even if it’s just buying a few plants for your home.

3. We don’t break the community circle of trust.

Think about how you can show up for your community. How can you create more trust? How can you support policies that build upon that trust? Small acts like opening doors for strangers or giving up a seat on the train make a difference, too.

Saturday, February 4, 2023

What makes Voldemort tick? Children's and adults' reasoning about the nature of villains

V.A. Umscheid, C.E. Smith, et al.
Cognition
Volume 233, April 2023, 105357

Abstract

How do children make sense of antisocial acts committed by evil-doers? We addressed this question in three studies with 434 children (4–12 years) and 277 adults, focused on participants’ judgments of both familiar and novel fictional villains and heroes. Study 1 established that children viewed villains’ actions and emotions as overwhelmingly negative, suggesting that children’s well-documented positivity bias does not prevent their appreciation of extreme forms of villainy. Studies 2 and 3 assessed children’s and adults’ beliefs regarding heroes’ and villains’ moral character and true selves, using an array of converging evidence, including how a character felt inside, whether a character’s actions reflected their true self, whether a character’s true self could change over time, and how an omniscient machine would judge a character’s true self. Across these measures, both children and adults consistently evaluated villains’ true selves to be more negative than heroes’. Importantly, at the same time, we also detected an asymmetry in these judgments, wherein villains were more likely than heroes to have a true self that differed from their outward behavior. More specifically, across the ages studied, participants more often reported that villains were inwardly good than that heroes were inwardly bad. Implications, limitations, and directions for future research are discussed in light of our expanding understanding of the development of true-self beliefs.

General discussion

What do young children understand about the nature of antisocial individuals, and how does this understanding change with development? We examined this question in three studies, asking children aged 4–10 to predict how villains—and, in comparison, heroes—behave when given a chance to engage in a range of anti- and prosocial behaviors (in Study 1) and to think about characters’ deeper underlying villainy in terms of their moral character and true selves (in Studies 2 and 3). The present research is distinctive in its focus on what children understand about truly wicked familiar individuals, notably well-known villains in children’s films, and in asking not only about their behaviors but also about their inner emotional responses and underlying goodness/badness. Moreover, we examined the limits of villains’ antisociality, via the scenarios involving the pets and ‘kindred spirits’ of villains in Study 1, and via scenarios involving omniscient true-self machines and magic pills in Study 3. The research also provides new, strong, and consistent evidence by examining a broad range of theoretically grounded evil behaviors and beliefs, and by charting these beliefs across early to middle childhood.

Taken together, findings from all three studies show that children ages 4–10 firmly understand that villainous individuals are prone to callous and antisocial behavior, have deeply mean personalities, and are less likely than heroes to engage in prosocial behavior. At the same time, although they grasp the essential villainy of villains, children tend to be somewhat more positive about villains than adults are. Three additional findings are worth emphasizing. First, children demonstrated a nuanced view of villains; many who consistently predicted cruel behavior in villains also expected that villains would treat those in their inner circle (pets and fellow villains) with less cruelty. Second, even young children went beyond noting behavioral tendencies, indicating that villains were deeply mean in their underlying true selves and emotional responses, not just in their behaviors. And third, there was an asymmetry in participants’ (both children’s and adults’) judgments regarding individuals’ true selves, wherein villains were more often viewed as having a good true self than heroes were judged as having a bad true self. For children, villains’ true selves were less mean than might be expected from their mean behaviors and villainous identities, but rarely shaded into niceness itself. This was often true for adults as well, but, consistent with the literature on adults’ true-self beliefs (De Freitas et al., 2017; Newman et al., 2014; Strohminger et al., 2017), adults often indicated a belief that even villains might be deep-down nice in certain circumstances.

Friday, February 3, 2023

Contraceptive Coverage Expanded: No More ‘Moral’ Exemptions for Employers

Ari Blaff
Yahoo News
Originally posted 30 JAN 23

Here is an excerpt:

The proposed new rule released today by the Departments of Health and Human Services (HHS), Labor, and Treasury would remove the ability of employers to opt out for “moral” reasons, but it would retain the existing protections on “religious” grounds.

For employees covered by insurers with religious exemptions, the new policy will create an “independent pathway” that permits them to access contraceptives through a third-party provider free of charge.

“We had to really think through how to do this in the right way to satisfy both sides, but we think we found that way,” a senior HHS official told CNN.

Planned Parenthood applauded the announcement. “Employers and universities should not be able to dictate personal health-care decisions and impose their views on their employees or students,” the organization’s chief, Alexis McGill Johnson, told CNN. “The ACA mandates that health insurance plans cover all forms of birth control without out-of-pocket costs. Now, more than ever, we must protect this fundamental freedom.”

In 2018, the Trump administration sought to carve out an exception to the ACA’s contraceptive mandate based on “sincerely held religious beliefs.” The move prompted a Pennsylvania district court judge to issue a nationwide injunction in 2019, blocking the implementation of the change. However, in 2020, in Little Sisters of the Poor v. Pennsylvania, the Supreme Court upheld the legality of the original Trump policy in a 7–2 ruling.

The Supreme Court’s overturning of Roe v. Wade in June 2022, in its Dobbs ruling, played a role in HHS’s decision to release the new proposal. Guaranteeing access to contraception at no cost to the individual “is a national public health imperative,” HHS said in the proposal. And the Dobbs ruling “has placed a heightened importance on access to contraceptive services nationwide.”

Thursday, February 2, 2023

Yale Changes Mental Health Policies for Students in Crisis

William Wan
The Washington Post
Originally posted 18 JAN 23

Here are some excerpts:

In interviews with The Post, several students — who relied on Yale’s health insurance — described losing access to therapy and health care at the moment they needed it most.

The policy changes announced Wednesday reversed many of those practices.

Because students in mental health crisis can now take a leave of absence rather than withdraw, they will continue to have access to health insurance through Yale, university officials said. They can continue to work as student employees, meet with career advisers, have access to campus, and use library resources.

Finding a way to allow students to retain health insurance required overcoming significant logistical and financial hurdles, Lewis said, since New Haven and Connecticut are where most health providers in Yale’s system are located. But under the new policies, students on leave can switch to “affiliate coverage,” which would cover out-of-network care in other states.

In recent weeks, students and mental health advocates questioned why Yale would not allow students struggling with mental health issues to take fewer classes. The new policies will now allow students to drop their course load to as low as two classes under special circumstances. But students can do so only if they require significant time for treatment and if their petition is approved.

In the past, withdrawn students had to submit an application for reinstatement, which included letters of recommendation, and proof they had remained “constructively occupied” during their time away. Under new policies, students returning from a medical leave of absence will submit a “simplified reinstatement request” that includes a letter from their clinician and a personal statement explaining why they left, the treatment they received and why they feel ready to return.

(cut)

In their updated online policies, the university made clear it still retained the right to impose an involuntary medical leave on students in cases of “a significant risk to the student’s health or safety, or to the health or safety of others.”

The changes were announced one day before Yale officials were scheduled to meet for settlement talks with the group of current and former students who filed a proposed class-action lawsuit against the university, demanding policy changes. 

(cut)

In a statement, one of the plaintiffs — a nonprofit group called Elis for Rachael, led by former Yale students — said they are still pushing for more to be done: “We remain in negotiations. We thank Yale for this first step. But if Yale were to receive a grade for its work on mental health, it would be an incomplete at best.”

But after decades of mental health advocacy with little change at the university, some students said they were surprised at the changes Yale has made already.

“I really didn’t think it would happen during my time here,” said Akweley Mazarae Lartey, a senior at Yale who has advocated for mental health rights throughout his time at the school.

“I started thinking of all the situations that I and people I care for have ended up in and how much we could have used these policies sooner.”

Wednesday, February 1, 2023

Ethics Consult: Keep Patient Alive Due to Spiritual Beliefs?

Jacob M. Appel
MedPageToday
Originally posted 28 Jan 23

Welcome to Ethics Consult -- an opportunity to discuss, debate (respectfully), and learn together. We select an ethical dilemma from a true, but anonymized, patient care case. You vote on your decision in the case and, next week, we'll reveal how you all made the call. Bioethicist Jacob M. Appel, MD, JD, will also weigh in with an ethical framework to help you learn and prepare.

The following case is adapted from Appel's 2019 book, Who Says You're Dead? Medical & Ethical Dilemmas for the Curious & Concerned.

Alexander is a 49-year-old man who comes to a prominent teaching hospital for a heart transplant. While awaiting the transplant, he is placed on a machine called a BIVAD, or biventricular assist device -- basically, an artificial heart the size of a small refrigerator to tide him over until a donor heart becomes available. While awaiting a heart, he suffers a severe stroke.

The doctors tell his wife, Katie, that no patient who has suffered such a severe stroke has ever regained consciousness and that Alexander is no longer a candidate for transplant. They would like to turn off the BIVAD and allow nature to take its course.

Not lost on these doctors is that Alexander occupies a desperately needed ICU bed, which could benefit other patients, and that his care costs the healthcare system upwards of $10,000 a day. The doctors are also aware that Alexander could survive for years on the BIVAD and the other machines that are now helping to keep him alive: a ventilator and a dialysis machine.

Katie refuses to yield to the request. "I realize he has no chance of recovery," she says. "But Alexander believed deeply in reincarnation. What mattered most to him was that he die at the right moment -- so that his soul could return to Earth in the body for which it was destined. To him, that would have meant keeping him on the machines until all brain function ceases, even if it means decades. I feel obligated to honor those wishes."