Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Autonomy.

Thursday, March 7, 2024

Canada Postpones Plan to Allow Euthanasia for Mentally Ill

Craig McCulloh
Voice of America News
Originally posted 8 Feb 24

The Canadian government is delaying access to medically assisted death for people with mental illness.

Those suffering from mental illness were supposed to be able to access Medical Assistance in Dying — also known as MAID — starting March 17. The recent announcement by the government of Canadian Prime Minister Justin Trudeau was the second delay after original legislation authorizing the practice passed in 2021.

The delay came in response to a recommendation by a majority of the members of a committee made up of senators and members of Parliament.

One of the most high-profile proponents of MAID is British Columbia-based lawyer Chris Considine. In the mid-1990s, he represented Sue Rodriguez, who was dying from amyotrophic lateral sclerosis, commonly known as ALS.

Their bid for approval of a medically assisted death was rejected at the time by the Supreme Court of Canada. But a law passed in 2016 legalized euthanasia for individuals with terminal conditions. From then until 2022, more than 45,000 people chose medically assisted deaths.


Summary:

Canada originally planned to expand its Medical Assistance in Dying (MAiD) program to include individuals with mental illnesses in March 2024.
  • This plan has been postponed until 2027 due to concerns about the healthcare system's readiness and potential ethical issues.
  • The original legislation passed in 2021, but concerns about safeguards and mental health support led to delays.
  • This issue is complex and ethically charged, with advocates arguing for individual autonomy and opponents raising concerns about coercion and vulnerability.
I would be concerned about the following issues:
  • Vulnerability: Mental illness can impair judgment, raising concerns about informed consent and potential coercion.
  • Safeguards: Concerns exist about insufficient safeguards to prevent abuse or exploitation.
  • Mental health access: Limited access to adequate mental health treatment could contribute to undue pressure towards MAiD.
  • Social inequalities: Concerns exist about disproportionate access to MAiD based on socioeconomic background.

Monday, March 4, 2024

How to Deal with Counter-Examples to Common Morality Theory: A Surprising Result

Herissone-Kelly P.
Cambridge Quarterly of Healthcare Ethics.
2022;31(2):185-191.
doi:10.1017/S096318012100058X

Abstract

Tom Beauchamp and James Childress are confident that their four principles—respect for autonomy, beneficence, non-maleficence, and justice—are globally applicable to the sorts of issues that arise in biomedical ethics, in part because those principles form part of the common morality (a set of general norms to which all morally committed persons subscribe). Inevitably, however, the question arises of how the principlist ought to respond when presented with apparent counter-examples to this thesis. I examine a number of strategies the principlist might adopt in order to retain common morality theory in the face of supposed counter-examples. I conclude that only a strategy that takes a non-realist view of the common morality’s principles is viable. Unfortunately, such a view is likely not to appeal to the principlist.


Herissone-Kelly examines several strategies the principlist could employ to address counter-examples:

  • Refine the principles: Clarify or reinterpret the principles to better handle specific cases.
  • Prioritize principles: Establish a hierarchy among the principles to resolve conflicts.
  • Supplement the principles: Introduce additional considerations or context-specific factors.
  • Limit the scope: Acknowledge that the principles may not apply universally to all cultures or situations.
Herissone-Kelly argues that none of these strategies are fully satisfactory. Refining or prioritizing principles risks distorting their original meaning or introducing arbitrariness. Supplementing them can lead to an unwieldy and complex framework. Limiting their scope undermines the theory's claim to universality.

He concludes that the most viable approach is to adopt a non-realist view of the common morality's principles. This means understanding them not as objective moral facts but as flexible tools for ethical reflection and deliberation, open to interpretation and adaptation in different contexts. While this may seem to weaken the theory's authority, Herissone-Kelly argues that it allows for a more nuanced and practical application of ethical principles in a diverse world.

Monday, February 5, 2024

Should Patients Be Allowed to Die From Anorexia? Is a 'Palliative' Approach to Mental Illness Ethical?

Katie Engelhart
New York Times Magazine
Originally posted 3 Jan 24

Here is an excerpt:

He came to think that he had been impelled by a kind of professional hubris — a hubris particular to psychiatrists, who never seemed to acknowledge that some patients just could not get better. That psychiatry had actual therapeutic limits. Yager wanted to find a different path. In academic journals, he came across a small body of literature, mostly theoretical, on the idea of palliative psychiatry. The approach offered a way for him to be with patients without trying to make them better: to not abandon the people who couldn’t seem to be fixed. “I developed this phrase of ‘compassionate witnessing,’” he told me. “That’s what priests did. That’s what physicians did 150 years ago when they didn’t have any tools. They would just sit at the bedside and be with somebody.”

Yager believed that a certain kind of patient — maybe 1 or 2 percent of them — would benefit from entirely letting go of standard recovery-oriented care. Yager would want to know that such a patient had insight into her condition and her options. He would want to know that she had been in treatment in the past, not just once but several times. Still, he would not require her to have tried anything and everything before he brought her into palliative care. Even a very mentally ill person, he thought, was allowed to have ideas about what she could and could not tolerate.

If the patient had a comorbidity, like depression, Yager would want to know that it was being treated. Maybe, for some patients, treating their depression would be enough to let them keep fighting. But he wouldn’t insist that a person be depression-free before she left standard treatment. Not all depression can be cured, and many people are depressed and make decisions for themselves every day. It would be Yager’s job to tease out whether what the patient said she wanted was what she authentically desired, or was instead an expression of pathological despair. Or more: a suicidal yearning. Or something different: a cry for help. That was always part of the job: to root around for authenticity in the morass of a disease.


Some thoughts:

The question of whether patients with anorexia nervosa should be allowed to die from their illness or receive palliative care is a complex and emotionally charged one, lacking easy answers. It delves into the profound depths of autonomy, mental health, and the very meaning of life itself.

The Anorexic's Dilemma:

Anorexia nervosa is a severe eating disorder characterized by a relentless pursuit of thinness and an intense fear of weight gain. It often manifests in severe food restriction, excessive exercise, and distorted body image. This relentless control, however, comes at a devastating cost. Organ failure, malnutrition, and even death can be the tragic consequences of the disease's progression.

Palliative Care: Comfort Not Cure:

Palliative care focuses on symptom management and improving quality of life for individuals with life-threatening illnesses. In the context of anorexia, it would involve addressing physical discomfort, emotional distress, and spiritual concerns, but without actively aiming for weight gain or cure. This raises numerous ethical and practical questions:
  • Respecting Autonomy: Does respecting a patient's autonomy mean allowing them to choose a path that may lead to death, even if their decision is influenced by a mental illness?
  • The Line Between Choice and Coercion: How do we differentiate between a genuine desire for death and succumbing to the distorted thinking patterns of anorexia?
  • Futility vs. Hope: When is treatment considered futile, and when should hope for recovery, however slim, be prioritized?

Finding the Middle Ground:

There's no one-size-fits-all answer to this intricate dilemma. Each case demands individual consideration, taking into account the patient's mental capacity, level of understanding, and potential for recovery. Open communication, involving the patient, their family, and a multidisciplinary team of healthcare professionals, is crucial in navigating this sensitive terrain.

Potential Approaches:
  • Enhanced Supportive Care: Focusing on improving the patient's quality of life through pain management, emotional support, and addressing underlying psychological issues.
  • Conditional Palliative Care: Providing palliative care while continuing to offer and encourage life-sustaining treatment, with the possibility of transitioning back to active recovery if the patient shows signs of willingness.
  • Advance Directives: Encouraging patients to discuss their wishes and preferences beforehand, allowing for informed decision-making when faced with difficult choices.

Thursday, January 25, 2024

Listen, explain, involve, and evaluate: why respecting autonomy benefits suicidal patients

Samuel J. Knapp (2024)
Ethics & Behavior, 34:1, 18-27
DOI: 10.1080/10508422.2022.2152338

Abstract

Out of a concern for keeping suicidal patients alive, some psychotherapists may use hard persuasion or coercion to keep them in treatment. However, more recent evidence-supported interventions have made respect for patient autonomy a cornerstone, showing that the effective interventions that promote the wellbeing of suicidal patients also prioritize respect for patient autonomy. This article details how psychotherapists can incorporate respect for patient autonomy in the effective treatment of suicidal patients by listening to them, explaining treatments to them, involving them in decisions, and inviting evaluations from them on the process and progress of their treatment. It also describes how processes that respect patient autonomy can supplement interventions that directly address some of the drivers of suicide.

Public Impact Statement

Treatments for suicidal patients have improved in recent years, in part, because they emphasize promoting patient autonomy. This article explains why respecting patient autonomy is important in the treatment of suicidal patients and how psychotherapists can integrate respect for patient autonomy in their treatments.


Dr. Knapp's article discusses the importance of respecting patient autonomy in the treatment of suicidal patients within the framework of principle-based ethics. It highlights the ethical principles of beneficence, nonmaleficence, justice, and respect for patient autonomy, along with the obligations of the professional-patient relationship. The article emphasizes the challenges psychotherapists face in balancing the promotion of patient well-being with the need to respect autonomy, especially when dealing with suicidal patients.

Fear and stress in treating suicidal patients may lead psychotherapists to prioritize more restrictive interventions, potentially disregarding patient autonomy. The article argues that actions minimizing respect for patient autonomy may reflect a paternalistic attitude, that is, implementing interventions without patient consent for the sake of the patient's well-being.

The problems associated with paternalistic interventions are discussed, emphasizing the importance of patients' internal motivation to change. The article advocates for autonomy-focused interventions, such as cognitive behavior therapy and dialectical behavior therapy, which have been shown to reduce suicide risk and improve outcomes. It suggests that involving patients in treatment decisions, listening to their experiences, and validating their feelings contribute to more effective interventions.

The article provides recommendations on how psychotherapists can respect patient autonomy, including listening carefully to patients, explaining treatment processes, involving patients in decisions, and inviting them to evaluate their progress. The ongoing nature of the informed consent process is stressed, along with the benefits of incorporating patient feedback into treatment. The article concludes by acknowledging the need for a balance between beneficence and respect for patient autonomy, particularly in cases of imminent danger, where temporary prioritization of beneficence may be necessary.

In summary, the article underscores the significance of respecting patient autonomy in the treatment of suicidal patients and provides practical guidance for psychotherapists to achieve this while promoting patient well-being.

Saturday, November 25, 2023

An autonomy-based approach to assisted suicide: a way to avoid the expressivist objection against assisted dying laws

Braun, E.
Journal of Medical Ethics 
2023;49:497-501

Abstract

In several jurisdictions, irremediable suffering from a medical condition is a legal requirement for access to assisted dying. According to the expressivist objection, allowing assisted dying for a specific group of persons, such as those with irremediable medical conditions, expresses the judgment that their lives are not worth living. While the expressivist objection has often been used to argue that assisted dying should not be legalised, I show that there is an alternative solution available to its proponents. An autonomy-based approach to assisted suicide regards the provision of assisted suicide (but not euthanasia) as justified when it is autonomously requested by a person, irrespective of whether this is in her best interests. Such an approach has been put forward by a recent judgment of the German Federal Constitutional Court, which understands assisted suicide as an expression of the person’s right to a self-determined death. It does not allow for beneficence-based restrictions regarding the person’s suffering or medical diagnosis and therefore avoids the expressivist objection. I argue that on an autonomy-based approach, assisted suicide should not be understood as a medical procedure but rather as the person’s autonomous action.

Conclusion

Assuming that the expressivist argument is valid, it only applies to (partly) beneficence-based approaches to assisted dying that require irremediable suffering. An autonomy-based approach to assisted suicide, as put forward by the German Federal Constitutional Court, avoids the expressivist objection. It understands assisted suicide as an act justified by autonomy and does not imply objective judgments of whether the person’s life is worth living. I have argued that on an autonomy-based approach, assisted suicide should not be understood as a medical intervention but rather as an autonomous action that does not invoke traditional medical principles such as beneficence.


Said differently: 

The article argues that an autonomy-based approach to assisted suicide can avoid the expressivist objection against assisted dying laws. The expressivist objection holds that allowing assisted dying for a specific group, such as people with irremediable medical conditions, expresses the judgment that their lives are not worth living. The author argues that this objection does not apply to an approach that grounds assisted suicide solely in a person's autonomous request, irrespective of diagnosis or suffering, because such an approach makes no judgment about whose life is worth living.  (Autonomy > beneficence)

Sunday, September 24, 2023

Consent GPT: Is It Ethical to Delegate Procedural Consent to Conversational AI?

Allen, J., Earp, B., Koplin, J. J., & Wilkinson, D.

Abstract

Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (e.g. junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of ways. One possible solution to this clinical dilemma is through the use of conversational artificial intelligence (AI) using large language models (LLMs). There is considerable interest in the potential benefits of such models in medicine. For delegated procedural consent, LLMs could improve patients’ access to the relevant procedural information and therefore enhance informed decision-making.

In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine, concluding that, at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.

-------------

Here are some additional points from the article:
  • The authors argue that the current system of delegating procedural consent to human consent-takers is not always effective, as consent-takers may lack sufficient time or clinical knowledge to adequately promote patient autonomy and informed decision-making.
  • They suggest that LLMs could be used to provide patients with more comprehensive and accurate information about procedures, and to answer patients' questions in a way that is tailored to their individual needs.
  • However, the authors also acknowledge that there are a number of ethical concerns that need to be addressed before LLMs can be used for procedural consent. These include concerns about bias, accuracy, and patient trust.
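
To make the proposal concrete, here is a minimal sketch of how a procedural-consent Q&A assistant might be scaffolded around a general-purpose LLM, written in Python. This is purely illustrative and is not the system the paper describes: the model name, system prompt, and escalation rule are all assumptions added here.

# Illustrative sketch only: a consent-support assistant constrained to
# vetted patient information. Not the authors' system; the model name,
# prompt, and escalation rule are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

SYSTEM_PROMPT = (
    "You answer patients' questions about an upcoming procedure using "
    "only the vetted information leaflet below. If a question falls "
    "outside the leaflet, or the patient seems distressed or unsure, "
    "say so and refer them to the clinical team. Never pressure the "
    "patient toward consenting.\n\nLEAFLET:\n{leaflet}"
)

def answer_consent_question(leaflet: str, question: str) -> str:
    """Answer one patient question, constrained to vetted material."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT.format(leaflet=leaflet)},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers close to the source material
    )
    return response.choices[0].message.content

A design along these lines speaks to the tailoring point above: patients can ask follow-up questions in their own words, while the factual content stays anchored to clinician-approved material rather than the model's open-ended knowledge.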

Thursday, August 24, 2023

The Limits of Informed Consent for an Overwhelmed Patient: Clinicians’ Role in Protecting Patients and Preventing Overwhelm

J. Bester, C.M. Cole, & E. Kodish.
AMA J Ethics. 2016;18(9):869-886.
doi: 10.1001/journalofethics.2016.18.9.peer2-1609.

Abstract

In this paper, we examine the limits of informed consent with particular focus on ways in which various factors can overwhelm decision-making capacity. We introduce overwhelm as a phenomenon commonly experienced by patients in clinical settings and distinguish between emotional overwhelm and informational overload. We argue that in these situations, a clinician’s primary duty is prevention of harm and suggest ways in which clinicians can discharge this obligation. To illustrate our argument, we consider the clinical application of genetic sequencing testing, which involves scientific and technical information that can compromise the understanding and decisional capacity of most patients. Finally, we consider and rebut objections that this could lead to paternalism.

(cut)

Overwhelm and Information Overload

The claim we defend is a simple one: there are medical situations in which the information involved in making a decision is of such a nature that the decision-making capacity of a patient is overwhelmed by the sheer complexity or volume of information at hand. In such cases a patient cannot attain the understanding necessary for informed decision making, and informed consent is therefore not possible. We will support our thesis regarding informational overload by focusing specifically on the area of clinical whole genome sequencing—i.e., identification of an individual’s entire genome, enabling the identification and interaction of multiple genetic variants—as distinct from genetic testing, which tests for specific genetic variants.

We will first present ethical considerations regarding informed consent. Next, we will present three sets of factors that can burden the capacity of a patient to provide informed consent for a specific decision—patient, communication, and information factors—and argue that these factors may in some circumstances make it impossible for a patient to provide informed consent. We will then discuss emotional overwhelm and informational overload and consider how being overwhelmed affects informed consent. Our interest in this essay is mainly in informational overload; we will therefore consider whole genome sequencing as an example in which informational factors overwhelm a patient’s decision-making capacity. Finally, we will offer suggestions as to how the duty to protect patients from harm can be discharged when informed consent is not possible because of emotional overwhelm or informational overload.

(cut)

How should clinicians respond to such situations?

Surrogate decision making. One possible solution to the problem of informed consent when decisional capacity is compromised is to seek a surrogate decision maker. However, in situations of informational overload, this may not solve the problem. If the information has inherent qualities that would overwhelm a reasonable patient, it is likely to also overwhelm a surrogate. Unless the surrogate decision maker is a content expert who also understands the values of the patient, a surrogate decision maker will not solve the problem of informed consent. Surrogate decision making may, however, be useful for the emotionally overwhelmed patient who remains unable to provide informed consent despite additional support.

Shared decision making. Another possible solution is to make use of shared decision making (SDM). This approach relies on deliberation between clinician and patient regarding available health care choices, taking the best evidence into account. The clinician actively involves the patient and elicits patient values. The goal of SDM is often stated as helping patients arrive at informed decisions that respect what matters most to them.

It is not clear, however, that SDM will be successful in facilitating informed decisions when an informed consent process has failed. SDM as a tool for informed decision making is at its core dependent on the patient understanding the options presented and being able to describe the preferred option. Understanding and deliberating about what is at stake for each option is a key component of this use of SDM. Therefore, if the medical information is so complex that it overloads the patient’s decision-making capacity, SDM is unlikely to achieve informed decision making. But if a patient is emotionally overwhelmed by the illness experience and all that accompanies it, a process of SDM and support for the patient may eventually facilitate informed decision making.

Friday, May 12, 2023

‘Mind-reading’ AI: Japan study sparks ethical debate

David McElhinney
Aljazeera.com
Originally posted 7 APR 2023

Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy'”.

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.

The AI could do this despite not being shown the pictures in advance or trained in any way to manufacture the results.

“We really didn’t expect this kind of result,” Takagi said.

Takagi stressed that the breakthrough does not, at this point, represent mind-reading – the AI can only produce images a person has viewed.

“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”

“We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”


Note: If AI systems can decode human thoughts, it could infringe upon people's privacy and autonomy. There are concerns that this technology could be used for invasive surveillance or to manipulate people's thoughts and behavior. Additionally, there are concerns about how this technology could be used in legal proceedings and whether it violates human rights.

Tuesday, February 28, 2023

Transformative experience and the right to revelatory autonomy

Farbod Akhlaghi
Analysis
Originally Published: 31 December 2022

Abstract

Sometimes it is not us but those to whom we stand in special relations that face transformative choices: our friends, family or beloved. A focus upon first-personal rational choice and agency has left crucial ethical questions regarding what we owe to those who face transformative choices largely unexplored. In this paper I ask: under what conditions, if any, is it morally permissible to interfere to try to prevent another from making a transformative choice? Some seemingly plausible answers to this question fail precisely because they concern transformative experiences. I argue that we have a distinctive moral right to revelatory autonomy grounded in the value of autonomous self-making. If this right is outweighed then, I argue, interfering to prevent another making a transformative choice is permissible. This conditional answer lays the groundwork for a promising ethics of transformative experience.

Conclusion

Ethical questions regarding transformative experiences are morally urgent. A complete answer to our question requires ascertaining precisely how strong the right to revelatory autonomy is and what competing considerations can outweigh it. These are questions for another time, where the moral significance of revelation and self-making, the competing weight of moral and non-moral considerations, and the sense in which some transformative choices are more significant to one’s identity and self-making than others must be further explored.

But to identify the right to revelatory autonomy and duty of revelatory non-interference is significant progress. For it provides a framework to address the ethics of transformative experience that avoids complications arising from the epistemic peculiarities of transformative experiences. It also allows us to explain cases where we are permitted to interfere in another’s transformative choice and why interference in some choices is harder to justify than others, whilst recognizing plausible grounds for the right to revelatory autonomy itself in the moral value of autonomous self-making. This framework, moreover, opens novel avenues of engagement with wider ethical issues regarding transformative experience, for example concerning social justice or surrogate transformative choice-making. It is, at the very least, a view worthy of further consideration.


This reasoning applies to psychologists in psychotherapy. Unless significant danger is present, psychologists need to avoid intrusive advocacy, which pulls autonomy away from the patient. Soft paternalism may be justified in psychotherapy only when trying to avoid significant harm.

Monday, January 30, 2023

Abortion Access Tied to Suicide Rates Among Young Women

Michael DePeau-Wilson
MedPage Today
Originally posted 28 DEC 22

Restrictions on access to reproductive care were associated with higher suicide rates among women of reproductive age, researchers found.

In a longitudinal ecologic study using state-based data from 1974 to 2016, enforcement of Targeted Regulation of Abortion Providers (TRAP) laws was associated with higher suicide rates among reproductive-age women (β=0.17, 95% CI 0.03-0.32, P=0.02) but not among women of post-reproductive age, according to Ran Barzilay, MD, PhD, of the University of Pennsylvania in Philadelphia, and colleagues.

Nor was enforcement of TRAP laws associated with deaths due to motor vehicle crashes, they reported in JAMA Psychiatry.

Additionally, enforcement of a TRAP law was associated with a 5.81% higher annual rate of suicide than in pre-enforcement years, the researchers found.

"Taken together, the results suggest that the association between restricting access to abortion and suicide rates is specific to the women who are most affected by this restriction, which are young women," Barzilay told MedPage Today.

Barzilay said their study "can inform, number one, clinicians working with young women to be aware that this is a macro-level suicide risk factor in this population. And number two, that it informs policymakers as they allocate resources for suicide prevention. And number three, that it informs the ethical, divisive debate regarding access to abortion."

In an accompanying editorial, Tyler VanderWeele, PhD, of Harvard T.H. Chan School of Public Health in Boston, wrote that while analyses of this type are always subject to the possibility of changes in trends being attributable to some third factor, Barzilay and colleagues did "control for a number of reasonable candidates and conducted sensitivity analyses indicating that these associations were observed for reproductive-aged women but not for a control group of older women of post-reproductive age."

VanderWeele wrote the findings do suggest that a "not inconsiderable" number of women might be dying by suicide in part because of a lack of access to abortion services, and that "the increase is cause for clinical concern."

But while more research "might contribute more to our understanding," VanderWeele wrote, its role in the legal debates around abortion "seems less clear. Regardless of whether one is looking at potential adverse effects of access restrictions or of abortion, the abortion and mental health research literature will not resolve the more fundamental and disputed moral questions."

"Debates over abortion access are likely to remain contentious in this country and others," he wrote. "However, further steps can nevertheless be taken in finding common ground to promote women's mental health and healthcare."

For their "difference-in-differences" analysis, Barzilay and co-authors relied on data from the TRAP laws index to measure abortion access, and assessed suicide data from CDC's WONDER database in a new tab or window database.

Saturday, January 7, 2023

Artificial intelligence and consent: a feminist anti-colonial critique

Varon, J., & Peña, P. (2021). 
Internet Policy Review, 10(4).
https://doi.org/10.14763/2021.4.1602

Abstract

Feminist theories have extensively debated consent in sexual and political contexts. But what does it mean to consent when we are talking about our data bodies feeding artificial intelligence (AI) systems? This article builds a feminist and anti-colonial critique about how an individualistic notion of consent is being used to legitimate practices of the so-called emerging Digital Welfare States, focused on digitalisation of anti-poverty programmes. The goal is to expose how the functional role of digital consent has been enabling data extractivist practices for control and exclusion, another manifestation of colonialism embedded in cutting-edge digital technology.

Here is an excerpt:

Another important criticism of this traditional idea of consent in sexual relationships is the forced binarism of yes/no. According to Gira Grant (2016), consent is not only given but also is built from multiple factors such as the location, the moment, the emotional state, trust, and desire. In fact, for this author, the example of sex workers could demonstrate how desire and consent are different, although sometimes confused as the same. For her there are many things that sex workers do without necessarily wanting to. However, they give consent for legitimate reasons.

It is also important how we express consent. For feminists such as Fraisse (2012), there is no consent without the body. In other words, consent has a relational and communication-based (verbal and nonverbal) dimension where power relationships matter (Tinat, 2012; Fraisse, 2012). This is very relevant when we discuss “tacit consent” in sexual relationships. In another dimension of how we express consent, Fraisse (2012) distinguishes between choice (the consent that is accepted and adhered to) and coercion (the "consent" that is allowed and endured).

According to Fraisse (2012), the critical view of consent that is currently claimed by feminist theories is not consent as a symptom of contemporary individualism; it has a collective approach through the idea of “the ethics of consent”, which provides attention to the "conditions" of the practice; the practice adapted to a contextual situation, therefore rejecting universal norms that ignore the diversified conditions of domination (Fraisse, 2012).

In the same sense, Lucia Melgar (2012) asserts that, in the case of sexual consent, it is not just an individual right, but a collective right of women to say "my body is mine" and from there it claims freedom to all bodies. As Sarah Ahmed (2017, n.p.) states “for feminism: no is a political labor”. In other words, “if your position is precarious you might not be able to afford no. [...] This is why the less precarious might have a political obligation to say no on behalf of or alongside those who are more precarious”. Referring to Éric Fassin, Fraisse (2012) understands that in this feminist view, consent will not be “liberal” anymore (as a refrain of the free individual), but “radical”, because, as Fassin would call, seeing in a collective act, it could function as some sort of consensual exchange of power.

Saturday, December 3, 2022

Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence

Nussberger, A. M., Luo, L., Celis, L. E., & Crockett, M. J. (2022).
Nature Communications, 13(1), 5821.

Abstract

As Artificial Intelligence (AI) proliferates across important social institutions, many of the most powerful AI systems available are difficult to interpret for end-users and engineers alike. Here, we sought to characterize public attitudes towards AI interpretability. Across seven studies (N = 2475), we demonstrate robust and positive attitudes towards interpretable AI among non-experts that generalize across a variety of real-world applications and follow predictable patterns. Participants value interpretability positively across different levels of AI autonomy and accuracy, and rate interpretability as more important for AI decisions involving high stakes and scarce resources. Crucially, when AI interpretability trades off against AI accuracy, participants prioritize accuracy over interpretability under the same conditions driving positive attitudes towards interpretability in the first place: amidst high stakes and scarce resources. These attitudes could drive a proliferation of AI systems making high-impact ethical decisions that are difficult to explain and understand.


Discussion

In recent years, academics, policymakers, and developers have debated whether interpretability is a fundamental prerequisite for trust in AI systems. However, it remains unknown whether non-experts–who may ultimately comprise a significant portion of end-users for AI applications–actually care about AI interpretability, and if so, under what conditions. Here, we characterise public attitudes towards interpretability in AI across seven studies. Our data demonstrates that people consider interpretability in AI to be important. Even though these positive attitudes generalise across a host of AI applications and show systematic patterns of variation, they also seem to be capricious. While people valued interpretability as similarly important for AI systems that directly implemented decisions and AI systems recommending a course of action to a human (Study 1A), they valued interpretability more for applications involving higher (relative to lower) stakes and for applications determining access to scarce (relative to abundant) resources (Studies 1A-C, Study 2). And while participants valued AI interpretability across all levels of AI accuracy when considering the two attributes independently (Study 3A), they sacrificed interpretability for accuracy when these two attributes traded off against one another (Studies 3B–C). Furthermore, participants favoured accuracy over interpretability under the same conditions that drove importance ratings of interpretability in the first place: when stakes are high and resources are scarce.

Our findings highlight that high-stakes applications, such as medical diagnosis, will generally be met with enhanced requirements towards AI interpretability. Notably, this sensitivity to stakes parallels magnitude-sensitivity as a foundational process in the cognitive appraisal of outcomes. The impact of stakes on attitudes towards interpretability were apparent not only in our experiments that manipulated stakes within a given AI-application, but also in absolute and relative levels of participants’ valuation of interpretability across applications–take, for instance, ‘hurricane first aid’ and ‘vaccine allocation’ outperforming ‘hiring decisions’, ‘insurance pricing’, and ‘standby seat prioritizing’. Conceivably, this ordering would also emerge if we ranked the applications according to the scope of auditing- and control-measures imposed on human executives, reflecting interpretability’s essential capacity of verifying appropriate and fair decision processes.

Monday, November 21, 2022

AI Isn’t Ready to Make Unsupervised Decisions

Joe McKendrick and Andy Thurai
Harvard Business Review
Originally published September 15, 2022

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

Consider the “trolley problem” — a hypothetical social scenario, formulated long before AI came into being, in which a decision has to be made whether to alter the route of an out-of-control streetcar heading towards a disaster zone. The decision that needs to be made — in a split second — is whether to switch from the original track where the streetcar may kill several people tied to the track, to an alternative track where, presumably, a single person would die.

While there are many other analogies that can be made about difficult decisions, the trolley problem is regarded as the pinnacle exhibition of ethical and moral decision making. Can it be applied to AI systems to measure whether AI is ready for the real world, in which machines would need to think independently and make the same justifiable ethical and moral decisions that humans would make?

Trolley problems in AI come in all shapes and sizes, and decisions don’t necessarily need to be so deadly — though the decisions AI renders could mean trouble for a business, individual, or even society at large. One of the co-authors of this article recently encountered his own AI “trolley moment,” during a stay in an Airbnb-rented house in upstate New Hampshire. Despite amazing preview pictures and positive reviews, the place was poorly maintained and a dump with condemned adjacent houses. The author was going to give the place a low one-star rating and a negative review, to warn others considering a stay.

However, on the second morning of the stay, the host of the house, a sweet and caring elderly woman, knocked on the door, inquiring if the author and his family were comfortable and if they had everything they needed. During the conversation, the host offered to pick up some fresh fruits from a nearby farmers market. She also said that because she doesn't have a car, she would walk a mile to a friend's place, and the friend would then drive her to the market. She also described her hardships over the past two years, as rentals slumped due to Covid, and said that she is caring for someone sick full time.

Upon learning this, the author elected not to post the negative review. While the initial decision — to write a negative review — was based on facts, the decision not to post the review was purely a subjective human decision. In this case, the trolley problem was concern for the welfare of the elderly homeowner superseding consideration for the comfort of other potential guests.

How would an AI program have handled this situation? Likely not as sympathetically for the homeowner. It would have delivered a fact-based decision without empathy for the human lives involved.

Tuesday, September 27, 2022

Beyond individualism: Is there a place for relational autonomy in clinical practice and research?

Dove, E. S., Kelly, S. E., et al. (2017).
Clinical Ethics, 12(3), 150–165.
https://doi.org/10.1177/1477750917704156

Abstract

The dominant, individualistic understanding of autonomy that features in clinical practice and research is underpinned by the idea that people are, in their ideal form, independent, self-interested and rational gain-maximising decision-makers. In recent decades, this paradigm has been challenged from various disciplinary and intellectual directions. Proponents of ‘relational autonomy’ in particular have argued that people’s identities, needs, interests – and indeed autonomy – are always also shaped by their relations to others. Yet, despite the pronounced and nuanced critique directed at an individualistic understanding of autonomy, this critique has had very little effect on ethical and legal instruments in clinical practice and research so far. In this article, we use four case studies to explore to what extent, if at all, relational autonomy can provide solutions to ethical and practical problems in clinical practice and research. We conclude that certain forms of relational autonomy can have a tangible and positive impact on clinical practice and research. These solutions leave the ultimate decision to the person most affected, but encourage and facilitate the consideration of this person’s care and responsibility for connected others.

From the Discussion section

Together, these cases show that in our quest to enhance the practical value of the concept of relational autonomy in healthcare and research, we must be careful not to remove the patient or participant from the centre of decision-making. At the same time, we should acknowledge that the patient’s decision to consent (or refuse) to treatment or research can be augmented by facilitating and encouraging that her relations to, and responsibility for, others are considered in decision-making processes. Our case studies do not suggest that we should expand consent requirements to others per se, such as family members or community elders – that is, to add the requirement of seeking consent from further individuals who may also be seen as having a stake in the decision. Such a position would undermine the idea that the person who is centrally affected by a decision should typically have the final say in what happens with and to her, or her body, or even her data. As long as this general principle respects all legal exceptions (see below), we believe that it is a critical underpinning of fundamental respect for persons that should not be done away with. Moreover, expanding consent or requiring consent to include others (however so defined) undermines the main objective of relational autonomy, which is to foreground the relational aspect of human identities and interests, and not merely to expand the range of individuals who need to give consent to a procedure. An approach that merely extends consent requirements to other people does not foreground relations but rather presumptions about who the relevant others of a person are.

Wednesday, July 6, 2022

What happens if you want access to voluntary assisted dying but your nursing home won’t let you?

Neera Bhatia & Charles Corke
The Conversation
Originally published 30 MAY 22

Voluntary assisted dying is now lawful in all Australian states. There is also widespread community support for it.

Yet some residential institutions, such as hospices and aged-care facilities, are obstructing access despite the law not specifying whether they have the legal right to do so.

As voluntary assisted dying is implemented across the country, institutions blocking access to it will likely become more of an issue.

So addressing this will help everyone – institutions, staff, families and, most importantly, people dying in institutions who wish to have control of their end.

The many ways to block access
While voluntary assisted dying legislation recognises the right of doctors to conscientiously object to it, the law is generally silent on the rights of institutions to do so.
While the institution where someone lives has no legislated role in voluntary assisted dying, it can refuse access in various ways, including:
  • restricting staff responding to a discussion a resident initiates about voluntary assisted dying
  • refusing access to health professionals to facilitate it, and
  • requiring people who wish to pursue the option to leave the facility.
(cut)

What could we do better?

1. Institutions need to be up-front about their policies

Institutions need to be completely open about their policies on voluntary assisted dying and whether they would obstruct any such request in the future. This is so patients and families can factor this into deciding on an institution in the first place.

2. Institutions need to consult their stakeholders

Institutions should consult their stakeholders about their policy with a view to creating a “safe” environment for residents and staff – for those who want access to voluntary assisted dying or who wish to support it, and for those who don’t want it and find it confronting.

3. Laws need to change

Future legislation should define the extent of an institution’s right to obstruct a resident’s right to access voluntary assisted dying.

Monday, June 27, 2022

Confidence in U.S. Supreme Court Sinks to Historic Low

Jeffrey Jones
Gallup.com
Originally posted 23 JUN 22

Story Highlights
  • 25% of Americans have confidence in Supreme Court, down from 36% in 2021
  • Current reading is five percentage points lower than prior record low
  • Confidence is down among Democrats and independents this year
With the U.S. Supreme Court expected to overturn the 1973 Roe v. Wade decision before the end of its 2021-2022 term, Americans' confidence in the court has dropped sharply over the past year and reached a new low in Gallup's nearly 50-year trend. Twenty-five percent of U.S. adults say they have "a great deal" or "quite a lot" of confidence in the U.S. Supreme Court, down from 36% a year ago and five percentage points lower than the previous low recorded in 2014.

These results are based on a June 1-20 Gallup poll that included Gallup's annual update on confidence in U.S. institutions. The survey was completed before the end of the court's term and before it issued its major rulings for that term. Many institutions have suffered a decline in confidence this year, but the 11-point drop in confidence in the Supreme Court is roughly double what it is for most institutions that experienced a decline. Gallup will release the remainder of the confidence in institutions results in early July.

The Supreme Court is likely to issue a ruling in the Dobbs v. Jackson Women's Health Organization case before its summer recess. The decision will determine the constitutionality of a Mississippi law that would ban most abortions after 15 weeks of pregnancy. A leaked draft majority opinion in the case suggests that the high court will not only allow the Mississippi law to stand, but also overturn Roe v. Wade, the 1973 court ruling that prohibits restrictions on abortion during the first trimester of pregnancy. Americans oppose overturning Roe by a nearly 2-to-1 margin.

In September, Gallup found the Supreme Court's job approval rating at a new low and public trust in the judicial branch of the federal government down sharply. These changes occurred after the Supreme Court declined to block a Texas law banning most abortions after six weeks of pregnancy, among other controversial decisions at that time. Given these prior results, it is unclear if the drop in confidence in the Supreme Court measured in the current poll is related to the anticipated Dobbs decision or had occurred several months before the leak.

Thursday, June 23, 2022

Thousands of Medical Professionals Urge Supreme Court To Uphold Roe: ‘Provide Patients With the Treatment They Need’

Phoebe Kolbert
Ms. Magazine
Originally posted 21 JUN 22

Any day now, the Supreme Court will issue its decision in Dobbs v. Jackson Women’s Health Organization, which many predict will overturn or severely gut Roe v. Wade. Since the start of the Dobbs v. Jackson hearings in December, medical professionals have warned of the drastic health impacts brought on by abortion bans. Now, over 2,500 healthcare professionals from all 50 states have signed a letter urging the Supreme Court to scrap their leaked Dobbs draft opinion and uphold Roe.  

Within 30 days of a decision to overturn Roe, at least 26 states will ban abortion. Clinics in remaining pro-abortion states are preparing for increased violence from anti-abortion extremists and an influx of out-of-state patients. The number of legal abortions performed nationwide is projected to fall by about 13 percent. Many abortion clinics in states with bans will be forced to close their doors, if they haven’t already. The loss of these clinics also comes with the loss of the other essential reproductive healthcare they provide, including STI screenings and treatment, birth control and cervical cancer screenings.

The letter, titled “Medical Professionals Urge Supreme Court to Uphold Roe v. Wade, Protect Abortion Access,” argues that decisions around pregnancy and abortion should be made by patients and their doctors, not the courts.


Here is how the letter begins:

Medical Professionals Urge Supreme Court to Uphold Roe v. Wade, Protect Abortion Access

As physicians and health care professionals, we are gravely concerned that the U.S. Supreme Court appears prepared to end the constitutional right to an abortion. We urge the Supreme Court to scrap their draft opinion, uphold the constitutional right to an abortion, and ensure that abortions remain legal nationwide, as allowed for in Roe v. Wade. In this moment of crisis, we want to make crystal clear the consequences to our patients’ health if they can no longer access abortions.

Abortions are safe, common and a critical part of health care and reproductive medicine. Medical professionals and medical associations agree, including the American Medical Association, the American College of Obstetricians and Gynecologists, the American Academy of Family Physicians, the American College of Nurse Midwives and many others.

Prohibiting access to safe and legal abortion has devastating implications for health care. Striking down Roe v. Wade would affect not just abortion access, but also maternal care as well as fertility treatments. Pregnancy changes a person’s physiology. These changes can potentially worsen existing diseases and medical conditions.

As physicians and medical professionals, we see the real-life consequences when an individual does not get the care that they know they need, including abortions. The woman who has suffered the violation and trauma of rape would be forced to carry a pregnancy.

Denying access to abortion to people who want one can adversely affect their health, safety and economic well-being, including delayed separation from a violent partner and a fourfold increase in the likelihood of falling into poverty. These outcomes can also have drastic impacts on their health.

Friday, June 17, 2022

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

S. Tolmeijer, M. Christen, et al.
In CHI Conference on Human Factors in Computing Systems (CHI '22), April 29-May 5, 2022, New Orleans, LA, USA. ACM.

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

From the Discussion Section

Design implications for ethical AI

In sum, we find that participants had slightly higher moral trust and more responsibility ascription towards human experts, but higher capacity trust, overall trust, and reliance on AI. These different perceived capabilities could be combined in some form of human-AI collaboration. However, lack of responsibility of the AI can be a problem when AI for ethical decision making is implemented. When a human expert is involved but has less autonomy, they risk becoming a scapegoat for the decisions that the AI proposed in case of negative outcomes.

At the same time, we find that the different levels of autonomy, i.e., the human-in-the-loop and human-on-the-loop settings, did not influence the trust people had, the responsibility they assigned (both to themselves and the respective experts), or the reliance they displayed. A large part of the discussion on usage of AI has focused on control and the level of autonomy that the AI gets for different tasks. However, our results suggest that this has less of an influence, as long as a human is appointed to be responsible in the end. Instead, an important focus of designing AI for ethical decision making should be on the different types of trust users show for a human vs. AI expert.

One conclusion from this finding, that the control conditions of AI may be of less relevance than expected, is that the focus of human-AI collaboration should be less on control and more on how the involvement of AI improves human ethical decision making. An important factor in that respect will be the time available for actual decision making: if time is short, AI advice or decisions should make clear which value was guiding in the decision process (e.g., maximizing the expected number of people to be saved irrespective of any characteristics of the individuals involved), such that the human decider can make (or evaluate) the decision in an ethically informed way. If time for deliberation is available, an AI decision support system could be designed in a way to counteract human biases in ethical decision making (e.g., point to the possibility that human deciders solely focus on utility maximization, in this way neglecting fundamental rights of individuals) such that those biases can become part of the deliberation process.

Sunday, May 8, 2022

MAID Without Borders? Oregon Drops the Residency Requirement

Nancy Berlinger
The Hastings Center: Bioethics
Originally posted 1 APR 22

Oregon, which legalized medical aid-in-dying (MAID) in 1997, has dropped the requirement that had limited MAID access to residents of the state. Under a settlement of a lawsuit filed in federal court by the advocacy group Compassion & Choices, Oregon public health officials will no longer apply or enforce this requirement as part of eligibility criteria for MAID.  The lawsuit was filed on behalf of an Oregon physician who challenged the state’s residency requirement and its consequences for his patients in neighboring Washington State.

In Oregon and in nine other jurisdictions – California, Colorado, the District of Columbia, Hawaii, Maine, New Jersey, New Mexico, Vermont, and Washington – with Oregon-type provisions (Montana has related but distinct case law), MAID eligibility criteria include being an adult with a life expectancy of six months or less; the capacity to make a voluntary medical decision; and the ability to self-administer lethal medication prescribed by a physician for the purpose of ending life. Because hospice eligibility criteria also include a six-month prognosis, all people who are eligible for MAID are already hospice-eligible, and most people who seek to use a provision are enrolled in hospice.

The legal and practical implications of this policy change are not yet known and are potentially complex. Advocates have called attention to potential legal risks associated with traveling to Oregon to gain access to MAID. For example, a family member or friend who accompanies a terminally ill person to Oregon could be liable under the laws of their state of residence for “assisting a suicide.”

What are the ethical and social implications of this policy change? Here are some preliminary thoughts:

First, it is unlikely that many people will travel to Oregon from states without MAID provisions. MAID is used by extremely small numbers of terminally ill people, and Oregon’s removal of its residency requirement did not change the multistep evaluation process to determine eligibility. To relocate to another state for the weeks that this process takes would not be practicable or financially feasible for many terminally ill, usually older, adults who are already receiving hospice care.

Friday, April 29, 2022

Navy Deputizes Psychologists to Enforce Drug Rules Even for Those Seeking Mental Health Help

Konstantin Toropin
Military.com
Originally posted 18 APR 22

In the wake of reports that a Navy psychologist played an active role in convicting a sailor of drug use after the sailor reached out for mental health assistance, the service is standing by its policy, which does not provide patients with confidentiality and could mean that seeking help has consequences for service members.

The case highlights a set of military regulations that, in vaguely defined circumstances, requires doctors to inform commanding officers of certain medical details, including drug tests, even if those tests are conducted for legitimate medical reasons necessary for adequate care. Allowing punishment when service members are looking for help could act as a deterrent in a community where mental health is still a taboo topic among many, despite recent leadership attempts to more openly discuss getting assistance.

On April 11, Military.com reported the story of a sailor and his wife who alleged that the sailor's command, the destroyer USS Farragut, was retaliating against him for seeking mental health help.

Jatzael Alvarado Perez went to a military hospital to get help for his mental health struggles. As part of his treatment, he was given a drug test that came back positive for cannabinoids -- the family of drugs associated with marijuana. Perez denies having used any substances, but the test resulted in a referral to the ship's chief corpsman.

Perez's wife, Carli Alvarado, shared documents with Military.com that were evidence in the sailor's subsequent nonjudicial punishment, showing that the Farragut found out about the results because the psychologist emailed the ship's medical staff directly, according to a copy of the email.

"I'm not sure if you've been tracking, but OS2 Alvarado Perez popped positive for cannabis while inpatient," read the email, written to the ship's medical chief. Navy policy prohibits punishment for a positive drug test when administered as part of regular medical care.

The email goes on to describe efforts by the psychologist to assist in obtaining a second test -- one that could be used to punish Perez.

"We are working to get him a command directed urinalysis through [our command] today," it added.