Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, May 8, 2022

MAID Without Borders? Oregon Drops the Residency Requirement

Nancy Berlinger
The Hastings Center: Bioethics
Originally posted 1 APR 22

Oregon, which legalized medical aid-in-dying (MAID) in 1997, has dropped the requirement that had limited MAID access to residents of the state. Under a settlement of a lawsuit filed in federal court by the advocacy group Compassion & Choices, Oregon public health officials will no longer apply or enforce this requirement as part of eligibility criteria for MAID.  The lawsuit was filed on behalf of an Oregon physician who challenged the state’s residency requirement and its consequences for his patients in neighboring Washington State.

In Oregon and in nine other jurisdictions – California, Colorado, the District of Columbia, Hawaii, Maine, New Jersey, New Mexico, Vermont, and Washington – with Oregon-type provisions (Montana has related but distinct case law), MAID eligibility criteria include being an adult with a life expectancy of six months or less; the capacity to make a voluntary medical decision; and the ability to self-administer lethal medication prescribed by a physician for the purpose of ending life. Because hospice eligibility criteria also include a six-month prognosis, all people who are eligible for MAID are already hospice-eligible, and most people who seek to use a provision are enrolled in hospice.

The legal and practical implications of this policy change are not yet known and are potentially complex. Advocates have called attention to potential legal risks associated with traveling to Oregon to gain access to MAID. For example, a family member or friend who accompanies a terminally ill person to Oregon could be liable under the laws of their state of residence for “assisting a suicide.”

What are the ethical and social implications of this policy change? Here are some preliminary thoughts:

First, it is unlikely that many people will travel to Oregon from states without MAID provisions. MAID is used by extremely small numbers of terminally ill people, and Oregon’s removal of its residency requirement did not change the multistep evaluation process to determine eligibility. To relocate to another state for the weeks that this process takes would not be practicable or financially feasible for many terminally ill, usually older, adults who are already receiving hospice care.

Friday, April 29, 2022

Navy Deputizes Psychologists to Enforce Drug Rules Even for Those Seeking Mental Health Help

Konstantin Toropin
Military.com
Originally posted 18 APR 22

In the wake of reports that a Navy psychologist played an active role in the drug-use conviction of a sailor who had reached out for mental health assistance, the service is standing by its policy, which does not provide patients with confidentiality and could mean that seeking help has consequences for service members.

The case highlights a set of military regulations that, in vaguely defined circumstances, require doctors to inform commanding officers of certain medical details, including drug test results, even when those tests are conducted for legitimate medical reasons necessary for adequate care. Allowing punishment when service members are looking for help could act as a deterrent in a community where mental health is still a taboo topic for many, despite recent leadership attempts to discuss getting assistance more openly.

On April 11, Military.com reported the story of a sailor and his wife who alleged that the sailor's command, the destroyer USS Farragut, was retaliating against him for seeking mental health help.

Jatzael Alvarado Perez went to a military hospital to get help for his mental health struggles. As part of his treatment, he was given a drug test that came back positive for cannabinoids -- the family of drugs associated with marijuana. Perez denies having used any substances, but the test resulted in a referral to the ship's chief corpsman.

Perez's wife, Carli Alvarado, shared documents with Military.com that were evidence in the sailor's subsequent nonjudicial punishment, showing that the Farragut found out about the results because the psychologist emailed the ship's medical staff directly, according to a copy of the email.

"I'm not sure if you've been tracking, but OS2 Alvarado Perez popped positive for cannabis while inpatient," read the email, written to the ship's medical chief. Navy policy prohibits punishment for a positive drug test when the test is administered as part of regular medical care.

The email goes on to describe efforts by the psychologist to assist in obtaining a second test -- one that could be used to punish Perez.

"We are working to get him a command directed urinalysis through [our command] today," it added.

Wednesday, March 16, 2022

Autonomy and the Folk Concept of Valid Consent

Demaree-Cotton, J., & Sommers, R. 
(2021, August 17). 
https://doi.org/10.31234/osf.io/p4w8g

Abstract

Consent governs innumerable everyday social interactions, including sex, medical exams, the use of property, and economic transactions. Yet little is known about how ordinary people reason about the validity of consent. Across the domains of sex, medicine, and police entry, Study 1 showed that when agents lack autonomous decision-making capacities, participants are less likely to view their consent as valid; however, failing to exercise this capacity and deciding in a nonautonomous way did not reduce consent judgments. Study 2 found that specific and concrete incapacities reduced judgments of valid consent, but failing to exercise these specific capacities did not, even when the consenter makes an irrational and inauthentic decision. Finally, Study 3 showed that the effect of autonomy on judgments of valid consent carries important downstream consequences for moral reasoning about the rights and obligations of third parties, even when the consented-to action is morally wrong. Overall, these findings suggest that laypeople embrace a normative, domain-general concept of valid consent that depends consistently on the possession of autonomous capacities, but not on the exercise of these capacities. Autonomous decisions and autonomous capacities thus play divergent roles in moral reasoning about consent interactions: while the former appears relevant for assessing the wrongfulness of consented-to acts, the latter plays a role in whether consent is regarded as authoritative and therefore as transforming moral rights.

Conclusion 

Before these studies, it remained an open possibility that “valid consent” as a rich and normatively complex force existed only as a technical concept used in philosophical, legal and academic domains. We found, however, that the folk concept of consent involves normative distinctions between valid and invalid consent that are sensitive to the consenter’s autonomy, even if the linguistic utterance of “yes” is held constant, and that this concept plays an important role in moral reasoning. 

Specifically, the studies presented here examined the relationship between autonomy and intuitive judgments of valid consent in several domains: medical procedures, sexual relations, police searches, and agreements between buyers and sellers. Across scenarios, we found that judgments of valid consent carried a specific relationship to autonomy: whether an agent possesses the mental capacity to make decisions in an autonomous way has a consistent impact on whether their consent is regarded as valid, and thus whether it is regarded as morally transformative of the rights and obligations of the consenter and of third parties. Yet, whether the agent in fact makes their decision in an autonomous, rational way—based on their own authentic values and what is right for them—has little impact on perceptions of consent or associated rights, although it has relevance for whether the consent-obtainer is acting wrongly. Autonomy thus has a subtle role in ordinary reasoning about morally transformative consent, where consent given by an agent with autonomous capacities has a distinctive role in downstream moral reasoning.

Saturday, February 5, 2022

Can Brain Organoids Be ‘Conscious’? Scientists May Soon Find Out

Anil Seth
Wired.com
Originally posted 20 DEC 21

Here is an excerpt:

The challenge here is that we are still not sure how to define consciousness in a fully formed human brain, let alone in a small cluster of cells grown in a lab. But there are some promising avenues to explore. One prominent candidate for a brain signature of consciousness is its response to a perturbation. If you stimulate a conscious brain with a pulse of energy, the electrical echo will reverberate in complex patterns over time and space. Do the same thing to an unconscious brain and the echo will be very simple—like throwing a stone into still water. The neuroscientist Marcello Massimini and his team at the University of Milan have used this discovery to detect residual or “covert” consciousness in behaviorally unresponsive patients with severe brain injury. What happens to brain organoids when stimulated this way remains unknown—and it is not yet clear how the results might be interpreted.

As brain organoids develop increasingly similar dynamics to those observed in conscious human brains, we will have to reconsider both what we take to be reliable brain signatures of consciousness in humans and what criteria we might adopt to ascribe consciousness to something made, not born.

The ethical implications of this are obvious. A conscious organoid might consciously suffer and we may never recognize its suffering since it cannot express anything.

Wednesday, January 19, 2022

On the Harm of Imposing Risk of Harm.

Maheshwari, K. (2021)
Ethical Theory and Moral Practice, 24, 965–980

Abstract

What is wrong with imposing pure risks, that is, risks that don’t materialize into harm? According to a popular response, imposing pure risks is pro tanto wrong, when and because risk itself is harmful. Call this the Harm View. Defenders of this view make one of the following two claims. On the Constitutive Claim, pure risk imposition is pro tanto wrong when and because risk itself diminishes one’s well-being (viz., through preference-frustration) or sets back one’s legitimate interest in autonomy. On the Contingent Claim, pure risk imposition is pro tanto wrong when and because risk has harmful consequences for the risk-bearers, such as psychological distress. This paper argues that the Harm View is plausible only on the Contingent Claim, but fails on the Constitutive Claim. In discussing the latter, I argue that both the preference and autonomy accounts fail to show that risk itself is constitutively harmful and thereby wrong. In discussing the former, I argue that risk itself is contingently harmful and thereby wrong, but only in a narrow range of cases. I conclude that while the Harm View can sometimes explain the wrong of imposing risk when (and because) risk itself is contingently harmful, it is unsuccessful as a general, exhaustive account of what makes pure risk imposition wrong.

Conclusions

In this paper, I have engaged in a detailed discussion of a prominent view in the ethics of risk imposition, namely the Harm View. I’ve argued that the Harm View is plausible only on the Contingent Claim, but fails on the Constitutive Claim. In discussing the Constitutive Claim, I’ve argued that the preference and autonomy accounts, as construed by Finkelstein (2003) and Oberdiek (2017) respectively, fail to show that risk itself is constitutively harmful, and thereby wrong. In vindicating the idea that risk itself is constitutively harmful, both accounts either trivialize or undermine the moral significance of risk, or admit of counter-intuitive implications in cases where risks materialize. In discussing the Contingent Claim, I’ve argued that risk itself is contingently harmful and thereby wrong only in a narrow range of cases. This makes the Harm View explanatorily limited in scope, thereby undermining its plausibility as a general, exhaustive account of what makes pure risk imposition wrong.

Sunday, November 14, 2021

A brain implant that zaps away negative thoughts

Nicole Karlis
Salon.com
Originally published 14 OCT 21

Here is an excerpt:

Still, the prospect of clinicians manipulating and redirecting one's thoughts, using electricity, raises potential ethical conundrums for researchers — and philosophical conundrums for patients. 

"A person implanted with a closed-loop system to target their depressive episodes could find themselves unable to experience some depressive phenomenology when it is perfectly normal to experience this outcome, such as a funeral," said Frederic Gilbert, Ph.D., a senior lecturer in ethics at the University of Tasmania, in an email to Salon. "A system program to administer a therapeutic response when detecting a specific biomarker will not capture faithfully the appropriateness of some context; automated invasive systems implanted in the brain might constantly step up in your decision-making . . . as a result, it might compromise you as a freely thinking agent."

Gilbert added there is the potential for misuse — and that raises novel moral questions. 

"There are potential degrees of misuse of some of the neuro-data pumping out of the brain (some believe these neuro-data may be our hidden and secretive thoughts)," Gilbert said. "The possibility of biomarking neuronal activities with AI introduces the plausibility to identify a large range of future applications (e.g. predicting aggressive outburst, addictive impulse, etc). It raises questions about the moral, legal and medical obligations to prevent foreseeable and harmful behaviour."

For these reasons, Gilbert added, it's important "at all costs" to "keep human control in the loop," in both activation and control of one's own neuro-data. 

Sunday, October 24, 2021

Evaluating Tradeoffs between Autonomy and Wellbeing in Supported Decision Making

Veit, W., Earp, B.D., Browning, H., Savulescu, J.
American Journal of Bioethics 
https://www.researchgate.net/publication/354327526 

A core challenge for contemporary bioethics is how to address the tension between respecting an individual’s autonomy and promoting their wellbeing when these ideals seem to come into conflict (Notini et al. 2020). This tension is often reflected in discussions of the ethical status of guardianship and other surrogate decision-making regimes for individuals with different kinds or degrees of cognitive ability and (hence) decision-making capacity (Earp and Grunt-Mejer 2021), specifically when these capacities are regarded as diminished or impaired along certain dimensions (or with respect to certain domains). The notion or practice of guardianship, wherein a guardian is legally appointed to make decisions on behalf of someone with different/diminished capacities, has been particularly controversial. For example, many people see guardianship as unjust, taking too much decisional authority away from the person under the guardian’s care (often due to prejudiced attitudes, as when people with certain disabilities are wrongly assumed to lack decision-making capacity); and as too rigid, for example, in making a blanket judgment about someone’s (lack of) capacity, thereby preventing them from making decisions even in areas where they have the requisite abilities (Glen 2015).

It is against this backdrop that Peterson, Karlawish, and Largent (2021) offer a useful philosophical framework for the notion of ‘supported decision-making’ as a compelling alternative for individuals with ‘dynamic impairments’ (i.e., non-static or domain-variant perceived impairments in decision-making capacity). In a similar spirit, we have previously argued that bioethics would benefit from a more case-sensitive rather than a ‘one-size-fits-all’ approach when it comes to issues of cognitive diversity (Veit et al. 2020; Chapman and Veit 2020). We therefore agree with most of the authors’ defence of supported decision-making, as this approach allows for case- and context-sensitivity. We also agree with the authors that the categorical condemnation of guardianships or similar arrangements is not justified, as this precludes such sensitivity. For instance, as the authors note, if a patient is in a permanent unaware/unresponsive state – i.e., with no current or foreseeable decision-making capacity or ability to exercise autonomy – then a guardianship-like regime may be the most appropriate means of promoting this person’s interests. A similar point can be made in relation to debates about intended human enhancement of embryos and children. Although some critics claim that such interventions violate the autonomy of the enhanced person, proponents may argue that respect for autonomy and consent do not apply in certain cases, for example, when dealing with embryos (see Veit 2018); alternatively, they may argue that interventions to enhance the (future) autonomy of a currently pre-autonomous (or partially autonomous) being can be justified on an enhancement framework without falling prey to such objections (see Earp 2019; Maslen et al. 2014).

Saturday, October 9, 2021

Nudgeability: Mapping Conditions of Susceptibility to Nudge Influence

de Ridder, D., Kroese, F., & van Gestel, L. (2021). 
Perspectives on Psychological Science
Advance online publication. 
https://doi.org/10.1177/1745691621995183

Abstract

Nudges are behavioral interventions to subtly steer citizens' choices toward "desirable" options. An important topic of debate concerns the legitimacy of nudging as a policy instrument, and there is a focus on issues relating to nudge transparency, the role of preexisting preferences people may have, and the premise that nudges primarily affect people when they are in "irrational" modes of thinking. Empirical insights into how these factors affect the extent to which people are susceptible to nudge influence (i.e., "nudgeable") are lacking in the debate. This article introduces the new concept of nudgeability and makes a first attempt to synthesize the evidence on when people are responsive to nudges. We find that nudge effects do not hinge on transparency or modes of thinking but that personal preferences moderate effects such that people cannot be nudged into something they do not want. We conclude that, in view of these findings, concerns about nudging legitimacy should be softened and that future research should attend to these and other conditions of nudgeability.

From the General Discussion

Finally, returning to the debates on nudging legitimacy that we addressed at the beginning of this article, it seems that concerns that nudges impose choice without respecting basic ethical requirements for good public policy should be softened. More than a decade ago, philosopher Luc Bovens (2009) formulated the following four principles for nudging to be legitimate: A nudge should allow people to act in line with their overall preferences; a nudge should not induce a change in preferences that would not hold under nonnudge conditions; a nudge should not lead to “infantilization,” such that people are no longer capable of making autonomous decisions; and a nudge should be transparent so that people have control over being in a nudge situation. With the findings from our review in mind, it seems that these legitimacy requirements are fulfilled. Nudges do allow people to act in line with their overall preferences, nudges allow for making autonomous decisions insofar as nudge effects do not depend on being in a System 1 mode of thinking, and making the nudge transparent does not compromise nudge effects.

Monday, August 30, 2021

Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?

Lara, F. 
Sci Eng Ethics 27, 42 (2021). 
https://doi.org/10.1007/s11948-021-00318-5

Abstract

Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.

From the Conclusion

The key in moral education is that it be pursued while respecting and promoting personal autonomy. Educators should avoid the mistake of limiting the capacities of individuals to freely and reflectively determine their own values by attempting to enhance their behaviour directly. On the contrary, they must do what they can to ensure that those being educated, at least at an advanced age, actively participate in this process in order to assume the values that will define them and give meaning to their lives. The problem with current proposals for moral enhancement through new technologies is that they treat the subject of their interventions as a "passive recipient". Moral bioenhancement does so because it aims to change the motivation of the individual by bypassing the reflection and gradual assimilation of values that should accompany any adoption of new identity traits. This constitutes a passivity that would also occur in proposals for moral AI enhancement based on ethical machines that either replace humans in decision-making, or surreptitiously direct them to do the right thing, or simply advise them based on their own supposedly undisputed values.

Sunday, August 22, 2021

America’s long history of anti-science has dangerously undermined the COVID vaccine

Peter Hotez
The Dallas Morning News
Originally published 15 Aug 21

Here is an excerpt:

America’s full-throated enthusiasm for vaccines lasted until the early 2000s. The 1998 Lancet publication of a paper from Andrew Wakefield and his colleagues, which wrongly asserted that the measles virus in the MMR vaccine replicated in the colons of children to cause pervasive developmental disorder (autism), ushered in a new era of distrust for vaccines.

It also resulted in distrust of the U.S. Health and Human Services agencies promoting vaccinations. The early response from the Centers for Disease Control and Prevention was to dismiss growing American discontent with vaccines as a fringe element, until eventually, in the 2010s, anti-vaccine sentiment spread across the internet.

The anti-vaccine movement eventually adopted the banner of medical freedom, using it to gain strength and grow in size, internet presence, and external funding. Rising out of the American West, anti-vaccine proponents insisted that only parents could make vaccine choices, and they were prepared to resist government requirements for school entry or attendance.

In California, the notion of vaccine choice gained strength in the 2010s, leading to widespread philosophical exemptions to childhood MMR vaccines and other immunizations. Vaccine exemptions reached critical mass, ultimately culminating in a 2014–2015 measles epidemic in Orange County.

The outbreak prompted state government intervention through the introduction of California Senate Bill 277 that eliminated these exemptions and prevented further epidemics, but it also triggered aggressive opposition. Anti-vaccine health freedom groups harassed members of the Legislature and labeled prominent scientists as pharma shills. They touted pseudoscience, claiming that vaccines were toxic, or that natural immunity acquired from the illness was superior and more durable than vaccine-induced immunity.

Health freedom then expanded through newly established anti-vaccine political action committees in Texas and Oklahoma in the Southwest, Oregon in the Pacific Northwest, and Michigan and Ohio in the Midwest, while additional anti-vaccine organizations formed in almost every state.

These groups lobbied state legislatures to promote or protect vaccine exemptions, while working to cloak or obscure classroom or schoolwide disclosures of vaccine exemptions. They also introduced menacing consent forms to portray vaccines as harmful or toxic.

The Texans for Vaccine Choice PAC formed in 2015, helping to accelerate personal belief immunization exemptions to a point where today approximately 72,000 Texas schoolchildren miss vaccines required for school entry and attendance.

Sunday, July 25, 2021

Should we be concerned that the decisions of AIs are inscrutable?

John Zerilli
Psyche.co
Originally published 14 June 21

Here is an excerpt:

However, there’s a danger of carrying reliabilist thinking too far. Compare a simple digital calculator with an instrument designed to assess the risk that someone convicted of a crime will fall back into criminal behaviour (‘recidivism risk’ tools are being used all over the United States right now to help officials determine bail, sentencing and parole outcomes). The calculator’s outputs are so dependable that an explanation of them seems superfluous – even for the first-time homebuyer whose mortgage repayments are determined by it. One might take issue with other aspects of the process – the fairness of the loan terms, the intrusiveness of the credit rating agency – but you wouldn’t ordinarily question the engineering of the calculator itself.

That’s utterly unlike the recidivism risk tool. When it labels a prisoner as ‘high risk’, neither the prisoner nor the parole board can be truly satisfied until they have some grasp of the factors that led to it, and the relative weights of each factor. Why? Because the assessment is such that any answer will necessarily be imprecise. It involves the calculation of probabilities on the basis of limited and potentially poor-quality information whose very selection is value-laden.

But what if systems such as the recidivism tool were in fact more like the calculator? For argument’s sake, imagine a recidivism risk-assessment tool that was basically infallible, a kind of Casio-cum-Oracle-of-Delphi. Would we still expect it to ‘show its working’?

This requires us to think more deeply about what it means for an automated decision system to be ‘reliable’. It’s natural to think that such a system would make the ‘right’ recommendations, most of the time. But what if there were no such thing as a right recommendation? What if all we could hope for were only a right way of arriving at a recommendation – a right way of approaching a given set of circumstances? This is a familiar situation in law, politics and ethics. Here, competing values and ethical frameworks often produce very different conclusions about the proper course of action. There are rarely unambiguously correct outcomes; instead, there are only right ways of justifying them. This makes talk of ‘reliability’ suspect. For many of the most morally consequential and controversial applications of ML, to know that an automated system works properly just is to know and be satisfied with its reasons for deciding.

Saturday, July 24, 2021

Freezing Eggs and Creating Patients: Moral Risks of Commercialized Fertility

E. Reis & S. Reis-Dennis
The Hastings Center Report
Originally published 24 Nov 17

Abstract

There's no doubt that reproductive technologies can transform lives for the better. Infertile couples and single, lesbian, gay, intersex, and transgender people have the potential to form families in ways that would have been inconceivable years ago. Yet we are concerned about the widespread commercialization of certain egg-freezing programs, the messages they propagate about motherhood, the way they blur the line between care and experimentation, and the manipulative and exaggerated marketing that stretches the truth and inspires false hope in women of various ages. We argue that although reproductive technology, and egg freezing in particular, promise to improve women's care by offering more choices to achieve pregnancy and childbearing, they actually have the potential to be disempowering. First, commercial motives in the fertility industry distort women's medical deliberations, thereby restricting their autonomy; second, having the option to freeze their eggs can change the meaning of women's reproductive choices in a way that is limiting rather than liberating.

Here is an excerpt:

Egg banks are offering presumably fertile women a solution for potential infertility that they may never face. These women might pay annual egg-freezing storage rates but never use their eggs. In fact, even if a woman who froze eggs in her early twenties waited until her late thirties to use them, there can be no guarantee that those eggs would produce a viable pregnancy. James A. Grifo, program director of NYU Langone Health Fertility Center, has speculated, “[T]here have been reports of embryos that have been frozen for over 15 years making babies, and we think the same thing is going to be true of eggs.” But the truth is that the technology is so new that neither he nor we know how frozen eggs will hold up over a long period of time.

Some women in their twenties might want to hedge their bets against future infertility by freezing their eggs as part of an egg-sharing program; others might hope to learn from a simple home test of hormone levels whether their egg supply (ovarian reserve) is low—a relatively rare condition. However, these tests are not foolproof. The American Society for Reproductive Medicine (ASRM) has cautioned against home tests of ovarian reserve for women in their twenties because they may lead to “false reassurance or unnecessary anxiety and concern.” This kind of medicalization of fertility may not be liberating; instead, it will exert undue pressure on women and encourage them to rely on egg freezing over other reproductive options when it is far from guaranteed that those frozen eggs (particularly if the women have the condition known as premature ovarian aging) will ultimately lead to successful pregnancies and births.

Friday, June 25, 2021

Rugged American Individualism is a Myth, and It’s Killing Us

Katherine Wasson
Hastings Center
Originally published 4 June 21

The starkest picture of rugged American individualism is the one we learned in school. A family moves West to settle the land and struggles with the elements. Yet, even in these depictions, settlers needed help to raise a barn or harvest crops. They drew on the help of others and reciprocated in return. In the 21st century, few Americans live anything close to this largely self-sustaining lifestyle. Yet the myth of rugged individualism is strong and persistent.

The reality for all of us is that none survive or flourish without the help of others. Whether it is within a family, peer group, school, religious institution, or wider community, all of us have been helped by others. Someone somewhere encouraged us, gave us a break or an opportunity, however small.  Some have experienced random acts of kindness from strangers. The myth of rugged individualism, which often means “pulling yourself up by your own bootstraps,” is outdated, was never completely accurate, and is harming us.

Holding tightly to this myth leads to the misperception that an individual can do (or not do) whatever they want in society and that no person or, perhaps especially, government entity can tell them otherwise. People say, “As long as my choice doesn’t harm anyone else, I should be able to do what I want.” How they know their action does not harm anyone else is unclear, and there are examples from the pandemic where personal choice does harm others. In bioethics we recognize this view as an expression of individual autonomy: the freedom to govern oneself. Yet such blinkered views of individual autonomy are misguided and inaccurate. Everyone’s autonomy is limited in society to avoid harm to the self or others. We enforce seatbelt and drunk driving laws to these ends. Moreover, that we rely on others to function in society has been made very clear during the pandemic. We need others to provide food and education, collect our garbage, and conduct the scientific research that informs our knowledge of the virus. These contributions support the common good.

We have seen rugged individualism on full display during the coronavirus pandemic. It can lead to a disregard for the worth and value of others. While many people observed public health restrictions and guidelines, others, including some elected officials, refused to wear masks and are now refusing vaccination. Those who cling to their individualism seem to view such restrictions as unnecessary or unacceptable, an infringement on their individual rights and freedoms. They are not willing to sacrifice a degree of their freedom to protect themselves or others. The result has been 33,264,650 cases and 594,568 deaths in the United States and counting. 

Thursday, June 24, 2021

Updated Physician-Aid-in-Dying Law Sparks Controversy in Canada

Richard Karel
Psychiatric News
Originally posted 27 May 21

Here is an excerpt:

Addressing the changes for people who may be weighing MAID for severe mental illness, the government stated the following:

“If you have a mental illness as your only medical condition, you are not eligible to seek medical assistance in dying. … This temporary exclusion allows the Government of Canada more time to consider how MAID can safely be provided to those whose only medical condition is mental illness.

“To support this work, the government will initiate an expert review to consider protocols, guidance, and safeguards for those with a mental illness seeking MAID and will make recommendations within a year (by March 17, 2022).

“After March 17, 2023, people with a mental illness as their sole underlying medical condition will have access to MAID if they are eligible and the practitioners fulfill the safeguards that are put in place for this group of people. …”

While many physicians and others have long been sympathetic to allowing medical professionals to help those with terminal illness die peacefully, the fear has been that medically assisted death could become a substitute for adequate—and more costly—medical care. Those concerns are growing with the expansion of MAID in Canada.

Sunday, May 16, 2021

Death as Something We Make

Mara Buchbinder
sapiens.org
Originally published 8 April 2021

Here are two excerpts:

While I learned a lot about what drives people to MAID (medical aid-in-dying), I was particularly fascinated by what MAID does to death. The option transforms death from an object of dread to an anticipated occasion that may be painstakingly planned, staged, and produced. The theatrical imagery is intentional: An assisted death is an event that one scripts, a matter of careful timing, with a well-designed set and the right supporting cast. Through this process, death becomes not just something that happens but also something that is made.

(cut)

MAID renders not only the time of death but also the broader landscape of death open to human control. MAID allows terminally ill patients to choreograph their own deaths, deciding not only when but where and how and with whom. Part of the appeal is that one must go on living right up until the moment of death. It takes work to engage in all the planning; it keeps one vibrant and busy. There are people to call, papers to file, and scenes to set. Making death turns dying into an active extension of life.

Staging death in this way also allows the dying person to sidestep the messiness of death—the bodily fluids and decay—what the sociologist Julia Lawton has called the “dirtiness” of death. MAID makes it possible to attempt a calm, orderly, sanitized death. Some deliberately empty their bladder or bowels in advance, or plan to wear diapers. A “good death,” from this perspective, has not only an ethical but also an aesthetic quality.

Of course, this sort of staging is not without controversy. For some, it represents unwelcome interference with God’s plans. For people like Renee, however, it infuses one’s death with personal meaning and control.

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

Véliz, C. 
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.

Saturday, September 12, 2020

Psychotherapy, placebos, and informed consent

Leder G
Journal of Medical Ethics 
Published Online First: 20 August 2020.
doi: 10.1136/medethics-2020-106453

Abstract

Several authors have recently argued that psychotherapy, as it is commonly practiced, is deceptive and undermines patients’ ability to give informed consent to treatment. This ‘deception’ claim is based on the findings that some, and possibly most, of the ameliorative effects in psychotherapeutic interventions are mediated by therapeutic common factors shared by successful treatments (eg, expectancy effects and therapist effects), rather than because of theory-specific techniques. These findings have led to claims that psychotherapy is, at least partly, likely a placebo, and that practitioners of psychotherapy have a duty to ‘go open’ to patients about the role of common factors in therapy (even if this risks negatively affecting the efficacy of treatment); to not ‘go open’ is supposed to unjustly restrict patients’ autonomy. This paper makes two related arguments against the ‘go open’ claim. (1) While therapies ought to provide patients with sufficient information to make informed treatment decisions, informed consent does not require that practitioners ‘go open’ about therapeutic common factors in psychotherapy, and (2) clarity about the mechanisms of change in psychotherapy shows us that the common-factors findings are consistent with, rather than undermining of, the truth of many theory-specific forms of psychotherapy; psychotherapy, as it is commonly practiced, is not deceptive and is not a placebo. The call to ‘go open’ should be resisted and may have serious detrimental effects on patients via the dissemination of a false view about how therapy works.

Conclusion

The ‘go open’ argument is based on a mistaken view about the mechanisms of change in psychotherapy and threatens to harm patients by undermining their ability to make informed treatment decisions. This paper has argued that the prima facie ethical problem raised by the ‘go open’ argument is diffused if we clear up a conceptual confusion about what, exactly, we should be going open about. Therapists should be open with patients about the differing theories of the mechanisms of change in psychotherapy; this can, but need not, involve discussing information about the therapeutic common factors.

The article is here.

Note from Dr. Gavazzi: Using "deception" is the wrong frame for this issue.  How complete is your informed consent?  Can we ever give "perfect" informed consent?  The answer is likely no.

Tuesday, July 7, 2020

Can COVID-19 re-invigorate ethics?

Louise Campbell
BMJ Blogs
Originally posted 26 May 20

The COVID-19 pandemic has catapulted ethics into the spotlight.  Questions previously deliberated about by small numbers of people interested in or affected by particular issues are now being posed with an unprecedented urgency right across the public domain.  One of the interesting facets of this development is the way in which the questions we are asking now draw attention, not just to the importance of ethics in public life, but to the very nature of ethics as practice, namely ethics as it is applied to specific societal and environmental concerns.

Some of these questions which have captured the public imagination were originally debated specifically within healthcare circles and at the level of health policy: what measures must be taken to prevent hospitals from becoming overwhelmed if there is a surge in the number of people requiring hospitalisation?  How will critical care resources such as ventilators be prioritised if need outstrips supply?  In a crisis situation, will older people or people with disabilities have the same opportunities to access scarce resources, even though they may have less chance of survival than people without age-related conditions or disabilities?  What level of risk should healthcare workers be expected to assume when treating patients in situations in which personal protective equipment may be inadequate or unavailable?   Have the rights of patients with chronic conditions been traded off against the need to prepare the health service to meet a demand which to date has not arisen?  Will the response to COVID-19 based on current evidence compromise the capacity of the health system to provide routine outpatient and non-emergency care to patients in the near future?

Other questions relate more broadly to the intersection between health and society: how do we calculate the harms of compelling entire populations to isolate themselves from loved ones and from their communities?  How do we balance these harms against the risks of giving people more autonomy to act responsibly?  What consideration is given to the fact that, in an unequal society, restrictions on liberty will affect certain social groups in disproportionate ways?  What does the catastrophic impact of COVID-19 on residents of nursing homes say about our priorities as a society and to what extent is their plight our collective responsibility?  What steps have been taken to protect marginalised communities who are at greater risk from an outbreak of infectious disease: for example, people who have no choice but to coexist in close proximity with one another in direct provision centres, in prison settings and on halting sites?

The info is here.

Monday, July 6, 2020

HR researchers discovered the real reason why stressful jobs are killing us

Arianne Cohen
fastcompany.com
Originally posted 20 May 20

Your job really might kill you: A new study directly correlates on-the-job stress with death.

Researchers at Indiana University’s Kelley School of Business followed 3,148 Wisconsinites for 20 years and found that heavy workloads and a lack of autonomy correlate strongly with poor mental health and the big D: death. The study is titled “This Job Is (Literally) Killing Me.”

“When job demands are greater than the control afforded by the job or an individual’s ability to deal with those demands, there is a deterioration of their mental health and, accordingly, an increased likelihood of death,” says lead author Erik Gonzalez-Mulé, assistant professor of organizational behavior and human resources. “We found that work stressors are more likely to cause depression and death as a result of jobs in which workers have little control.”

The reverse was also true: Jobs can fuel good health, particularly jobs that provide workers autonomy.

The info is here.

Tuesday, May 12, 2020

Freedom in an Age of Algocracy

John Danaher
forthcoming in Oxford Handbook on the Philosophy of Technology
edited by Shannon Vallor

Abstract

There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology.

From the Conclusion:

Finally, I’ve outlined a framework for thinking about the likely impact of algocracy on freedom. Given the complexity of freedom and the complexity of algocracy, I’ve argued that there is unlikely to be a simple global assessment of the freedom-promoting or undermining power of algocracy. This is something that has to be assessed and determined on a case-by-case basis. Nevertheless, there are at least five interesting and relatively novel mechanisms through which algocratic systems can both promote and undermine freedom. We should pay attention to these different mechanisms, but do so in a properly contextualized manner, and not by ignoring the pre-existing mechanisms through which freedom is undermined and promoted.

The book chapter is here.