Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, October 24, 2020

An ethical framework for global vaccine allocation

Emanuel, E. et al.
Science, 11 Sep 2020, Vol. 369, Issue 6509, pp. 1309-1312
DOI: 10.1126/science.abe2803

Once effective coronavirus disease 2019 (COVID-19) vaccines are developed, they will be scarce. This presents the question of how to distribute them fairly across countries. Vaccine allocation among countries raises complex and controversial issues involving public opinion, diplomacy, economics, public health, and other considerations. Nevertheless, many national leaders, international organizations, and vaccine producers recognize that one central factor in this decision-making is ethics. Yet little progress has been made toward delineating what constitutes fair international distribution of vaccine. Many have endorsed “equitable distribution of COVID-19…vaccine” without describing a framework or recommendations. Two substantive proposals for the international allocation of a COVID-19 vaccine have been advanced, but are seriously flawed. We offer a more ethically defensible and practical proposal for the fair distribution of COVID-19 vaccine: the Fair Priority Model.

The Fair Priority Model is primarily addressed to three groups. One is the COVAX facility—led by Gavi, the World Health Organization (WHO), and the Coalition for Epidemic Preparedness Innovations (CEPI)—which intends to purchase vaccines for fair distribution across countries. A second group is vaccine producers. Thankfully, many producers have publicly committed to a “broad and equitable” international distribution of vaccine. The last group is national governments, some of whom have also publicly committed to a fair distribution.

These groups need a clear framework for reconciling competing values, one that they and others will rightly accept as ethical and not just as an assertion of power. The Fair Priority Model specifies what a fair distribution of vaccines entails, giving content to their commitments. Moreover, acceptance of this common ethical framework will reduce duplication and waste, easing efforts at a fair distribution. That, in turn, will enhance producers' confidence that vaccines will be fairly allocated to benefit people, thereby motivating an increase in vaccine supply for international distribution.

Friday, October 23, 2020

Ethical Dimensions of Using Artificial Intelligence in Health Care

Michael J. Rigby
AMA Journal of Ethics
February 2019

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities.

Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists. In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric disease. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.

Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy. However, current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field. While some efforts to engage in these ethical conversations have emerged, the medical community remains ill informed of the ethical complexities that budding AI technology can introduce. Accordingly, a rich discussion awaits that would greatly benefit from physician input, as physicians will likely be interfacing with AI in their daily practice in the near future.

Thursday, October 22, 2020

America Is Being Pulled Apart. Here's How We Can Start to Heal Our Nation

David French
Time Magazine
Originally posted 10 Sept 20

Here is an excerpt:

I’ve been writing and speaking about national polarization and division since before the Trump election. Two years ago, I began writing a book describing our challenge, outlining how we could divide and how we can heal. The prescription isn’t easy. We have to flip the script on the present political narrative. We have to prioritize accommodation.

That means revitalizing the Bill of Rights. America’s worst sins have always included denying fundamental constitutional rights to America’s most vulnerable citizens, those without electoral power. While progress has been made, doctrines like qualified immunity leave countless citizens without recourse when they face state abuse. It alienates citizens from the state and drains confidence in the American republic.

That means diminishing presidential power. A principal reason presidential politics is so toxic is that the diminishing power of states and Congress means that every four years we elect the most powerful peacetime ruler in the history of the U.S. No one person should have so much authority over an increasingly diverse and divided nation.

The increasing stakes of each presidential election increase political tension and heighten public anxiety. Americans should not see their individual liberty or the autonomy of their churches and communities as so dependent on the identity of the President.

But beyond the political changes–more local control, less centralization–Americans need a change of heart. Defending the Bill of Rights requires commitment and effort, and it requires citizens to think of others beyond their partisan tribe. Defending the Bill of Rights means that you must fight for others to have the rights that you would like to exercise yourself. The goal is simple yet elusive. Every American–regardless of race, ethnicity, sex, religion or sexual orientation–can and should have a home in this land.


Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 20

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private, and that it is possible to share only select information. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to adapt systems that are already used with financial data: open-source files and blockchain technology so that we always know where the data came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.
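To make the engineering approach concrete, here is a minimal, hypothetical sketch of two of the ideas in the paragraph above: a hash-chained access log (so the provenance of every data access can be audited, in the spirit of blockchain) gated by an explicit, revocable consent check. The `NeurodataLedger` class, its method names, and the consent model are invented for illustration; this is not a real neurodata API nor the authors' actual proposal.

```python
import hashlib
import json

class NeurodataLedger:
    """Illustrative sketch: consent-gated access to sensitive data,
    with every access appended to a tamper-evident hash chain."""

    def __init__(self):
        self.entries = []       # each entry stores the hash of the previous one
        self.consents = set()   # (subject_id, purpose) pairs currently authorized

    def grant_consent(self, subject_id, purpose):
        self.consents.add((subject_id, purpose))

    def revoke_consent(self, subject_id, purpose):
        self.consents.discard((subject_id, purpose))

    def access(self, subject_id, purpose, requester):
        """Release data only under active consent; log the access immutably."""
        if (subject_id, purpose) not in self.consents:
            raise PermissionError("no active consent for this purpose")
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"subject": subject_id, "purpose": purpose,
                  "requester": requester, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record

    def verify_chain(self):
        """Recompute every hash; any edited or reordered entry breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

The design choice mirrors the article's point: the consent check enforces authorization "at that specific time" (revoking consent blocks future accesses), while the hash chain means we always know where each access record came from and whether the log has been altered.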

Tuesday, October 20, 2020

What do you believe? Atheism and Religion

Kristen Weir
Monitor on Psychology
Vol. 51, No. 5, p. 52

Here is an excerpt:

Good health isn’t the only positive outcome attributed to religion. Research also suggests that religious belief is linked to prosocial behaviors such as volunteering and donating to charity.

But as with health benefits, Galen’s work suggests such prosocial benefits have more to do with general group membership than with religious belief or belonging to a specific religious group (Social Indicators Research, Vol. 122, No. 2, 2015). In fact, he says, while religious people are more likely to volunteer or give to charitable causes related to their beliefs, atheists appear to be more generous to a wider range of causes and dissimilar groups.

Nevertheless, atheists and other nonbelievers still face considerable stigma, and are often perceived as less moral than their religious counterparts. In a study across 13 countries, Gervais and colleagues found that people in most countries intuitively believed that extreme moral violations (such as murder and mutilation) were more likely to be committed by atheists than by religious believers. This anti-atheist prejudice also held true among people who identified as atheists, suggesting that religious culture exerts a powerful influence on moral judgments, even among nonbelievers (Nature Human Behaviour, Vol. 1, Article 0151, 2017).

Yet nonreligious people are similar to religious people in a number of ways. In the Understanding Unbelief project, Farias and colleagues found that across all six countries they studied, both believers and nonbelievers cited family and freedom as the most important values in their own lives and in the world more broadly. The team also found evidence to counter a common assumption that atheists believe life has no purpose. They found the belief that the universe is “ultimately meaningless” was a minority view among nonbelievers in each country.

“People assume that [nonbelievers] have very different sets of values and ideas about the world, but it looks like they probably don’t,” Farias says.

For the nonreligious, however, meaning may be more likely to come from within than from above. Again drawing on data from the General Social Survey, Speed and colleagues found that in the United States, atheists and the religiously unaffiliated were no more likely to believe that life is meaningless than were people who were religious or raised with a religious affiliation. 

Monday, October 19, 2020

Model-based decision making and model-free learning

Drummond, N. & Niv, Y.
Current Biology
Volume 30, Issue 15, 3 August 2020, Pages R860-R865

Summary

Free will is anything but free. With it comes the onus of choice: not only what to do, but which inner voice to listen to — our ‘automatic’ response system, which some consider ‘impulsive’ or ‘irrational’, or our supposedly more rational deliberative one. Rather than a devil and angel sitting on our shoulders, research suggests that we have two decision-making systems residing in the brain, in our basal ganglia. Neither system is the devil and neither is irrational. They both have our best interests at heart and aim to suggest the best course of action calculated through rational algorithms. However, the algorithms they use are qualitatively different and do not always agree on which action is optimal. The rivalry between habitual, fast action and deliberative, purposeful action is an ongoing one.
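In reinforcement-learning terms, the two systems described above are usually called model-free (habitual: cache values from repeated experience) and model-based (deliberative: plan by simulating an internal model of the world). The contrast can be sketched in a few lines of Python; the three-state world, its rewards, and the learning rate below are invented for illustration and do not come from the paper.

```python
# A tiny deterministic world: state 0 -> 1 -> 2, with reward 1.0 for entering state 2.
GAMMA = 0.9                  # discount factor
REWARD = {1: 0.0, 2: 1.0}    # reward received on entering a state
MODEL = {0: 1, 1: 2}         # transition model: state -> next state

def model_based_value(state, depth=5):
    """Deliberative system: evaluate a state by simulating the model forward."""
    if depth == 0 or state not in MODEL:
        return 0.0
    nxt = MODEL[state]
    return REWARD[nxt] + GAMMA * model_based_value(nxt, depth - 1)

def model_free_train(episodes=200, alpha=0.5):
    """Habitual system: cache state values from repeated experience (TD learning).
    No transition model is consulted; only observed rewards drive the updates."""
    v = {0: 0.0, 1: 0.0, 2: 0.0}
    for _ in range(episodes):
        s = 0
        while s in MODEL:
            nxt = MODEL[s]
            v[s] += alpha * (REWARD[nxt] + GAMMA * v[nxt] - v[s])  # TD error
            s = nxt
    return v
```

Both systems converge on the same valuations here, which illustrates the summary's point that neither is irrational. The qualitative difference shows up when the world changes: the model-based planner re-evaluates instantly once `MODEL` or `REWARD` is updated, whereas the model-free learner must slowly relearn its cached values from new experience, which is why habits can persist after they stop being optimal.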

Sunday, October 18, 2020

Beliefs have a social purpose. Does this explain delusions?

Anna Greenburgh
psyche.co
Originally published 

Here is an excerpt:

Of course, just because a delusion has logical roots doesn’t mean it’s helpful for the person once it takes hold. Indeed, this is why delusions are an important clinical issue. Delusions are often conceptualised as sitting at the extreme end of a continuum of belief, but how can they be distinguished from other beliefs? If not irrationality, then what demarcates a delusion?

Delusions are fixed, unchanging in the face of contrary evidence, and not shared by the person’s peers. In light of the social function of beliefs, these preconditions have added significance. The coalitional model underlines that beliefs arising from adaptive cognitive processes should show some sensitivity to social context and enable successful social coordination. Delusions lack this social function and adaptability. Clinical psychologists have documented the fixity of delusional beliefs: they are more resistant to change than other types of belief, and are intensely preoccupying, regardless of the social context or interpersonal consequences. In both ‘The Yellow Wallpaper’ and the novel Don Quixote (1605-15) by Miguel de Cervantes, the protagonists’ beliefs about their surroundings are unchangeable and, if anything, become increasingly intense and disruptive. It is this inflexibility to social context, once they take hold, that sets delusions apart from other beliefs.

Across the field of mental health, research showing the importance of the social environment has spurred a great shift in the way that clinicians interact with patients. For example, research exposing the link between trauma and psychosis has resulted in more compassionate, person-centred approaches. The coalitional model of delusions can now contribute to this movement. It opens up promising new avenues of research, which integrate our fundamental social nature and the social function of belief formation. It can also deepen how people experiencing delusions are understood – instead of contributing to stigma by dismissing delusions as irrational, it considers the social conditions that gave rise to such intensely distressing beliefs.

Saturday, October 17, 2020

New Texas rule lets social workers turn away clients who are LGBTQ or have a disability

Edgar Walters
Texas Tribune
Originally posted 14 Oct 2020

Texas social workers are criticizing a state regulatory board’s decision this week to remove protections for LGBTQ clients and clients with disabilities who seek social work services.

The Texas State Board of Social Work Examiners voted unanimously Monday to change a section of its code of conduct that establishes when a social worker may refuse to serve someone. The code will no longer prohibit social workers from turning away clients on the basis of disability, sexual orientation or gender identity.

Gov. Greg Abbott’s office recommended the change, board members said, because the code’s nondiscrimination protections went beyond protections laid out in the state law that governs how and when the state may discipline social workers.

“It’s not surprising that a board would align its rules with statutes passed by the Legislature,” said Abbott spokesperson Renae Eze. A state law passed last year gave the governor’s office more control over rules governing state-licensed professions.

The nondiscrimination policy change drew immediate criticism from a professional association. Will Francis, executive director of the Texas chapter of the National Association of Social Workers, called it “incredibly disheartening.”

He also criticized board members for removing the nondiscrimination protections without input from the social workers they license and oversee.


Note: All psychotherapy services are founded on the principle of beneficence: the desire to help others and do right by them.  This decision from the Texas State Board of Social Work Examiners is terrifyingly unethical.  The unanimous decision demonstrates the highest levels of incompetence and bigotry.

Friday, October 16, 2020

When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions

Newman, D., Fast, N. and Harmon, D.
Organizational Behavior and Human Decision Processes
Volume 160, September 2020, Pages 149-167

Abstract

The perceived fairness of decision-making procedures is a key concern for organizations, particularly when evaluating employees and determining personnel outcomes. Algorithms have created opportunities for increasing fairness by overcoming biases commonly displayed by human decision makers. However, while HR algorithms may remove human bias in decision making, we argue that those being evaluated may perceive the process as reductionistic, leading them to think that certain qualitative information or contextualization is not being taken into account. We argue that this can undermine their beliefs about the procedural fairness of using HR algorithms to evaluate performance by promoting the assumption that decisions made by algorithms are based on less accurate information than identical decisions made by humans. Results from four laboratory experiments (N = 798) and a large-scale randomized experiment in an organizational setting (N = 1654) confirm this hypothesis. Theoretical and practical implications for organizations using algorithms and data analytics are discussed.

Highlights

• Algorithmic decisions are perceived as less fair than identical decisions by humans.

• Perceptions of reductionism mediate the adverse effect of algorithms on fairness.

• Algorithmic reductionism comes in two forms: quantification and decontextualization.

• Employees voice lower organizational commitment when evaluated by algorithms.

• Perceptions of unfairness mediate the adverse effect of algorithms on commitment.

Conclusion

Perceived unfairness notwithstanding, algorithms continue to gain increasing influence in human affairs, not only in organizational settings but throughout our social and personal lives. How this influence plays out against our sense of fairness remains to be seen but should undoubtedly be of central interest to justice scholars in the years ahead. Will the compilers of analytics and writers of algorithms adapt their methods to comport with intuitive notions of morality? Or will our understanding of fairness adjust to the changing times, becoming inured to dehumanization in an ever more impersonal world? Questions such as these will be asked more and more frequently as technology reshapes modes of interaction and organization that have held sway for generations. We have sought to contribute answers to these questions, and we hope that our work will encourage others to continue studying these and related topics.