Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, October 26, 2020

Artificial Intelligence and the Limits of Legal Personality

Chesterman, Simon (August 28, 2020).
Forthcoming in 
International & Comparative Law Quarterly
NUS Law Working Paper No. 2020/025

Abstract

As artificial intelligence (AI) systems become more sophisticated and play a larger role in society, arguments that they should have some form of legal personality gain credence. It has been suggested that this will fill an accountability gap created by the speed, autonomy, and opacity of AI. In addition, a growing body of literature considers the possibility of AI systems owning the intellectual property that they create. The arguments are typically framed in instrumental terms, with comparisons to juridical persons such as corporations. Implicit in those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans they should be entitled to a status comparable to natural persons. This article contends that although most legal systems could create a novel category of legal persons, such arguments are insufficient to show that they should.

Sunday, October 25, 2020

The objectivity illusion and voter polarization in the 2016 presidential election

M. C. Schwalbe, G. L. Cohen, L. D. Ross
PNAS Sep 2020, 117 (35) 21218-21229; 

Abstract

Two studies conducted during the 2016 presidential campaign examined the dynamics of the objectivity illusion, the belief that the views of “my side” are objective while the views of the opposing side are the product of bias. In the first, a three-stage longitudinal study spanning the presidential debates, supporters of the two candidates exhibited a large and generally symmetrical tendency to rate supporters of the candidate they personally favored as more influenced by appropriate (i.e., “normative”) considerations, and less influenced by various sources of bias than supporters of the opposing candidate. This study broke new ground by demonstrating that the degree to which partisans displayed the objectivity illusion predicted subsequent bias in their perception of debate performance and polarization in their political attitudes over time, as well as closed-mindedness and antipathy toward political adversaries. These associations, furthermore, remained significant even after controlling for baseline levels of partisanship. A second study conducted 2 d before the election showed similar perceptions of objectivity versus bias in ratings of blog authors favoring the candidate participants personally supported or opposed. These ratings were again associated with polarization and, additionally, with the willingness to characterize supporters of the opposing candidate as evil and likely to commit acts of terrorism. At a time of particular political division and distrust in America, these findings point to the exacerbating role played by the illusion of objectivity.

Significance

Political polarization increasingly threatens democratic institutions. The belief that “my side” sees the world objectively while the “other side” sees it through the lens of its biases contributes to this political polarization and accompanying animus and distrust. This conviction, known as the “objectivity illusion,” was strong and persistent among Trump and Clinton supporters in the weeks before the 2016 presidential election. We show that the objectivity illusion predicts subsequent bias and polarization, including heightened partisanship over the presidential debates. A follow-up study showed that both groups impugned the objectivity of a putative blog author supporting the opposition candidate and saw supporters of that opposing candidate as evil.

Saturday, October 24, 2020

Trump's Strangest Lie: A Plague of Suicides Under His Watch

Gilad Edelman
wired.com
Originally published 23 Oct 2020

In last night’s presidential debate, Donald Trump repeated one of his more unorthodox reelection pitches. “People are losing their jobs,” he said. “They’re committing suicide. There’s depression, alcohol, drugs at a level that nobody’s ever seen before.”

It’s strange to hear an incumbent president declare, as an argument in his own favor, that a wave of suicides is occurring under his watch. It’s even stranger given that it’s not true. While Trump has been warning since March that any pandemic lockdowns would lead to “suicides by the thousands,” several studies from abroad have found that when governments imposed such restrictions in the early waves of the pandemic, there was no corresponding increase in these deaths. In fact, suicide rates may even have declined. A preprint study released earlier this week found that the suicide rate in Massachusetts didn’t budge even as that state imposed a strong stay-at-home order in March, April, and May.

(cut)

Add this to the list of tragic ironies of the Trump era: The president is using the nonexistent link between lockdowns and suicide to justify an agenda that really could cause more people to take their own lives.

An ethical framework for global vaccine allocation

Emanuel, E. et al.
Science, 11 Sep 2020
Vol. 369, Issue 6509, pp. 1309-1312
DOI: 10.1126/science.abe2803

Once effective coronavirus disease 2019 (COVID-19) vaccines are developed, they will be scarce. This presents the question of how to distribute them fairly across countries. Vaccine allocation among countries raises complex and controversial issues involving public opinion, diplomacy, economics, public health, and other considerations. Nevertheless, many national leaders, international organizations, and vaccine producers recognize that one central factor in this decision-making is ethics. Yet little progress has been made toward delineating what constitutes fair international distribution of vaccine. Many have endorsed “equitable distribution of COVID-19…vaccine” without describing a framework or recommendations. Two substantive proposals for the international allocation of a COVID-19 vaccine have been advanced, but are seriously flawed. We offer a more ethically defensible and practical proposal for the fair distribution of COVID-19 vaccine: the Fair Priority Model.

The Fair Priority Model is primarily addressed to three groups. One is the COVAX facility—led by Gavi, the World Health Organization (WHO), and the Coalition for Epidemic Preparedness Innovations (CEPI)—which intends to purchase vaccines for fair distribution across countries. A second group is vaccine producers. Thankfully, many producers have publicly committed to a “broad and equitable” international distribution of vaccine. The last group is national governments, some of which have also publicly committed to a fair distribution.

These groups need a clear framework for reconciling competing values, one that they and others will rightly accept as ethical and not just as an assertion of power. The Fair Priority Model specifies what a fair distribution of vaccines entails, giving content to their commitments. Moreover, acceptance of this common ethical framework will reduce duplication and waste, easing efforts at a fair distribution. That, in turn, will enhance producers' confidence that vaccines will be fairly allocated to benefit people, thereby motivating an increase in vaccine supply for international distribution.

Friday, October 23, 2020

Ethical Dimensions of Using Artificial Intelligence in Health Care

Michael J. Rigby
AMA Journal of Ethics
February 2019

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities.

Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists. In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric disease. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.

Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy. However, current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field. While some efforts to engage in these ethical conversations have emerged, the medical community remains ill informed of the ethical complexities that budding AI technology can introduce. Accordingly, a rich discussion awaits that would greatly benefit from physician input, as physicians will likely be interfacing with AI in their daily practice in the near future.

Thursday, October 22, 2020

America Is Being Pulled Apart. Here's How We Can Start to Heal Our Nation

David French
Time Magazine
Originally posted 10 Sept 2020

Here is an excerpt:

I’ve been writing and speaking about national polarization and division since before the Trump election. Two years ago, I began writing a book describing our challenge, outlining how we could divide and how we can heal. The prescription isn’t easy. We have to flip the script on the present political narrative. We have to prioritize accommodation.

That means revitalizing the Bill of Rights. America’s worst sins have always included denying fundamental constitutional rights to America’s most vulnerable citizens, those without electoral power. While progress has been made, doctrines like qualified immunity leave countless citizens without recourse when they face state abuse. It alienates citizens from the state and drains confidence in the American republic.

That means diminishing presidential power. A principal reason presidential politics is so toxic is that the diminishing power of states and Congress means that every four years we elect the most powerful peacetime ruler in the history of the U.S. No one person should have so much authority over an increasingly diverse and divided nation.

The increasing stakes of each presidential election increase political tension and heighten public anxiety. Americans should not see their individual liberty or the autonomy of their churches and communities as so dependent on the identity of the President.

But beyond the political changes–more local control, less centralization–Americans need a change of heart. Defending the Bill of Rights requires commitment and effort, and it requires citizens to think of others beyond their partisan tribe. Defending the Bill of Rights means that you must fight for others to have the rights that you would like to exercise yourself. The goal is simple yet elusive. Every American–regardless of race, ethnicity, sex, religion or sexual orientation–can and should have a home in this land.


Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 2020

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private, and so that it is possible to share only select information. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to adopt systems that are already used with financial data: open-source files and blockchain technology, so that we always know where the data came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.
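The core mechanism behind the blockchain-style provenance idea the author mentions is a chained hash: each record commits to the one before it, so any tampering with past entries is detectable. Here is a minimal, illustrative sketch (a toy, not any real neurodata system's API; the `ProvenanceLog` name and record fields are invented for this example):

```python
import hashlib
import json

def _digest(record: dict) -> str:
    # Canonical JSON serialization so the same record always hashes the same way.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    """Append-only hash chain: each entry commits to the previous entry's hash,
    so altering any past entry invalidates every hash that follows it."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"prev": prev, "payload": payload}
        entry = {**record, "hash": _digest(record)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Walk the chain, recomputing each hash and checking the links.
        prev = "0" * 64
        for e in self.entries:
            expected = _digest({"prev": e["prev"], "payload": e["payload"]})
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

In this sketch, a clinic might append `{"actor": "clinic", "purpose": "medical", "consent": True}` for each access; `verify()` then returns `False` if any past entry has been edited, which is the property that makes such logs attractive for tracking where sensitive data came from.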

Tuesday, October 20, 2020

What do you believe? Atheism and Religion

Kristen Weir
Monitor on Psychology
Vol. 51, No. 5, p. 52

Here is an excerpt:

Good health isn’t the only positive outcome attributed to religion. Research also suggests that religious belief is linked to prosocial behaviors such as volunteering and donating to charity.

But as with health benefits, Galen’s work suggests such prosocial benefits have more to do with general group membership than with religious belief or belonging to a specific religious group (Social Indicators Research, Vol. 122, No. 2, 2015). In fact, he says, while religious people are more likely to volunteer or give to charitable causes related to their beliefs, atheists appear to be more generous to a wider range of causes and dissimilar groups.

Nevertheless, atheists and other nonbelievers still face considerable stigma, and are often perceived as less moral than their religious counterparts. In a study across 13 countries, Gervais and colleagues found that people in most countries intuitively believed that extreme moral violations (such as murder and mutilation) were more likely to be committed by atheists than by religious believers. This anti-atheist prejudice also held true among people who identified as atheists, suggesting that religious culture exerts a powerful influence on moral judgments, even among nonbelievers (Nature Human Behaviour, Vol. 1, Article 0151, 2017).

Yet nonreligious people are similar to religious people in a number of ways. In the Understanding Unbelief project, Farias and colleagues found that across all six countries they studied, both believers and nonbelievers cited family and freedom as the most important values in their own lives and in the world more broadly. The team also found evidence to counter a common assumption that atheists believe life has no purpose. They found the belief that the universe is “ultimately meaningless” was a minority view among nonbelievers in each country.

“People assume that [nonbelievers] have very different sets of values and ideas about the world, but it looks like they probably don’t,” Farias says.

For the nonreligious, however, meaning may be more likely to come from within than from above. Again drawing on data from the General Social Survey, Speed and colleagues found that in the United States, atheists and the religiously unaffiliated were no more likely to believe that life is meaningless than were people who were religious or raised with a religious affiliation. 

Monday, October 19, 2020

Model-based decision making and model-free learning

Drummond, N. & Niv, Y.
Current Biology
Volume 30, Issue 15, 3 August 2020, 
Pages R860-R865

Summary

Free will is anything but free. With it comes the onus of choice: not only what to do, but which inner voice to listen to — our ‘automatic’ response system, which some consider ‘impulsive’ or ‘irrational’, or our supposedly more rational deliberative one. Rather than a devil and angel sitting on our shoulders, research suggests that we have two decision-making systems residing in the brain, in our basal ganglia. Neither system is the devil and neither is irrational. They both have our best interests at heart and aim to suggest the best course of action calculated through rational algorithms. However, the algorithms they use are qualitatively different and do not always agree on which action is optimal. The rivalry between habitual, fast action and deliberative, purposeful action is an ongoing one.
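The two algorithm families the summary contrasts can be made concrete with a small sketch (illustrative only, not from the article): a model-free agent caches action values and updates them incrementally from experience, while a model-based agent plans by consulting a model of the world at decision time. The tiny one-step world and all names below are invented for this example.

```python
# Toy deterministic world: from state 0, action 0 yields reward 1.0,
# action 1 yields reward 0.0. The model-based agent knows this model;
# the model-free agent must learn values from repeated experience.
TRANSITIONS = {(0, 0): 1, (0, 1): 2}
REWARDS = {(0, 0): 1.0, (0, 1): 0.0}

def model_free_update(Q, s, a, r, alpha=0.5):
    """Model-free: nudge a cached action value toward the observed reward."""
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r - Q.get((s, a), 0.0))
    return Q

def model_based_choice(s):
    """Model-based: choose by looking up the known model at decision time."""
    actions = [a for (st, a) in TRANSITIONS if st == s]
    return max(actions, key=lambda a: REWARDS[(s, a)])

# The model-free agent needs repeated experience before its cached
# values favor action 0...
Q = {}
for _ in range(10):
    for a in (0, 1):
        Q = model_free_update(Q, 0, a, REWARDS[(0, a)])

# ...while the model-based agent picks the optimal action immediately,
# because it computes values from the model rather than from cached experience.
```

The qualitative difference the article describes shows up even here: the deliberative, model-based computation is flexible but costly (it must evaluate the model each time), while the habitual, model-free cache is cheap to consult but slow to update when the world changes.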