Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, October 18, 2017

When Doing Some Good Is Evaluated as Worse Than Doing No Good at All

George E. Newman and Daylian M. Cain
Psychological Science published online 8 January 2014

Abstract

In four experiments, we found that the presence of self-interest in the charitable domain was seen as tainting: People evaluated efforts that realized both charitable and personal benefits as worse than analogous behaviors that produced no charitable benefit. This tainted-altruism effect was observed in a variety of contexts and extended to both moral evaluations of other agents and participants’ own behavioral intentions (e.g., reported willingness to hire someone or purchase a company’s products). This effect did not seem to be driven by expectations that profits would be realized at the direct cost of charitable benefits, or the explicit use of charity as a means to an end. Rather, we found that it was related to the accessibility of different counterfactuals: When someone was charitable for self-interested reasons, people considered his or her behavior in the absence of self-interest, ultimately concluding that the person did not behave as altruistically as he or she could have. However, when someone was only selfish, people did not spontaneously consider whether the person could have been more altruistic.

The article is here.

Danny Kahneman on AI versus Humans


NBER Economics of AI Workshop 2017

Here is a rough transcription of an excerpt:

One point made yesterday was the uniqueness of humans when it comes to evaluations. It was called “judgment”. In my terms, it’s “evaluation of outcomes”: the utility side of the decision function. I really don’t see why that should be reserved for humans.

I’d like to make the following argument:
  1. The main characteristic of people is that they’re very “noisy”.
  2. You show them the same stimulus twice, and they don’t give you the same response twice.
  3. You show them the same choice twice: that’s why we have stochastic choice theory, because there is so much variability in people’s choices given the same stimuli.
  4. Even without AI, a program that observes an individual can be better than the individual and make better choices for the individual, because it will be noise-free.
  5. We know an interesting tidbit from the literature on prediction that Colin cited:
  6. If you take clinicians and have them predict some criterion a large number of times, and you then develop a simple equation that predicts not the outcome but the clinician’s judgment, that model does better at predicting the outcome than the clinician does.
  7. That is fundamental.
This is telling you that one of the major limitations on human performance is not bias; it is just noise.
I’m maybe partly responsible for this, but when people talk about error now, they tend to think of bias as the explanation: the first thing that comes to mind. Well, there is bias. And it is an error. But in fact most of the errors that people make are better viewed as random noise. And there’s an awful lot of it.
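The result Kahneman describes in point 6 can be sketched as a small simulation. The following Python snippet is a minimal illustration, not a replication of any real study: all cue weights and noise levels are invented. A "judge" rates cases using roughly the right cues plus random noise; a simple linear model is fit to the judge's own ratings (not the outcome), and the noise-free model ends up predicting the outcome better than the judge does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 cases, each described by three cues (hypothetical predictors).
n = 500
cues = rng.normal(size=(n, 3))

# The criterion (the outcome to be predicted) depends on the cues plus chance.
true_weights = np.array([0.6, 0.3, 0.1])
criterion = cues @ true_weights + rng.normal(scale=0.5, size=n)

# The judge uses roughly the right weights but adds trial-to-trial noise.
judge_weights = np.array([0.5, 0.35, 0.15])
judge = cues @ judge_weights + rng.normal(scale=1.0, size=n)

# Fit a simple linear model of the JUDGE (not of the outcome) by least squares.
w, *_ = np.linalg.lstsq(cues, judge, rcond=None)
model_of_judge = cues @ w  # the judge's policy, with the noise averaged out

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_judge = corr(judge, criterion)           # the judge's own accuracy
r_model = corr(model_of_judge, criterion)  # the model-of-the-judge's accuracy
print(f"judge r = {r_judge:.2f}, model-of-judge r = {r_model:.2f}")
```

Because the fitted model keeps the judge's cue weights but drops the random noise, its correlation with the criterion is reliably higher, which is exactly the point: the limitation being removed is noise, not bias.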

The entire transcript and target article is here.

Tuesday, October 17, 2017

Work and the Loneliness Epidemic

Vivek Murthy
Harvard Business Review

Here is an excerpt:

During my years caring for patients, the most common pathology I saw was not heart disease or diabetes; it was loneliness. The elderly man who came to our hospital every few weeks seeking relief from chronic pain was also looking for human connection: He was lonely. The middle-aged woman battling advanced HIV who had no one to call to inform that she was sick: She was lonely too. I found that loneliness was often in the background of clinical illness, contributing to disease and making it harder for patients to cope and heal.

This may not surprise you. Chances are, you or someone you know has been struggling with loneliness. And that can be a serious problem. Loneliness and weak social connections are associated with a reduction in lifespan similar to that caused by smoking 15 cigarettes a day and even greater than that associated with obesity. But we haven’t focused nearly as much effort on strengthening connections between people as we have on curbing tobacco use or obesity. Loneliness is also associated with a greater risk of cardiovascular disease, dementia, depression, and anxiety. At work, loneliness reduces task performance, limits creativity, and impairs other aspects of executive function such as reasoning and decision making. For our health and our work, it is imperative that we address the loneliness epidemic quickly.

Once we understand the profound human and economic costs of loneliness, we must determine whose responsibility it is to address the problem.

The article is here.

Is it Ethical for Scientists to Create Nonhuman Primates with Brain Disorders?

Carolyn P. Neuhaus
The Hastings Center
Originally published on September 25, 2017

Here is an excerpt:

Such is the rationale for creating primate models: the brain disorders under investigation cannot be accurately modelled in other nonhuman organisms, because of differences in genetics, brain structure, and behaviors. But research involving humans with brain disorders is also morally fraught. Some people with brain disorders experience impairments to decision-making capacity as a component or symptom of disease, and therefore are unable to provide truly informed consent to research participation. Some of the research is too invasive, and would be grossly unethical to carry out with human subjects. So, nonhuman primates, and macaques in particular, occupy a “sweet spot.” Their genetic code and brain structure are sufficiently similar to humans’ so as to provide a valid and accurate model of human brain disorders. But, they are not conferred protections from research that apply to humans and to some non-human primates, notably chimpanzees and great apes. In the United States, for example, chimpanzees are protected from invasive research, but other primates are not. Some have suggested, including in a recent article in Journal of Medical Ethics, that protections like those afforded to chimpanzees ought to be extended to other primates and other animals, such as dogs, as evidence mounts that they also have complex cognitive, social, and emotional lives. For now, macaques and other primates remain in use.

Prior to the discovery of genome-editing tools like ZFNs, TALENs, and, most recently, CRISPR, it was extremely challenging, almost prohibitively so, to create non-human primates with precise, heritable genome modifications. But CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) represents a technological advance that brings genome engineering of non-human primates well within reach.

The article is here.

Monday, October 16, 2017

Can we teach robots ethics?

Dave Edmonds
BBC.com
Originally published October 15, 2017

Here is an excerpt:

However, machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language by mimicking humans have been shown to import various biases. Male and female names have different associations. The machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases and try to combat them.

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code - a way of scrutinising what's happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's bad actions.

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

The article is here.

No Child Left Alone: Moral Judgments about Parents Affect Estimates of Risk to Children

Thomas, A. J., Stanford, P. K., & Sarnecka, B. W. (2016).
Collabra, 2(1), 10.

Abstract

In recent decades, Americans have adopted a parenting norm in which every child is expected to be under constant direct adult supervision. Parents who violate this norm by allowing their children to be alone, even for short periods of time, often face harsh criticism and even legal action. This is true despite the fact that children are much more likely to be hurt, for example, in car accidents. Why then do bystanders call 911 when they see children playing in parks, but not when they see children riding in cars? Here, we present results from six studies indicating that moral judgments play a role: The less morally acceptable a parent’s reason for leaving a child alone, the more danger people think the child is in. This suggests that people’s estimates of danger to unsupervised children are affected by an intuition that parents who leave their children alone have done something morally wrong.

Here is part of the discussion:

The most important conclusion we draw from this set of experiments is the following: People don’t only think that leaving children alone is dangerous and therefore immoral. They also think it is immoral and therefore dangerous. That is, people overestimate the actual danger to children who are left alone by their parents, in order to better support or justify their moral condemnation of parents who do so.

This brings us back to our opening question: How can we explain the recent hysteria about unsupervised children, often wildly out of proportion to the actual risks posed by the situation? Our findings suggest that once a moralized norm of ‘No child left alone’ was generated, people began to feel morally outraged by parents who violated that norm. The need (or opportunity) to better support or justify this outrage then elevated people’s estimates of the actual dangers faced by children. These elevated risk estimates, in turn, may have led to even stronger moral condemnation of parents and so on, in a self-reinforcing feedback loop.

The article is here.

Sunday, October 15, 2017

Official sends memo to agency leaders about ethical conduct

Avery Anapol
The Hill
Originally published October 10, 2017

The head of the Office of Government Ethics is calling on the leaders of government agencies to promote an “ethical culture.”

David Apol, acting director of the ethics office, sent a memo to agency heads titled, “The Role of Agency Leaders in Promoting an Ethical Culture.” The letter was sent to more than 100 agency heads, CNN reported.

“It is essential to the success of our republic that citizens can trust that your decisions and the decisions made by your agency are motivated by the public good and not by personal interests,” the memo reads.

Several government officials are under investigation for their use of chartered planes for government business.

One Cabinet official, former Health secretary Tom Price, resigned over his use of private jets. Treasury Secretary Steven Mnuchin is also under scrutiny for his travels.

“I am deeply concerned that the actions of some in Government leadership have harmed perceptions about the importance of ethics and what conduct is, and is not, permissible,” Apol wrote.

The memo includes seven suggested actions that Apol says leaders should take to strengthen the ethical culture in their agencies. The suggestions include putting ethics officials in senior leadership meetings, and “modeling a ‘Should I do it?’ mentality versus a ‘Can I do it?’ mentality.”

The article is here.

Saturday, October 14, 2017

Who Sees What as Fair? Mapping Individual Differences in Valuation of Reciprocity, Charity, and Impartiality

Laura Niemi and Liane Young
Social Justice Research

When scarce resources are allocated, different criteria may be considered: impersonal allocation (impartiality), the needs of specific individuals (charity), or the relational ties between individuals (reciprocity). In the present research, we investigated how people’s perspectives on fairness relate to individual differences in interpersonal orientations. Participants evaluated the fairness of allocations based on (a) impartiality, (b) charity, and (c) reciprocity. To assess interpersonal orientations, we administered measures of dispositional empathy (i.e., empathic concern and perspective-taking) and Machiavellianism. Across two studies, Machiavellianism correlated with higher ratings of reciprocity as fair, whereas empathic concern and perspective-taking correlated with higher ratings of charity as fair. We discuss these findings in relation to recent neuroscientific research on empathy, fairness, and moral evaluations of resource allocations.

The article is here.

Friday, October 13, 2017

Moral Distress: A Call to Action

The Editor
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 533-536.

During medical school, I was exposed for the first time to ethical considerations that stemmed from my new role in the direct provision of patient care. Ethical obligations were now both personal and professional, and I had to navigate conflicts between my own values and those of patients, their families, and other members of the health care team. However, I felt paralyzed by factors such as my relative lack of medical experience, low position in the hospital hierarchy, and concerns about evaluation. I experienced a profound and new feeling of futility and exhaustion, one that my peers also often described.

I have since realized that this experience was likely “moral distress,” a phenomenon originally described by Andrew Jameton in 1984. For this issue, the following definition, adapted from Jameton, will be used: moral distress occurs when a clinician makes a moral judgment about a case in which he or she is involved and an external constraint makes it difficult or impossible to act on that judgment, resulting in “painful feelings and/or psychological disequilibrium”. Moral distress has subsequently been shown to be associated with burnout, which includes poor coping mechanisms such as moral disengagement, blunting, denial, and interpersonal conflict.

Moral distress as originally conceived by Jameton pertained to nurses and has been extensively studied in the nursing literature. However, until a few years ago, the literature was largely silent on the moral distress of medical students and physicians.

The article is here.