Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, February 29, 2020

Does Morality Matter? Depends On Your Definition Of Right And Wrong

Hannes Leroy
forbes.com
Originally posted 30 Jan 20

Here is an excerpt:

For our research into morality we reviewed some 300 studies on moral leadership. We discovered that morality is – generally speaking – a good thing for leadership effectiveness but it is also a double-edged sword about which you need to be careful and smart. 

There are three basic approaches to moral leadership.

First, followers can be inspired by a leader who advocates the highest common good for all and is motivated to contribute to that common good from an expectation of reciprocity (servant leadership; consequentialism).

Second, followers can also be inspired by a leader who advocates the adherence to a set of standards or rules and is motivated to contribute to the clarity and safety this structure imposes for an orderly society (ethical leadership; deontology).

Third and finally, followers can also be inspired by a leader who advocates for moral freedom and corresponding responsibility and is motivated to contribute to this system in the knowledge that others will afford them their own moral autonomy (authentic leadership; virtue ethics).

The info is here.

Friday, February 28, 2020

Lon Fuller and the Moral Value of the Rule of Law

Murphy, Colleen
Law and Philosophy, Vol. 24, 2005.
Available at SSRN

It is often argued that the rule of law is only instrumentally morally valuable, valuable when and to the extent that a legal system is used to pursue morally valuable ends. In this paper, I defend Lon Fuller’s view that the rule of law has conditional non-instrumental as well as instrumental moral value. I argue, along Fullerian lines, that the rule of law is conditionally non-instrumentally valuable in virtue of the way a legal system structures political relationships. The rule of law specifies a set of requirements which lawmakers must respect if they are to govern legally. As such, the rule of law restricts the illegal or extra-legal use of power. When a society rules by law, there are clear rules articulating the behavior appropriate for citizens and officials. Such rules ideally determine the particular contours political relationships will take. When the requirements of the rule of law are respected, the political relationships structured by the legal system constitutively express the moral values of reciprocity and respect for autonomy. The rule of law is instrumentally valuable, I argue, because in practice the rule of law limits the kind of injustice which governments pursue. There is in practice a deeper connection between ruling by law and the pursuit of moral ends than advocates of the standard view recognize.

The next part of this paper outlines Lon Fuller’s conception of the rule of law and his explanation of its moral value. The third section illustrates how the Fullerian analysis draws attention to the impact that state-sanctioned atrocities can have upon the institutional functioning of the legal system, and so to their impact on the relationships between officials and citizens that are structured by that institution. The fourth section considers two objections to this account. According to the first, Razian objection, while the Fullerian analysis accurately describes the nature of the requirements of the rule of law, it offers a mistaken account of its moral value. Against my assertion that the rule of law has non-instrumental value, this objection argues that the rule of law is only instrumentally valuable. The second objection grants that the rule of law has non-instrumental moral value but claims that the Fullerian account of the requirements of the rule of law is incomplete.

Slow response times undermine trust in algorithmic (but not human) predictions

E Efendic, P van de Calseyde, & A Evans
PsyArXiv Preprints
Last edited 22 Jan 20

Abstract

Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one component that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and they are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy and effort is therefore uncorrelated to the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.

General discussion 

When are people reluctant to trust algorithm-generated advice? Here, we demonstrate that it depends on the algorithm’s response time. People judged slowly (vs. quickly) generated predictions by algorithms as being of lower quality. Further, people were less willing to use slowly generated algorithmic predictions. For human predictions, we found the opposite: people judged slow human-generated predictions as being of higher quality. Similarly, they were more likely to use slowly generated human predictions. 

We find that the asymmetric effects of response time can be explained by different expectations of task difficulty for humans vs. algorithms. For humans, slower responses were congruent with expectations; the prediction task was presumably difficult so slower responses, and more effort, led people to conclude that the predictions were high quality. For algorithms, slower responses were incongruent with expectations; the prediction task was presumably easy so slower speeds, and more effort, were unrelated to prediction quality. 

The research is here.

Thursday, February 27, 2020

Liar, Liar, Liar

S. Vedantam, M. Penman, & T. Boyle
Hidden Brain - NPR.org
Originally posted 17 Feb 20

When we think about dishonesty, we mostly think about the big stuff.

We see big scandals, big lies, and we think to ourselves, I could never do that. We think we're fundamentally different from Bernie Madoff or Tiger Woods.

But behind big lies are a series of small deceptions. Dan Ariely, a professor of psychology and behavioral economics at Duke University, writes about this in his book The Honest Truth about Dishonesty.

"One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it's opportunity," he said.

These small lies are quite common. When we lie, it's not always a conscious or rational choice. We want to lie and we want to benefit from our lying, but we want to be able to look in the mirror and see ourselves as good, honest people. We might go a little too fast on the highway, or pocket extra change at a gas station, but we're still mostly honest ... right?

That's why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they're done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.

The info is here.

There is a 30-minute audio file worth listening to.

The cultural evolution of prosocial religions

Norenzayan, A., et al.
(2016). Behavioral and Brain Sciences, 39, E1.
doi:10.1017/S0140525X14001356

Abstract

We develop a cultural evolutionary theory of the origins of prosocial religions and apply it to resolve two puzzles in human psychology and cultural history: (1) the rise of large-scale cooperation among strangers and, simultaneously, (2) the spread of prosocial religions in the last 10–12 millennia. We argue that these two developments were importantly linked and mutually energizing. We explain how a package of culturally evolved religious beliefs and practices characterized by increasingly potent, moralizing, supernatural agents, credible displays of faith, and other psychologically active elements conducive to social solidarity promoted high fertility rates and large-scale cooperation with co-religionists, often contributing to success in intergroup competition and conflict. In turn, prosocial religious beliefs and practices spread and aggregated as these successful groups expanded, or were copied by less successful groups. This synthesis is grounded in the idea that although religious beliefs and practices originally arose as nonadaptive by-products of innate cognitive functions, particular cultural variants were then selected for their prosocial effects in a long-term, cultural evolutionary process. This framework (1) reconciles key aspects of the adaptationist and by-product approaches to the origins of religion, (2) explains a variety of empirical observations that have not received adequate attention, and (3) generates novel predictions. Converging lines of evidence drawn from diverse disciplines provide empirical support while at the same time encouraging new research directions and opening up new questions for exploration and debate.

The paper is here.

Wednesday, February 26, 2020

Zombie Ethics: Don’t Keep These Wrong Ideas About Ethical Leadership Alive

Bruce Weinstein
Forbes.com
Originally posted 18 Feb 20

Here is an excerpt:

Zombie Myth #1: There are no right and wrong answers in ethics

A simple thought experiment should permanently dispel this myth. Think about a time when you were disciplined or punished for something you firmly believed was unfair. Perhaps you were accused at work of doing something you didn’t do. Your supervisor Mike warned you not to do it again, even though you had plenty of evidence that you were innocent. Even though Mike didn’t fire you, your good reputation has been sullied for no good reason.

Suppose you tell your colleague Janice this story, and she responds, “Well, to you Mike’s response was unfair, but from Mike’s point of view, it was absolutely fair.” What would you say to Janice?

A. “You’re right. There are no right or wrong answers in ethics.”

B. “No, Janice. Mike didn’t have a different point of view. He had a mistaken point of view. There are facts at hand, and Mike refused to consider them.”

Perhaps you believed myth #1 before this incident occurred. Now that you’ve been on the receiving end of a true injustice, you see this myth for what it really is: a zombie idea that needs to go to its grave permanently.

Zombie Myth #2: Ethics varies from culture to culture and place to place

It’s tempting to treat this myth as true. For example, bribery is a widely accepted way to do business in many countries. At a speech I gave to commercial pilots, an audience member said that the high-level executives on a recent flight weren’t allowed to disembark until someone “took care of” a customs official. Either they could give him some money under the table and gain entry into the country, or they could leave.

But just because a practice is widely accepted doesn’t mean it is acceptable. That’s why smart businesses prohibit engaging in unfair international business practices, even if it means losing clients.

The info is here.

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision–driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.

The info is here.

Tuesday, February 25, 2020

Autonomy, mastery, respect, and fulfillment are key to avoiding moral injury in physicians

Simon G Talbot and Wendy Dean
BMJ blogs
Originally posted 16 Jan 20

Here is an excerpt:

We believe that distress is a clinician’s response to multiple competing allegiances—when they are forced to make a choice that transgresses a long-standing, deeply held commitment to healing. Doctors today are caught in a double bind between making patients’ needs the top priority (thereby upholding our Hippocratic Oath) and giving precedence to the business and financial frameworks of the healthcare system (insurance, hospital, and health system mandates).

Since our initial publication, we have come to believe that burnout is the end stage of moral injury, when clinicians are physically and emotionally exhausted with battling a broken system in their efforts to provide good care; when they feel ineffective because too often they have met with immovable barriers to good care; and when they depersonalize patients because emotional investment is intolerable when patient suffering is inevitable as a result of system dysfunction. Reconfiguring the healthcare system to focus on healing patients, rebuilding a sense of community and respect among doctors, and demonstrating the alignment of doctors’ goals with those of our patients may be the best way to address the crisis of distress and, potentially, find a way to prevent burnout. But how do we focus the restructuring this involves?

“Moral injury” has been widely adopted by doctors as a description for their distress, as evidenced by its use on social media and in non-academic publications. But what is at the heart of it? We believe that moral injury occurs when the basic elements of the medical profession are eroded. These are autonomy, mastery, respect, and fulfillment, which are all focused around the central principle of purpose.

The info is here.

The Morality of Taking From the Rich and Giving to the Poor

Noah Smith
Bloomberg
Originally posted 11 Feb 20

Here is an excerpt:

Instead, economists can help by trying to translate people’s preferences for fairness, equality and other moral goals into actionable policy. This requires getting a handle on what amount and types of redistribution people actually want. Some researchers now are attempting to do this.

For example, in a new paper, economists Alain Cohn, Lasse Jessen, Marko Klasnja and Paul Smeets, reasoning that richer people have an outsized impact on the political process, use an online survey to measure how wealthy individuals think about redistribution. Their findings were not particularly surprising; people in the top 5% of the income and wealth distributions supported lower taxes and tended to vote Republican.

The authors also performed an online experiment in which some people were allowed to choose to redistribute winnings among other experimental subjects who completed an online task. No matter whether the winnings were awarded based on merit or luck, rich subjects chose less redistribution.

But not all rich subjects. Cohn and his co-authors found that people who grew up wealthy favored redistribution about as much as average Americans. But those with self-made fortunes favored more inequality. Apparently, many people who make it big out of poverty or the middle class believe that everyone should do the same.

This suggests that the U.S. has a dilemma. A dynamic economy creates lots of new companies, which bring great fortunes to the founders. But if Cohn and his co-authors are right, those founders are likely to support less redistribution as a result. So if the self-made entrepreneurs wield political power, as the authors believe, there could be a political trade-off between economic dynamism and redistribution.

The info is here.