Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, April 23, 2020

We Tend To See Acts We Disapprove Of As Deliberate

Jesse Singal
BPS Research Digest
Originally published 14 April 20

One of the most important and durable findings in moral and political psychology is that there is a tail-wags-the-dog aspect to human morality. Most of us like to think we have carefully thought-through, coherent moral systems that guide our behaviour and judgments. In reality our behaviour and judgments often stem from gut-level impulses, and only after the fact do we build elaborate moral rationales to justify what we believe and do.

A new paper in the Journal of Personality and Social Psychology examines this issue through a fascinating lens: free will. Or, more specifically, via people’s judgments about how much free will others had when committing various transgressions. The team, led by Jim A. C. Everett of the University of Kent and Cory J. Clark of Durham University, ran 14 studies designed to evaluate the possibility that at least some of the time the moral tail wags the dog: people first decide whether someone is blameworthy, and only then judge how much free will that person had, in a way that allows them to justify blaming those they want to blame and excusing those they want to excuse.

The researchers examined this hypothesis, for which there is already some evidence, through the lens of American partisan politics. In the paper they note that previous research has shown that conservatives have a greater belief in free will than liberals, and are more moralising in general (that is, they categorise a larger number of acts as morally problematic, and rely on a greater number of principles — or moral foundations — in making these judgments). The first two of the new studies replicated these findings — this is consistent with the idea, put simply, that conservatives believe in free will more because it allows them to level more moral judgments.

The info is here.

Universalization Reasoning Guides Moral Judgment

Levine, S., Kleiman-Weiner, M., et al.
(2020, February 23).
https://doi.org/10.31234/osf.io/p7e6h

Abstract

To explain why an action is wrong, we sometimes say: “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and that children show a comparable pattern of judgment as early as 4 years old. We conclude that alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalizing holds an important place in our moral minds.

From the Discussion:

Across five studies, we show that both adults and children sometimes make moral judgments well described by the logic of universalization, and not by standard outcome-, rule- or norm-based models of moral judgment. We model participants’ judgment of the moral acceptability of an action as proportional to the change in expected utility in the hypothetical world where all interested parties feel free to do the action. This model accounts for the ways in which moral judgment is sensitive to the number of parties hypothetically interested in an action, the threshold at which harmful outcomes occur, and their interaction. By incorporating data on participants’ subjectively perceived utility functions we can predict their moral judgments of threshold problems with quantitative precision, further validating our proposed computational model.
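To make the model concrete, here is a minimal Python sketch of the universalization logic described above: an action’s acceptability is scored by the change in collective utility in the hypothetical world where every interested party acts. The threshold scenario, utility function, and parameter values are illustrative assumptions, not the authors’ implementation.

```python
def collective_utility(num_actors, benefit_per_actor, threshold, collapse_cost):
    """Total utility when num_actors people perform the action: each actor
    gains a small private benefit, but once the number of actors exceeds
    the threshold, a shared resource collapses and a large cost is paid."""
    utility = benefit_per_actor * num_actors
    if num_actors > threshold:
        utility -= collapse_cost
    return utility


def universalized_acceptability(n_interested, benefit_per_actor=1.0,
                                threshold=5, collapse_cost=50.0):
    """Acceptability is proportional to the change in expected utility in
    the hypothetical world where all interested parties feel free to act,
    relative to the world where none of them do."""
    u_all = collective_utility(n_interested, benefit_per_actor, threshold, collapse_cost)
    u_none = collective_utility(0, benefit_per_actor, threshold, collapse_cost)
    return u_all - u_none


# Harmless below the threshold, sharply negative once it is crossed.
for n in (1, 3, 5, 6, 10):
    print(n, universalized_acceptability(n))
```

Run over increasing numbers of interested parties, the score stays positive until the threshold is crossed, mirroring the sensitivity to party number, threshold, and their interaction that the authors report.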

The research is here.

Wednesday, April 22, 2020

Your Code of Conduct May Be Sending the Wrong Message

F. Gino, M. Kouchaki, & Y. Feldman
Harvard Business Review
Originally posted 13 March 20

Here is an excerpt:

We examined the relationship between the language used (personal or impersonal) in these codes and corporate illegality. Research assistants blind to our research questions and hypotheses coded each document based on the degree to which it used “we” or “member/employee” language. Next, we searched media sources for any type of illegal acts these firms may have been involved in, such as environmental violations, anticompetitive actions, false claims, and fraudulent actions. Our analysis showed that firms that used personal language in their codes of conduct were more likely to be found guilty of illegal behaviors.
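As a rough illustration of that coding scheme, the sketch below scores a document by the share of personal marker words (“we”, “us”) relative to impersonal ones (“employee”, “member”). The word lists and scoring rule are assumptions made for illustration; in the study, the documents were coded by human research assistants, not a script.

```python
import re

PERSONAL = {"we", "us", "our", "ours"}
IMPERSONAL = {"employee", "employees", "member", "members", "staff", "personnel"}

def personal_language_score(text: str) -> float:
    """Share of marker words that are personal ('we'/'us') rather than
    impersonal ('employee'/'member'); returns 0.0 if no markers occur."""
    words = re.findall(r"[a-z']+", text.lower())
    n_personal = sum(w in PERSONAL for w in words)
    n_impersonal = sum(w in IMPERSONAL for w in words)
    total = n_personal + n_impersonal
    return n_personal / total if total else 0.0

print(personal_language_score("We put our customers first."))                     # 1.0
print(personal_language_score("Employees are expected to put customers first."))  # 0.0
```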

We found this initial evidence to be compelling enough to dig further into the link between personal “we” language and unethical behavior. What would explain such a link? We reasoned that when language communicating ethical standards is personal, employees tend to assume they are part of a community where members are easygoing, helpful, cooperative, and forgiving. By contrast, when the language is impersonal — for example, “organizational members are expected to put customers first” — employees feel they are part of a transactional relationship in which members are more formal and distant.

Here’s the problem: When we view our organization as tolerant and forgiving, we believe we’re less likely to be punished for misconduct. Across nine different studies, using data from lab- and field-based experiments as well as a large dataset of S&P firms, we find that personal language (“we,” “us”) leads to less ethical behavior than impersonal language (“employees,” “members”) does, apparently because people encountering more personal language believe their organization is less serious about punishing wrongdoing.

The info is here.

Ethics deserve a starring role in business dealings

Barbara Lang
bizjournals.com
Originally published 5 March 20

Here is an excerpt:

They created cultures of fear, deception and arrogance, and they put their own personal interests in front of all others, including their own families. They didn’t care whose lives they destroyed, using their power to conquer and destroy anyone blocking their path to money and gratification. Shockingly, they manipulated those around them — people with whom they built trust — to foster networks of secrecy and allegiance beyond anything we have seen in the history of American business. Ironically, they crucified themselves through historic cheating, lying and a breakdown of ethics never seen before.

Many are household names, and we should all cringe when we hear them, even as they are reduced to insignificance and confined to moldy jail cells. Ken Lay, CEO and chairman of Enron, was the mastermind of a historic accounting scandal at the energy company, resulting in its bankruptcy. He was found guilty of 10 counts of securities fraud before he died in 2006. There are also the two infamous Bernies: Ebbers and Madoff. Ebbers, the former WorldCom CEO, was convicted of securities fraud and conspiracy as part of that company’s false financial reporting scandal. Maybe the most egregious and sinister of them all was Madoff, whose Ponzi scheme defrauded innocent investors of millions of dollars and life savings. He rots in federal prison while his clients try to make sense of the destruction he knowingly caused.

The info is here.

Tuesday, April 21, 2020

When Google and Apple get privacy right, is there still something wrong?

Tamar Sharon
Medium.com
Originally posted 15 April 20

Here is an excerpt:

As the understanding that we are in this for the long run settles in, the world is increasingly turning its attention to technological solutions to address the devastating COVID-19 pandemic. Contact-tracing apps in particular seem to hold much promise. Using Bluetooth technology to communicate between users’ smartphones, these apps could map contacts between infected individuals and alert people who have been in proximity to an infected person. Some countries, including China, Singapore, South Korea and Israel, deployed such apps early on. Health authorities in the UK, France, Germany, the Netherlands, Iceland, the US and other countries are currently considering implementing such apps as a means of easing lock-down measures.

There are some bottlenecks. Do they work? The effectiveness of these applications has not been evaluated — in isolation or as part of an integrated strategy. How many people would need to use them? Not everyone has a smartphone. Even in rich countries, the most vulnerable group, aged over 80, is least likely to have one. Then there’s the question about fundamental rights and liberties, first and foremost privacy and data protection. Will contact-tracing become part of a permanent surveillance structure in the prolonged “state of exception” we are sleep-walking into?

Prompted by public discussions about this last concern, a number of European governments have indicated the need to develop such apps in a way that would be privacy-preserving, while independent efforts involving technologists and scientists to deliver privacy-centric solutions have been cropping up. The Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) initiative, and in particular the Decentralised Privacy-Preserving Proximity Tracing (DP-3T) protocol, which provides an outline for a decentralised system, are notable forerunners. Somewhat late in the game, the European Commission last week issued a Recommendation for a pan-European approach to the adoption of contact-tracing apps that would respect fundamental rights such as privacy and data protection.
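To see why the decentralised approach is considered privacy-preserving, here is a much-simplified Python sketch of the idea behind DP-3T, based on its published design outline: phones broadcast short-lived identifiers derived from a rotating secret key, and exposure matching happens on the phone itself. Key sizes, epoch handling, and upload mechanics are simplified assumptions; the actual protocol differs in important details.

```python
import hashlib
import hmac
import os

def next_day_key(secret_key: bytes) -> bytes:
    """Rotate the secret day key by hashing: SK_t = H(SK_{t-1})."""
    return hashlib.sha256(secret_key).digest()

def ephemeral_ids(day_key: bytes, ids_per_day: int = 96) -> list:
    """Derive the day's short-lived broadcast identifiers from the day key
    with a PRF; without the key, observers cannot link the IDs to each
    other or to the phone that sent them."""
    prf = hmac.new(day_key, b"broadcast key", hashlib.sha256).digest()
    return [hashlib.sha256(prf + i.to_bytes(2, "big")).digest()[:16]
            for i in range(ids_per_day)]

# A diagnosed user uploads only their day keys. Everyone else re-derives
# the corresponding ephemeral IDs locally and compares them with the IDs
# their own phone overheard; no central contact graph is ever built.
sk = os.urandom(32)
published_keys = [sk, next_day_key(sk)]   # two days of keys, uploaded after diagnosis
overheard = set(ephemeral_ids(sk)[:3])    # IDs my phone recorded nearby
exposed = any(eid in overheard
              for key in published_keys
              for eid in ephemeral_ids(key))
print("possible exposure:", exposed)
```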

The info is here.

Piercing the Smoke Screen: Dualism, Free Will, and Christianity

S. Murray, E. Murray, & T. Nadelhoffer
PsyArXiv Preprints
Originally created on 18 Feb 20

Abstract

Research on the folk psychology of free will suggests that people believe free will is incompatible with determinism and that human decision-making cannot be exhaustively characterized by physical processes. Some suggest that certain elements of Western cultural history, especially Christianity, have helped to entrench these beliefs in the folk conceptual economy. Thus, on the basis of this explanation, one should expect to find three things: (1) a significant correlation between belief in dualism and belief in free will, (2) that people with predominantly incompatibilist commitments are likely to exhibit stronger dualist beliefs than people with predominantly compatibilist commitments, and (3) people who self-identify as Christians are more likely to be dualists and incompatibilists than people who do not self-identify as Christians. We present the results of two studies (n = 378) that challenge two of these expectations. While we do find a significant correlation between belief in dualism and belief in free will, we found no significant difference in dualist tendencies between compatibilists and incompatibilists. Moreover, we found that self-identifying as Christian did not significantly predict preference for a particular metaphysical conception of free will. This calls into question assumptions about the relationship between beliefs about free will, dualism, and Christianity.
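For readers curious how the three expectations above might be tested, here is an illustrative Python sketch using simulated placeholder data. It is not the study’s data or analysis code, and the authors’ actual measures and models may differ.

```python
import numpy as np
from scipy import stats

# Simulated placeholder data standing in for survey scale scores.
rng = np.random.default_rng(0)
n = 378
dualism = rng.normal(4.0, 1.0, n)                    # dualism scale scores
free_will = 0.3 * dualism + rng.normal(3.0, 1.0, n)  # free-will scale scores
incompatibilist = rng.integers(0, 2, n).astype(bool) # metaphysical preference
christian = rng.integers(0, 2, n).astype(bool)       # self-identification

# (1) Correlation between belief in dualism and belief in free will.
r, p1 = stats.pearsonr(dualism, free_will)

# (2) Do incompatibilists score higher on dualism than compatibilists?
t, p2 = stats.ttest_ind(dualism[incompatibilist], dualism[~incompatibilist])

# (3) Does Christian self-identification predict incompatibilism?
table = [[np.sum(christian & incompatibilist), np.sum(christian & ~incompatibilist)],
         [np.sum(~christian & incompatibilist), np.sum(~christian & ~incompatibilist)]]
chi2, p3, _dof, _ = stats.chi2_contingency(table)

print(f"(1) dualism-free will r = {r:.2f} (p = {p1:.3f})")
print(f"(2) incompatibilist vs compatibilist t = {t:.2f} (p = {p2:.3f})")
print(f"(3) Christian x incompatibilism chi2 = {chi2:.2f} (p = {p3:.3f})")
```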

The research is here.

Monday, April 20, 2020

How Becoming a Doctor Made Me a Worse Listener

Adeline Goss
JAMA. 2020;323(11):1041-1042.
doi:10.1001/jama.2020.2051

Here is an excerpt:

And I hadn’t noticed. Maybe that was because I was still connecting to patients. I still choked up when they cried, felt joy when they rejoiced, felt moved by and grateful for my work, and generally felt good about the care I was providing.

But as I moved through my next days in clinic, I began to notice the unconscious tricks I had developed to maintain a connection under time pressure. A whole set of expressions played out across my face during history taking—nonverbal concern, nonverbal gentleness, nonverbal apology—a time-efficient method of conveying empathy even when I was asking directed questions, controlling the type and volume of information I received, and, at times, interrupting. Sometimes I apologized to patients for my style of interviewing, explaining that I wanted to make sure I understood things clearly so that I could treat them. I apologized because I didn’t like communicating this way. I can’t imagine it felt good to them.

What’s strange is that, at the end of these visits, patients often thanked me for my concern and detail-orientedness. They may have interpreted my questioning as a sign that I was interested. But was I?

Interest is a multilayered concept in medicine. I care about patients, and I am interested in their stories in the sense that they contain the information I need to make the best possible decisions for their care. Interest motivates doctors to take a detailed history, review the chart, and analyze the literature. Interest leads to the correct diagnosis and treatment. Residency rewards this kind of interest. Perhaps as a result, looking around at my co-residents, it’s in abundant supply, even when time is tight.

The info is here.

Europe plans to strictly regulate high-risk AI technology

Nicholas Wallace
sciencemag.org
Originally published 19 Feb 20

Here is an excerpt:

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The info is here.

Sunday, April 19, 2020

On the ethics of algorithmic decision-making in healthcare

Grote T, Berens P
Journal of Medical Ethics 
2020;46:205-211.

Abstract

In recent years, a plethora of high-profile scientific publications has reported machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

From the Conclusion

In this paper, we aimed to examine the opportunities and pitfalls machine learning presents for enhancing medical decision-making on epistemic and ethical grounds. As should have become clear, enhancing medical decision-making by deferring to machine learning algorithms requires trade-offs at different levels. Clinicians, or their respective healthcare institutions, face a dilemma: while there is plenty of evidence of machine learning algorithms outsmarting their human counterparts, their deployment comes at the cost of high degrees of uncertainty. On epistemic grounds, the relevant uncertainty promotes risk-averse decision-making among clinicians, which might then lead to impoverished medical diagnosis. From an ethical perspective, deferring to machine learning algorithms blurs the attribution of accountability and imposes health risks on patients. Furthermore, the deployment of machine learning might also foster a shift of norms within healthcare. It needs to be pointed out, however, that none of the issues we discussed presents a knockout argument against deploying machine learning in medicine, and our article is not intended that way at all. On the contrary, we are convinced that machine learning provides plenty of opportunities to enhance decision-making in medicine.

The article is here.