Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, December 17, 2018

How Wilbur Ross Lost Millions, Despite Flouting Ethics Rules

Dan Alexander
Forbes.com
Originally published December 14, 2018

Here is an excerpt:

By October 2017, Ross was out of time to divest. In his ethics agreement, he said he would get rid of the funds in the first 180 days after his confirmation—or if not, during a 60-day extension period. So on October 25, exactly 240 days after his confirmation, Ross sold part of his interests to funds managed by Goldman Sachs. Given that he waited until the last possible day to legally divest the assets, it seems certain that he ended up selling at a discount.

The very next day, on October 26, 2017, a reporter for the New York Times contacted Ross with a list of questions about his ties to Navigator, the Putin-linked company. Before the story was published, Ross took out a short position against Navigator—essentially betting that the company’s stock would go down. When the story finally came out, on November 5, 2017, the stock did not plummet initially, but it did creep down 4% by the time Ross closed the short position 11 days later, apparently bolstering his fortune by $3,000 to $10,000.

On November 1, 2017, the day after Ross shorted Navigator, he signed a sworn statement that he had divested everything he previously told federal ethics officials he would. But that was not true. In fact, Ross still owned more than $10 million worth of stock in Invesco, the parent company of his former private equity firm. The next month, he sold those shares, pocketing at least $1.2 million more than he would have if he had sold when he first promised to.

Am I a Hypocrite? A Philosophical Self-Assessment

John Danaher
Philosophical Disquisitions
Originally published November 9, 2018

Here are two excerpts:

The common view among philosophers is that hypocrisy is a moral failing. Indeed, it is often viewed as one of the worst moral failings. Why is this? Christine McKinnon’s article ‘Hypocrisy, with a Note on Integrity’ provides a good, clear defence of this view. The article itself is a classic exercise in analytical philosophical psychology. It tries to clarify the structure of hypocrisy and explain why we should take it so seriously. It does so by arguing that there are certain behaviours, desires and dispositions that are the hallmark of the hypocrite and that these behaviours, desires and dispositions undermine our system of social norms.

McKinnon makes this case by considering some paradigmatic instances of hypocrisy, and identifying the necessary and sufficient conditions that allow us to label these as instances of hypocrisy. My opening example of my email behaviour probably fits this paradigmatic mode — despite my protestations to the contrary. A better example, however, might be religious hypocrisy. There have been many well-documented historical cases of this, but let’s not focus on these. Let’s instead imagine a case that closely parallels these historical examples. Suppose there is a devout fundamentalist Christian preacher. He regularly preaches about the evils of homosexuality and secularism and professes to be heterosexual and devout. He calls upon parents to disown their homosexual children or to subject them to ‘conversion therapy’. Then, one day, this preacher is discovered to be a homosexual himself. Not just that, it turns out he has a long-term male partner whom he has kept hidden from the public for over 20 years, and that they were recently married in a non-religious humanist ceremony.

(cut)

In other words, what I refer to as my own hypocrisy seems to involve a good deal of self-deception and self-manipulation, not (just) the manipulation of others. That’s why I was relieved to read Daniel Statman’s article on ‘Hypocrisy and Self-Deception’. Statman wants to get away from the idea of the hypocrite as a moral cartoon character. Real people are way more interesting than that. As he sees it, the morally vicious form of hypocrisy that is the focus of McKinnon’s ire tends to overlap with and blur into self-deception much more frequently than she allows. The two things are not strongly dichotomous. Indeed, people can slide back and forth between them with relative ease: the self-deceived can slide into hypocrisy and the hypocrite can slide into self-deception.

Although I am attracted to this view, Statman points out that it is a tough sell. 

Sunday, December 16, 2018

Institutional Conflicts of Interest and Public Trust

Francisco G. Cigarroa, Bettie Sue Masters, Dan Sharphorn
JAMA. Published online November 14, 2018.
doi:10.1001/jama.2018.18482

Here is an excerpt:

It is no longer enough for institutions conducting research to have conflict of interest policies only for individual researchers; they must also directly address the growing concern about institutional conflicts of interest. Every research institution and university deserving of the public’s trust needs to have well-defined institutional conflict of interest policies. A process must be established that will ensure research is untainted by any personal financial interests of the researcher, and that no financial interests exist for the institution or the institution’s key decision makers that could cloud otherwise open and honest decisions regarding the institution’s research mission.

Education and culture are fundamental to the successful implementation of any policy. It is incumbent upon institutional decision makers and all employees involved in research to be knowledgeable about individual and institutional conflict of interest policies. It may not always be obvious to researchers that they have a perceived or real conflict of interest or bias. Therefore, it is important to establish a culture of transparency and disclosure of any outside interests that could potentially influence research, and to include individuals at the highest level of the institution. Policies should be clear and easy to implement and should permit pathways to provide disclosure with adequate explanation, as well as information regarding how potential or real conflicts of interest are managed or eliminated. This will require the establishment of interactive databases aimed at mitigating, to the extent possible, both individual and institutional conflicts of interest.

Policies alone are not sufficient to protect an institution from conflicts of interest. Institutional compliance with these policies, and dedication to establishing processes by which to identify, resolve, or eliminate institutional conflicts of interest, are necessary. Institutions and their respective boards of trustees should be prepared to address sensitive situations when a supervisor, executive leader, or trustee is identified as contributing to an institutional conflict of interest, and be prepared to direct specific actions to resolve such a conflict. In this regard, it would be prudent for governance to establish an institutional conflicts of interest committee with sufficient authority to manage or eliminate perceived or real conflicts of interest affecting the institution.

Saturday, December 15, 2018

What is ‘moral distress’? A narrative synthesis of the literature

Georgina Morley, Jonathan Ives, Caroline Bradbury-Jones, & Fiona Irvine
Nursing Ethics
Review article, first published October 8, 2017

Introduction

The concept of moral distress (MD) was introduced to nursing by Jameton, who defined MD as arising ‘when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action’. MD has subsequently gained increasing attention in nursing research, the majority of which has been conducted in North America, though work is now emerging in South America, Europe, the Middle East and Asia. Studies have highlighted the deleterious effects of MD, with correlations between higher levels of MD, negative perceptions of ethical climate and increased levels of compassion fatigue among nurses. The consensus is that MD can negatively impact patient care, causing nurses to avoid certain clinical situations and ultimately leave the profession. MD is therefore a significant problem within nursing, requiring investigation, understanding, clarification and responses. The growing body of MD research, however, is arguably failing to bring the required clarification and has instead complicated attempts to study it. The increasing number of cited causes and effects of MD means the term has expanded to the point that, according to Hanna, and to McCarthy and Deady, it is becoming an ‘umbrella term’ that lacks conceptual clarity, referring unhelpfully to a wide range of phenomena and causes. Without a coherent and consistent conceptual understanding, however, empirical studies of MD’s prevalence, effects and possible responses are likely to be confused and contradictory.

A useful starting point is a systematic exploration of existing literature to critically examine definitions and understandings currently available, interrogating their similarities, differences, conceptual strengths and weaknesses. This article presents a narrative synthesis that explored proposed necessary and sufficient conditions for MD, and in doing so, this article also identifies areas of conceptual tension and agreement.

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

Why Health Professionals Should Speak Out Against False Beliefs on the Internet

Joel T. Wu and Jennifer B. McCormick
AMA J Ethics. 2018;20(11):E1052-1058.
doi: 10.1001/amajethics.2018.1052.

Abstract

Broad dissemination and consumption of false or misleading health information, amplified by the internet, poses risks to public health and problems for both the health care enterprise and the government. In this article, we review government power for, and constitutional limits on, regulating health-related speech, particularly on the internet. We suggest that government regulation can only partially address false or misleading health information dissemination. Drawing on the American Medical Association’s Code of Medical Ethics, we argue that health care professionals have responsibilities to convey truthful information to patients, peers, and communities. Finally, we suggest that all health care professionals have essential roles in helping patients and fellow citizens obtain reliable, evidence-based health information.

Here is an excerpt:

We would suggest that health care professionals have an ethical obligation to correct false or misleading health information, share truthful health information, and direct people to reliable sources of health information within their communities and spheres of influence. After all, health and well-being are values shared by almost everyone. Principle V of the AMA Principles of Ethics states: “A physician shall continue to study, apply, and advance scientific knowledge, maintain a commitment to medical education, make relevant information available to patients, colleagues, and the public, obtain consultation, and use the talents of other health professionals when indicated” (italics added). And Principle VII states: “A physician shall recognize a responsibility to participate in activities contributing to the improvement of the community and the betterment of public health” (italics added). Taken together, these principles articulate an ethical obligation to make relevant information available to the public to improve community and public health. In the modern information age, wherein the unconstrained and largely unregulated proliferation of false health information is enabled by the internet and medical knowledge is no longer privileged, these 2 principles have a special weight and relevance.

Thursday, December 13, 2018

Does deciding among morally relevant options feel like making a choice? How morality constrains people’s sense of choice

Kouchaki, M., Smith, I. H., & Savani, K. (2018).
Journal of Personality and Social Psychology, 115(5), 788-804.
http://dx.doi.org/10.1037/pspa0000128

Abstract

We demonstrate that a difference exists between objectively having and psychologically perceiving multiple-choice options of a given decision, showing that morality serves as a constraint on people’s perceptions of choice. Across 8 studies (N = 2,217), using both experimental and correlational methods, we find that people deciding among options they view as moral in nature experience a lower sense of choice than people deciding among the same options but who do not view them as morally relevant. Moreover, this lower sense of choice is evident in people’s attentional patterns. When deciding among morally relevant options displayed on a computer screen, people devote less visual attention to the option that they ultimately reject, suggesting that when they perceive that there is a morally correct option, they are less likely to even consider immoral options as viable alternatives in their decision-making process. Furthermore, we find that experiencing a lower sense of choice because of moral considerations can have downstream behavioral consequences: after deciding among moral (but not nonmoral) options, people (in Western cultures) tend to choose more variety in an unrelated task, likely because choosing more variety helps them reassert their sense of choice. Taken together, our findings suggest that morality is an important factor that constrains people’s perceptions of choice, creating a disjunction between objectively having a choice and subjectively perceiving that one has a choice.

A choice may not feel like a choice when morality is at play

Susan Kelley
Cornell Chronicle
Originally posted November 15, 2018

Here is an excerpt:

People who viewed the issues as moral – regardless of which side of the debate they stood on – felt less of a sense of choice when faced with the decisions. “In contrast, people who made a decision that was not imbued with morality were more likely to view it as a choice,” Smith said.

The researchers saw this weaker sense of choice play out in the participants’ attention patterns. When deciding among morally relevant options displayed on a computer screen, they devoted less visual attention to the option that they ultimately rejected, suggesting they were less likely to even consider immoral options as viable alternatives in their decision-making, the study said.

Moreover, participants who felt they had fewer options tended to choose more variety later on. After deciding among moral options, the participants tended to opt for more variety when given the choice of seven different types of chocolate in an unrelated task. “It’s a very subtle effect but it’s indicative that people are trying to reassert their sense of autonomy,” Smith said.

Understanding the way that people make morally relevant decisions has implications for business ethics, he said: “If we can figure out what influences people to behave ethically or not, we can better empower managers with tools that might help them reduce unethical behavior in the workplace.”

Wednesday, December 12, 2018

Social relationships more important than hard evidence in partisan politics

phys.org
Dartmouth College
Originally posted November 13, 2018

Here is an excerpt:

According to the research, three factors drive the formation of social and political groups: social pressure to have stronger opinions, the relationship of an individual's opinions to those of their social neighbors, and the benefits of having social connections.

A key idea studied in the paper is that people choose their opinions and their connections to avoid differences of opinion with their social neighbors. By joining like-minded groups, individuals also prevent the psychological stress, or "cognitive dissonance," of considering opinions that do not match their own.

"Human social tendencies are what form the foundation of that political behavior," said Tucker Evans, a senior at Dartmouth who led the study. "Ultimately, strong relationships can have more value than hard evidence, even for things that some would take as proven fact."
