Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, November 7, 2019

Digital Ethics and the Blockchain

Dan Blum
ISACA Journal, Volume 2, 2018

Here is an excerpt:

Integrity and Transparency

Integrity and transparency are core values for delivering trust to prosperous markets. Blockchains can provide immutable land title records to improve property rights and growth in small economies, such as Honduras. In smart power grids, blockchain-enabled meters can replace inefficient centralized record-keeping systems for transparent energy trading. Businesses can keep transparent records for product provenance, production, distribution and sales. Forward-thinking governments are exploring use cases through which transparent, immutable blockchains could facilitate a lighter, more effective regulatory touch in holding industry accountable.

However, trade secrets and personal information should not be published openly on blockchains. Blockchain miners may reorder transactions to increase fees or delay certain business processes at the expense of others. Architects must leaven accountability and transparency with confidentiality and privacy. Developers (or regulators) should sometimes add a human touch to smart contracts to avoid rigid systems operating without any consumer safeguards.
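The tamper-evidence behind these use cases comes from hash chaining: each record commits to the hash of the record before it, so any edit to history invalidates every later link. Below is a minimal illustrative sketch in Python (my own simplification, not from the article, and omitting consensus, mining, and everything else a real blockchain requires):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministically hash a ledger record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: dict) -> None:
    """Append a record that commits to the hash of the previous record."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "payload": payload})

def verify_chain(chain: list) -> bool:
    """Recompute the hash links; an edited record breaks every later link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

# hypothetical land-title ledger
ledger = []
append_record(ledger, {"parcel": "A-102", "owner": "Alice"})
append_record(ledger, {"parcel": "A-102", "owner": "Bob"})
assert verify_chain(ledger)

ledger[0]["payload"]["owner"] = "Mallory"   # tampering with history
assert not verify_chain(ledger)             # detected by verification
```

The sketch only shows why published records are hard to alter quietly; it says nothing about the confidentiality and privacy trade-offs the excerpt raises.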

The info is here.

Are We Causing Moral Injury to Our Physician Workforce?

Carolyn Meltzer
theneuroethicsblog.com
Originally posted November 5, 2019

Here is an excerpt:

The term moral injury was coined by psychiatrist Jonathan Shay, MD PhD, who, while working at a Veterans Affairs hospital, noted that moral injury is present when 1) there is a betrayal of what is considered morally correct, 2) by someone who holds legitimate authority (conceptualized by Shay as “leadership malpractice”), and 3) in a high-stakes situation (Shay and Monroe 1998). Nash and Little (2013) went on to propose a model that identified the types of war-zone events that contributed to moral injury as witnessing events that are morally wrong (or that strongly contradict one’s own moral code), acting in ways that violate moral values, or feeling betrayed by those who were once trusted. In a fascinating study using the Moral Injury Event Scale and resting-state functional magnetic resonance imaging (fMRI), Sun and colleagues (2019) were able to discern a distinct pattern of altered functional neural connectivity in soldiers exposed to morally injurious events. In fact, functional connectivity between the left inferior parietal lobule and bilateral precuneus was positively related with the soldiers’ post-traumatic stress disorder (PTSD) symptoms and negatively related with scores on the Moral Injury Event Scale.

Moral injury has been recently applied as a construct for physician burnout. Those who argue for this framework propose that structural and cultural factors have contributed to physician burden by undervaluing physicians and over-relying on financial metrics (such as relative value units, RVUs) as the primary surrogate of physician productivity (Nurok and Gewertz 2019). Turner (2019) recently compared the military experience to that of physician providers. While one may draw similarities between the front line of healthcare delivery and that experienced by soldiers, Turner argues that a fundamental tenet of military leadership, that leaders eat last, provides effective support for the health of the workforce. In increasingly large healthcare organizations managed by administrators who may be distant from the front line and reliant on metrics of productivity, the necessary sense of empathy and support from leadership can seem lacking.

The info is here.

Wednesday, November 6, 2019

Insurance companies aren’t doctors. So why do we keep letting them practice medicine?

William E. Bennett Jr.
The Washington Post
Originally posted October 22, 2019

Here are two excerpts:

Here’s the thing: After a few minutes of pleasant chat with a doctor or pharmacist working for the insurance company, they almost always approve coverage and give me an approval number. There’s almost never a back-and-forth discussion; it’s just me saying a few key words to make sure the denial is reversed.

Because it ends up with the desired outcome, you might think this is reasonable. It’s not. On most occasions the “peer” reviewer is unqualified to make an assessment about the specific services.

They usually have minimal or incorrect information about the patient.

Not one has examined or spoken with the patient, as I have.

None of them have a long-term relationship with the patient and family, as I have.

The insurance company will say this system makes sure patients get the right medications. It doesn’t. It exists so that many patients will fail to get the medications they need.

(cut)

This is a system that saves insurance companies money by reflexively denying medical care that has been determined necessary by a physician.

And it should come as no surprise that denials have a disproportionate effect on vulnerable patient populations, such as sexual-minority youths and cancer patients.

We can do better. If physicians order too many expensive tests or drugs, there are better ways to improve their performance and practice, such as quality-improvement initiatives through electronic medical records.

When an insurance company reflexively denies care and then makes it difficult to appeal that denial, it is making health-care decisions for patients.

The info is here.

How to operationalize AI ethics

Khari Johnson
venturebeat.com
Originally published October 7, 2019

Here is an excerpt:

Tools, frameworks, and novel approaches

One of Thomas’ favorite AI ethics resources comes from the Markkula Center for Applied Ethics at Santa Clara University: a toolkit that recommends a number of processes to implement.

“The key one is ethical risk sweeps, periodically scheduling times to really go through what could go wrong and what are the ethical risks. Because I think a big part of ethics is thinking through what can go wrong before it does and having processes in place around what happens when there are mistakes or errors,” she said.

To root out bias, Sobhani recommends the What-If visualization tool from Google’s People + AI Research (PAIR) initiative as well as FairTest, a tool for “discovering unwarranted association within data-driven applications” from academic institutions like EPFL and Columbia University. She also endorses privacy-preserving AI techniques like federated learning to ensure better user privacy.
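For readers unfamiliar with the last technique: federated learning keeps raw data on users’ devices and shares only model updates, which a coordinating server averages. Below is a minimal sketch of federated averaging (my own illustration with a toy linear model and synthetic data; it is not from the article or from any of the tools named above):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train on one client's private data; only the updated weights leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_average(weights, clients):
    """One round of federated averaging: combine client updates, weighted by data size."""
    updates = [local_update(weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                           # three clients, each with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):                          # 20 communication rounds
    w = federated_average(w, clients)
print(w)                                     # approaches true_w without pooling raw data
```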

In addition to resources recommended by panelists, Algorithm Watch maintains a running list of AI ethics guidelines. Last week, the group found that guidelines released in March 2018 by IEEE, the world’s largest association for professional engineers, have seen little adoption at Facebook and Google.

The info is here.

Tuesday, November 5, 2019

Moral Enhancement: A Realistic Approach

Greg Conan
British Medical Journal Blogs
Originally published August 29, 2019

Here is an excerpt:

If you could take a pill to make yourself a better person, would you do it? Could you justifiably make someone else do it, even if they do not want to?

When presented so simplistically, the idea might seem unrealistic or even impossible. The concepts of “taking a pill” and “becoming a better person” seem to belong to different categories. But many of the traits commonly considered to make one a “good person”—such as treating others fairly and kindly without violence—are psychological traits strongly influenced by neurobiology, and neurobiology can be changed using medicine. So when and how, if ever, should medicine be used to improve moral character?

Moral bioenhancement (MBE), the concept of improving moral character using biomedical technology, has fascinated me for years—especially once I learned that it has been hotly debated in the bioethics literature since 2008. I have greatly enjoyed diving into the literature to learn about how the concept has been analyzed and presented. Much of the debate has focused on its most abstract topics, like defining its terms and relating MBE to freedom. Although my fondness for analytic philosophy means that I cannot condemn anyone for working to examine ideas with maximum clarity and specificity, any MBE proponent who actually wants MBE to be implemented must focus on realistic methods.

The info is here.

Will Robots Wake Up?

Susan Schneider
orbitermag.com
Originally published September 30, 2019

Machine consciousness, if it ever exists, may not be found in the robots that tug at our heartstrings, like R2D2. It may instead reside in some unsexy server farm in the basement of a computer science building at MIT. Or perhaps it will exist in some top-secret military program and get snuffed out, because it is too dangerous or simply too inefficient.

AI consciousness likely depends on phenomena that we cannot, at this point, gauge—such as whether some microchip yet to be invented has the right configuration, or whether AI developers or the public want conscious AI. It may even depend on something as unpredictable as the whim of a single AI designer, like Anthony Hopkins’s character in Westworld. The uncertainty we face moves me to a middle-of-the-road position, one that stops short of either techno-optimism (believing that technology can solve our problems) or biological naturalism.

This approach I call, simply, the “Wait and See Approach.”

In keeping with my desire to look at real-world considerations that speak to whether AI consciousness is even compatible with the laws of nature—and, if so, whether it is technologically feasible or even interesting to build—my discussion draws from concrete scenarios in AI research and cognitive science.

The info is here.

Monday, November 4, 2019

Ethical Algorithms: Promise, Pitfalls and a Path Forward

Jay Van Bavel, Tessa West, Enrico Bertini, and Julia Stoyanovich
PsyArXiv Preprints
Originally posted October 21, 2019

Abstract

Fairness in machine-assisted decision making is critical to consider, since a lack of fairness can harm individuals or groups, erode trust in institutions and systems, and reinforce structural discrimination. To avoid making ethical mistakes, or amplifying them, it is important to ensure that the algorithms we develop are fair and promote trust. We argue that the marriage of techniques from behavioral science and computer science is essential to develop algorithms that make ethical decisions and ensure the welfare of society. Specifically, we focus on the role of procedural justice, moral cognition, and social identity in promoting trust in algorithms and offer a road map for future research on the topic.

--

The increasing role of machine-learning and algorithms in decision making has revolutionized areas ranging from the media to medicine to education to industry. As the recent One Hundred Year Study on Artificial Intelligence (Stone et al., 2016) reported: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Therefore, the effective development and widespread adoption of algorithms will hinge not only on the sophistication of engineers and computer scientists, but also on the expertise of behavioural scientists.

These algorithms hold enormous promise for solving complex problems, increasing efficiency, reducing bias, and even making decision-making transparent. However, the last few decades of behavioral science have established that humans hold a number of biases and shortcomings that impact virtually every sphere of human life (Banaji & Greenwald, 2013), and discrimination can become entrenched, amplified, or even obscured when decisions are implemented by algorithms (Kleinberg, Ludwig, Mullainathan, & Sunstein, 2018). While there has been a growing awareness that programmers and organizations should pay greater attention to discrimination and other ethical considerations (Dignum, 2018), very little behavioral research has directly examined these issues. In this paper, we describe how behavioural science will play a critical role in the development of ethical algorithms and outline a roadmap for behavioural scientists and computer scientists to ensure that these algorithms are as ethical as possible.
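One concrete way such concerns get operationalized is a simple fairness audit of a model’s decisions across groups. The sketch below is my own illustration, not from the paper; it computes two commonly used gaps, demographic parity and equal opportunity, on toy data:

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Compare positive-decision rates and true-positive rates across two groups."""
    rates, tprs = [], []
    for g in (0, 1):
        mask = group == g
        rates.append(y_pred[mask].mean())                         # selection rate
        pos = mask & (y_true == 1)
        tprs.append(y_pred[pos].mean() if pos.any() else np.nan)  # true-positive rate
    return {
        "demographic_parity_gap": abs(rates[0] - rates[1]),
        "equal_opportunity_gap": abs(tprs[0] - tprs[1]),
    }

# toy audit: binary decisions for eight applicants from two groups
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gaps(y_true, y_pred, group))
```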

The paper is here.

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Johnson, S. G. B., & Ahn, J.
(2019, September 10).
PsyArXiv
https://doi.org/10.31234/osf.io/xetwg

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

Here is the beginning of the General Discussion:

Much of our behavior is tinged with shades of morality. How third parties judge those behaviors has numerous social consequences: People judged as behaving immorally can be socially ostracized, less interpersonally attractive, and less able to take advantage of win–win agreements. Indeed, our desire to avoid ignominy and maintain our moral reputations motivates much of our social behavior. On the other hand, moral judgment is subject to a variety of heuristics and biases that appear to violate normative moral theories and lead to inconsistency (Bartels, Bauman, Cushman, Pizarro, & McGraw, 2015; Sunstein, 2005). Despite the dominating influence of moral judgment in everyday social cognition, little is known about how judgments of individual acts scale up into broader judgments about sequences of actions, such as moral offsetting (a morally bad act motivates a subsequent morally good act) or self-licensing (a morally good act motivates a subsequent morally bad act). That is, we need a theory of karmic accounting—how rights and wrongs add up in moral judgment.

Sunday, November 3, 2019

The Sex Premium in Religiously Motivated Moral Judgment

Liana Hone, Thomas McCauley, Eric Pedersen,
Evan Carter, and Michael McCullough
PsyArXiv Preprints

Abstract

Religion encourages people to reason about moral issues deontologically rather than on the basis of the perceived consequences of specific actions. However, recent theorizing suggests that religious people’s moral convictions are actually quite strategic (albeit unconsciously so), designed to make their worlds more amenable to their favored approaches to solving life’s basic challenges. In six experiments, we find that religious cognition places a “sex premium” on moral judgments, causing people to judge violations of conventional sexual morality as particularly objectionable. The sex premium is especially strong among highly religious people, and applies to both legal and illegal acts. Religion’s influence on moral reasoning, even if deontological, emphasizes conventional sexual norms, and may reflect the strategic projects to which religion has been applied throughout history.

From the Discussion

How does the sex premium in religiously motivated moral judgment arise during development? We see three plausible pathways. First, society’s vectors for religious cultural learning may simply devote more attention to sex and reproduction than to prosociality when they seek to influence others’ moral stances. Conservative preachers, for instance, devote more time to issues of sexual purity than do liberal preachers, and religious parents discuss the morality of sex with their children more frequently than do less religious parents, even though they discuss sex with their children less frequently overall. Second, strong emotions facilitate cultural learning by improving attention, memory, and motivation, and few human experiences generate stronger emotions than do sex and reproduction. If the emotions that regulate sexual attraction, arousal, and avoidance (e.g., sexual disgust) are stronger than those that regulate prosocial behavior (e.g., empathy, moralistic anger), then the sex premium documented here may emerge from the fact that religiously motivated sexual moralists can create more powerful cultural learning experiences than prosocial moralists can. Finally, given the extreme importance of sex and reproduction to fitness, the children of religiously adherent adults may observe that violations of local sexual standards evoke greater moral outrage and condemnation from third parties than do violations of local standards for prosocial behavior.

The research is here.