Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, June 22, 2019

Morality and Self-Control: How They are Intertwined, and Where They Differ

Wilhelm Hofmann, Peter Meindl, Marlon Mooijman, & Jesse Graham
PsyArXiv Preprints
Last edited November 18, 2018

Abstract

Despite sharing conceptual overlap, morality and self-control research have led largely separate lives. In this article, we highlight neglected connections between these major areas of psychology. To this end, we first note their conceptual similarities and differences. We then show how morality research, typically emphasizing aspects of moral cognition and emotion, may benefit from incorporating motivational concepts from self-control research. Similarly, self-control research may benefit from a better understanding of the moral nature of many self-control domains. We place special focus on various components of self-control and on the ways in which self-control goals may be moralized.

(cut)

Here is the Conclusion:

How do we resist temptation, prioritizing our future well-being over our present pleasure? And how do we resist acting selfishly, prioritizing the needs of others over our own self-interest? These two questions highlight the links between understanding self-control and understanding morality. We hope we have shown that morality and self-control share considerable conceptual overlap with regard to the way people regulate behavior in line with higher-order values and standards. As the psychological study of both areas becomes increasingly collaborative and integrated, insights from each subfield can better enable research and interventions to increase human health and flourishing.

Friday, June 21, 2019

Tech, Data And The New Democracy Of Ethics

Neil Lustig
Forbes.com
Originally posted June 10, 2019

As recently as 15 years ago, consumers had no visibility into whether the brands they shopped used overseas slave labor or if multinationals were bribing public officials to give them unfair advantages internationally. Executives could engage in whatever type of misconduct they wanted to behind closed doors, and there was no early warning system for investors, board members and employees, who were directly impacted by the consequences of their behavior.
Now, thanks to globalization, social media, big data, whistleblowers and corporate compliance initiatives, we have more visibility than ever into the organizations and people that affect our lives and our economy.

What we’ve learned from this surge in transparency is that sometimes companies mess up even when they’re not trying to. There’s a distinct difference between companies that deliberately engage in unethical practices and those that get caught up in them due to loose policies, inadequate self-policing or a few bad actors that misrepresent the ethics of the rest of the organization. The primary difference between these two types of companies is how fast they’re able to act -- and if they act at all.

Fortunately, just as technology and data can introduce unprecedented visibility into organizations’ unethical practices, they can also equip organizations with ways of protecting themselves from internal and external risks. As CEO of a compliance management platform, I believe there are three things that must be in place for organizations to stay above board in a rising democracy of ethics.

It's not biology bro: Torture and the Misuse of Science

Shane O'Mara and John Schiemann
PsyArXiv Preprints
Last edited on December 24, 2018

Abstract

Contrary to the (in)famous line in the film Zero Dark Thirty, the CIA's torture program was not based on biology or any other science. Instead, the Bush administration and the CIA decided to use coercion immediately after the 9/11 terrorist attacks and then veneered the program's justification with a patina of pseudoscience, ignoring the actual biology of torturing human brains. We reconstruct the Bush administration’s decision-making process from released government documents, independent investigations, journalistic accounts, and memoirs to establish that the policy decision to use torture took place in the immediate aftermath of the 9/11 attacks without any investigation into its efficacy. We then present the pseudo-scientific model of torture sold to the CIA based on a loose amalgamation of methods from the old KUBARK manual, reverse-engineering of SERE training techniques, and learned helplessness theory, show why this ad hoc model amounted to pseudoscience, and then catalog what the actual science of torturing human brains – available in 2001 – reveals about the practice. We conclude with a discussion of how the process of policy-making might incorporate countervailing evidence to forestall such policy failures, via an evidence-based policy brake deliberately instituted to prevent a policy that is contrary to law, ethics, and evidence from going forward.

Thursday, June 20, 2019

Legal Promise Of Equal Mental Health Treatment Often Falls Short

Graison Dangor
Kaiser Health News
Originally published June 7, 2019

Here is an excerpt:

The laws have been partially successful. Insurers can no longer write policies that charge higher copays and deductibles for mental health care, nor can they set annual or lifetime limits on how much they will pay for it. But patient advocates say insurance companies still interpret mental health claims more stringently.

“Insurance companies can easily circumvent mental health parity mandates by imposing restrictive standards of medical necessity,” said Meiram Bendat, a lawyer leading a class-action lawsuit against a mental health subsidiary of UnitedHealthcare.

In a closely watched ruling, a federal court in March sided with Bendat and patients alleging the insurer was deliberately shortchanging mental health claims. Chief Magistrate Judge Joseph Spero of the U.S. District Court for the Northern District of California ruled that United Behavioral Health wrote its guidelines for treatment much more narrowly than common medical standards, covering only enough to stabilize patients “while ignoring the effective treatment of members’ underlying conditions.”

UnitedHealthcare works to “ensure our products meet the needs of our members and comply with state and federal law,” said spokeswoman Tracey Lempner.

Several studies, though, have found evidence of disparities in insurers’ decisions.

Moral Judgment Toward Relationship Betrayals and Those Who Commit Them

Dylan Selterman, Amy Moors, & Sena Koleva
PsyArXiv
Created on January 18, 2019

Abstract

In three experimental studies (total N = 1,056), we examined moral judgments toward relationship betrayals and how these judgments depended on whether characters and their actions were perceived as pure and loyal, relative to the level of harm caused. In Studies 1 and 2 the focus was on confessing a betrayal, while in Study 3 the focus was on the act of sexual infidelity. Perceptions of harm/care were inconsistently and less strongly associated with moral judgments toward the behavior or the character than were perceptions of purity and loyalty, which emerged as key predictors of moral judgment across all studies. Our findings demonstrate that a diversity of cognitive factors play a key role in moral perception of relationship betrayals.

Here is part of the Discussion:

Some researchers have argued that the perception of a harmed victim is the cognitive prototype by which people conceptualize immoral behavior (Gray et al., 2014). This perspective explains many phenomena within moral psychology. However, other psychological templates may apply to sexual and relational behavior; purity and loyalty play a key role in explaining how people arrive at moral judgments of sexual and relational violations. In conclusion, the current research adds to ongoing and fruitful work on the underlying psychological mechanisms involved in moral judgment. Importantly, the current studies extend our knowledge of moral judgment into specific close-relationship and sexual contexts that many people experience.

Wednesday, June 19, 2019

The Ethics of 'Biohacking' and Digital Health Data

Sy Mukherjee
Fortune.com
Originally posted June 6, 2019

Here is an excerpt:

Should personal health data ownership be a human right? Do digital health program participants deserve a cut of the profits from the information they provide to genomics companies? How do we get consumers to actually care about the privacy and ethics implications of this new digital health age? Can technology help (and, more importantly, should it have a responsibility to) bridge the persistent gap in representation for women in clinical trials? And how do you design a fair system of data distribution in an age of a la carte genomic editing, leveraged by large corporations, and seemingly ubiquitous data mining from consumers?

Ok, so we didn’t exactly come to definitive conclusions about all that in our limited time. But I look forward to sharing some of our panelists’ insights in the coming days. And I’ll note that, while some of the conversation may have sounded like dystopic cynicism, there was a general consensus that collective regulatory changes, new business models, and a culture of concern for data privacy could help realize the potential of digital health while mitigating its potential problems.

We Need a Word for Destructive Group Outrage

Cass Sunstein
www.Bloomberg.com
Originally posted May 23, 2019

Here are two excerpts:

In the most extreme and horrible situations, lapidation is based on a lie, a mistake or a misunderstanding. People are lapidated even though they did nothing wrong.

In less extreme cases, the transgression is real, and lapidators have a legitimate concern. Their cause is just. They are right to complain and to emphasize that people have been hurt or wronged.

Even so, they might lose a sense of proportion. Groups of people often react excessively to a mistake, an error in judgment, or an admittedly objectionable statement or action. Even if you have sympathy for Harvard’s decision with respect to Sullivan, or Cambridge’s decision with respect to Carl, it is hard to defend the sheer level of rage and vitriol directed at both men.

Lapidation entrepreneurs often have their own agendas. Intentionally or not, they may unleash something horrific – something like the Two Minutes Hate, memorably depicted in George Orwell’s “1984.”

(cut)

What makes lapidation possible? A lot of the answer is provided by the process of “group polarization,” which means that when like-minded people speak with one another, they tend to go to extremes.

Suppose that people begin with the thought that Ronald Sullivan probably should not have agreed to represent Harvey Weinstein, or that Al Franken did something pretty bad. If so, their discussions will probably make them more unified and more confident about those beliefs, and ultimately more extreme.

A key reason involves the dynamics of outrage. Whenever some transgression has occurred, people want to appear at least as appalled as others in their social group. That can transform mere disapproval into lapidation.

Tuesday, June 18, 2019

A tech challenge? Fear not, many AI issues boil down to ethics

Peter Montagnon
www.ft.com
Originally posted June 3, 2019

Here is an excerpt:

Ethics are particularly important when technology enters the governance agenda. Machines may be capable of complex calculation but they are so far unable to make qualitative or moral judgments.

Also, the use and manipulation of a massive amount of data creates an information asymmetry. This confers power on those who control it at the potential expense of those who are the subject of it.

Ultimately there must always be human accountability for the decisions that machines originate.

In the corporate world, the board is where accountability resides. No one can escape this. To exercise their responsibilities, directors do not need to be as expert as tech teams. For sure, they need to be familiar with the scope of technology used by their companies, what it can and cannot do, and where the risks and opportunities lie.

For that they may need trustworthy advice from either the chief technology officer or external experts, but the decisions will generally be about what is acceptable and what is not.

The risks may well be of a human rather than a tech kind. With the motor industry, one risk with semi-automated vehicles is that the owners of such cars will think they can do more on autopilot than they can. It seems most of us are bad at reading instructions and will need clear warnings, perhaps to the point where the car may even seem disappointing.


Psychologists Mitchell and Jessen called to testify about ‘torture’ techniques in 9/11 tribunals

Thomas Clouse
www.spokesman.com
Originally posted May 20, 2019

Two Spokane psychologists who devised the “enhanced interrogation” techniques that a federal judge later said constituted torture could testify publicly for the first time at a military tribunal at Guantanamo Bay, Cuba, that is trying five men charged with helping to plan and assist in the 9/11 attacks.

James E. Mitchell and John “Bruce” Jessen are among a dozen government-approved witnesses for the defense at the military tribunal. Mitchell and Jessen’s company was paid about $81 million by the CIA for providing and sometimes carrying out the interrogation techniques, which included waterboarding, during the early days of the post-9/11 war on terror.

“This will be the first time Dr. Mitchell and Dr. Jessen will have to testify in a criminal proceeding about the torture program they implemented,” said James Connell, a lawyer for Ammar al Baluchi, one of the five Guantanamo prisoners.

Both Mitchell and Jessen were deposed but were never forced to testify as part of a civil suit filed in 2015 in Spokane by the ACLU on behalf of three former CIA prisoners, Gul Rahman, Suleiman Abdullah Salim and Mohamed Ahmed Ben Soud.

According to court records, Rahman was interrogated in a dungeon-like Afghanistan prison in isolation, subjected to darkness and extreme cold water, and eventually died of hypothermia. The other two men are now free.

The U.S. government settled that civil suit in August 2017 just weeks before it was scheduled for trial in Spokane before U.S. District Court Judge Justin Quackenbush.
