Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, June 23, 2019

On the belief that beliefs should change according to evidence: Implications for conspiratorial, moral, paranormal, political, religious, and science beliefs

Gordon Pennycook, James Allan Cheyne, Derek Koehler, & Jonathan Fugelsang
PsyArXiv Preprints - Last edited on May 24, 2019

Abstract

Does one’s stance toward evidence evaluation and belief revision have relevance for actual beliefs? We investigate the role of having an actively open-minded thinking style about evidence (AOT-E) on a wide range of beliefs, values, and opinions. Participants indicated the extent to which they think beliefs (Study 1) or opinions (Studies 2 and 3) ought to change according to evidence on an 8-item scale. Across three studies with 1,692 participants from two different sources (Mechanical Turk and Lucid for Academics), we find that our short AOT-E scale correlates negatively with beliefs about topics ranging from extrasensory perception, to respect for tradition, to abortion, to God; and positively with topics ranging from anthropogenic global warming to support for free speech on college campuses. More broadly, the belief that beliefs should change according to evidence was robustly associated with political liberalism, the rejection of traditional moral values, the acceptance of science, and skepticism about religious, paranormal, and conspiratorial claims. However, we also find that AOT-E is much more strongly predictive for political liberals (Democrats) than conservatives (Republicans). We conclude that socio-cognitive theories of belief (both specific and general) should take into account people’s beliefs about when and how beliefs should change – that is, meta-beliefs – but that further work is required to understand how meta-beliefs about evidence interact with political ideology.

Conclusion

Our 8-item actively open-minded thinking about evidence (AOT-E) scale was strongly predictive of a wide range of beliefs, values, and opinions. People who reported believing that beliefs and opinions should change according to evidence were less likely to be religious, less likely to hold paranormal and conspiratorial beliefs, more likely to believe in a variety of scientific claims, and were more politically liberal (in terms of overall ideology, partisan affiliation, moral values, and a variety of specific political opinions). Moreover, the effect sizes for these correlations were often large or very large, based on established norms (Funder & Ozer, 2019; Gignac & Szodorai, 2016). The size and diversity of AOT-E correlates strongly support one major, if broad, conclusion: Socio-cognitive theories of belief (both specific and general) should take into account what people believe about when and how beliefs and opinions should change (i.e., meta-beliefs). That is, we should not assume that evidence is equally important for everyone. However, future work is required to more clearly delineate why AOT-E is more predictive for political liberals than conservatives.

A preprint can be downloaded here.

Saturday, June 22, 2019

Morality and Self-Control: How They are Intertwined, and Where They Differ

Wilhelm Hofmann, Peter Meindl, Marlon Mooijman, & Jesse Graham
PsyArXiv Preprints
Last edited November 18, 2018

Abstract

Despite sharing conceptual overlap, morality and self-control research have led largely separate lives. In this article, we highlight neglected connections between these major areas of psychology. To this end, we first note their conceptual similarities and differences. We then show how morality research, typically emphasizing aspects of moral cognition and emotion, may benefit from incorporating motivational concepts from self-control research. Similarly, self-control research may benefit from a better understanding of the moral nature of many self-control domains. We place special focus on various components of self-control and on the ways in which self-control goals may be moralized.

(cut)

Here is the Conclusion:

How do we resist temptation, prioritizing our future well-being over our present pleasure? And how do we resist acting selfishly, prioritizing the needs of others over our own self-interest? These two questions highlight the links between understanding self-control and understanding morality. We hope we have shown that morality and self-control share considerable conceptual overlap with regard to the way people regulate behavior in line with higher-order values and standards. As the psychological study of both areas becomes increasingly collaborative and integrated, insights from each subfield can better enable research and interventions to increase human health and flourishing.

The info is here.

Friday, June 21, 2019

Tech, Data And The New Democracy Of Ethics

Neil Lustig
Forbes.com
Originally posted June 10, 2019

As recently as 15 years ago, consumers had no visibility into whether the brands they shopped used overseas slave labor or if multinationals were bribing public officials to give them unfair advantages internationally. Executives could engage in whatever type of misconduct they wanted to behind closed doors, and there was no early warning system for investors, board members and employees, who were directly impacted by the consequences of their behavior.
Now, thanks to globalization, social media, big data, whistleblowers and corporate compliance initiatives, we have more visibility than ever into the organizations and people that affect our lives and our economy.

What we’ve learned from this surge in transparency is that sometimes companies mess up even when they’re not trying to. There’s a distinct difference between companies that deliberately engage in unethical practices and those that get caught up in them due to loose policies, inadequate self-policing or a few bad actors that misrepresent the ethics of the rest of the organization. The primary difference between these two types of companies is how fast they’re able to act -- and if they act at all.

Fortunately, just as technology and data can introduce unprecedented visibility into organizations’ unethical practices, they can also equip organizations with ways of protecting themselves from internal and external risks. As CEO of a compliance management platform, I believe there are three things that must be in place for organizations to stay above board in a rising democracy of ethics.

The info is here.

It's not biology bro: Torture and the Misuse of Science

Shane O'Mara and John Schiemann
PsyArXiv Preprints
Last edited on December 24, 2018

Abstract

Contrary to the (in)famous line in the film Zero Dark Thirty, the CIA's torture program was not based on biology or any other science. Instead, the Bush administration and the CIA decided to use coercion immediately after the 9/11 terrorist attacks and then veneered the program's justification with a patina of pseudoscience, ignoring the actual biology of torturing human brains. We reconstruct the Bush administration’s decision-making process from released government documents, independent investigations, journalistic accounts, and memoirs to establish that the policy decision to use torture took place in the immediate aftermath of the 9/11 attacks without any investigation into its efficacy. We then present the pseudo-scientific model of torture sold to the CIA based on a loose amalgamation of methods from the old KUBARK manual, reverse-engineering of SERE training techniques, and learned helplessness theory, show why this ad hoc model amounted to pseudoscience, and then catalog what the actual science of torturing human brains – available in 2001 – reveals about the practice. We conclude with a discussion of how the process of policy-making might incorporate countervailing evidence to ensure that policy problems are forestalled, via the concept of an evidence-based policy brake, which is deliberately instituted to prevent a policy from going forward that is contrary to law, ethics, and evidence.

The info is here.

Thursday, June 20, 2019

Legal Promise Of Equal Mental Health Treatment Often Falls Short

Graison Dangor
Kaiser Health News
Originally published June 7, 2019

Here is an excerpt:

The laws have been partially successful. Insurers can no longer write policies that charge higher copays and deductibles for mental health care, nor can they set annual or lifetime limits on how much they will pay for it. But patient advocates say insurance companies still interpret mental health claims more stringently.

“Insurance companies can easily circumvent mental health parity mandates by imposing restrictive standards of medical necessity,” said Meiram Bendat, a lawyer leading a class-action lawsuit against a mental health subsidiary of UnitedHealthcare.

In a closely watched ruling, a federal court in March sided with Bendat and patients alleging the insurer was deliberately shortchanging mental health claims. Chief Magistrate Judge Joseph Spero of the U.S. District Court for the Northern District of California ruled that United Behavioral Health wrote its guidelines for treatment much more narrowly than common medical standards, covering only enough to stabilize patients “while ignoring the effective treatment of members’ underlying conditions.”

UnitedHealthcare works to “ensure our products meet the needs of our members and comply with state and federal law,” said spokeswoman Tracey Lempner.

Several studies, though, have found evidence of disparities in insurers’ decisions.

The info is here.

Moral Judgment Toward Relationship Betrayals and Those Who Commit Them

Dylan Selterman, Amy Moors, & Sena Koleva
PsyArXiv
Created on January 18, 2019

Abstract

In three experimental studies (total N = 1,056), we examined moral judgments toward relationship betrayals, and how these judgments depended on whether characters and their actions were perceived to be pure and loyal compared to the level of harm caused. In Studies 1 and 2 the focus was on confessing a betrayal, while in Study 3 the focus was on the act of sexual infidelity. Perceptions of harm/care were inconsistently and less strongly associated with moral judgment toward the behavior or the character, relative to perceptions of purity and loyalty, which emerged as key predictors of moral judgment across all studies. Our findings demonstrate that a diversity of cognitive factors play a key role in moral perception of relationship betrayals.

Here is part of the Discussion:

Some researchers have argued that perception of a harmed victim is the cognitive prototype by which people conceptualize immoral behavior (Gray et al., 2014). This perspective explains many phenomena within moral psychology. However, other psychological templates may apply to sexual and relational behavior; purity and loyalty play a key role in explaining how people arrive at moral judgments toward sexual and relational violations. In conclusion, the current research adds to ongoing and fruitful research regarding the underlying psychological mechanisms involved in moral judgment. Importantly, the current studies extend our knowledge of moral judgment to specific close-relationship and sexual contexts that many people experience.

The research is here.

Wednesday, June 19, 2019

The Ethics of 'Biohacking' and Digital Health Data

Sy Mukherjee
Fortune.com
Originally posted June 6, 2019

Here is an excerpt:

Should personal health data ownership be a human right? Do digital health program participants deserve a cut of the profits from the information they provide to genomics companies? How do we get consumers to actually care about the privacy and ethics implications of this new digital health age? Can technology help (and, more importantly, should it have a responsibility to) bridge the persistent gap in representation for women in clinical trials? And how do you design a fair system of data distribution in an age of a la carte genomic editing, leveraged by large corporations, and seemingly ubiquitous data mining from consumers?

Ok, so we didn’t exactly come to definitive conclusions about all that in our limited time. But I look forward to sharing some of our panelists’ insights in the coming days. And I’ll note that, while some of the conversation may have sounded like dystopic cynicism, there was a general consensus that collective regulatory changes, new business models, and a culture of concern for data privacy could help realize the potential of digital health while mitigating its potential problems.

The information and interview are here.

We Need a Word for Destructive Group Outrage

Cass Sunstein
www.Bloomberg.com
Originally posted May 23, 2019

Here are two excerpts:

In the most extreme and horrible situations, lapidation is based on a lie, a mistake or a misunderstanding. People are lapidated even though they did nothing wrong.

In less extreme cases, the transgression is real, and lapidators have a legitimate concern. Their cause is just. They are right to complain and to emphasize that people have been hurt or wronged.

Even so, they might lose a sense of proportion. Groups of people often react excessively to a mistake, an error in judgment, or an admittedly objectionable statement or action. Even if you have sympathy for Harvard’s decision with respect to Sullivan, or Cambridge’s decision with respect to Carl, it is hard to defend the sheer level of rage and vitriol directed at both men.

Lapidation entrepreneurs often have their own agendas. Intentionally or not, they may unleash something horrific – something like the Two Minutes Hate, memorably depicted in George Orwell’s “1984.”

(cut)

What makes lapidation possible? A lot of the answer is provided by the process of “group polarization,” which means that when like-minded people speak with one another, they tend to go to extremes.

Suppose that people begin with the thought that Ronald Sullivan probably should not have agreed to represent Harvey Weinstein, or that Al Franken did something pretty bad. If so, their discussions will probably make them more unified and more confident about those beliefs, and ultimately more extreme.

A key reason involves the dynamics of outrage. Whenever some transgression has occurred, people want to appear at least as appalled as others in their social group. That can transform mere disapproval into lapidation.

The info is here.

Tuesday, June 18, 2019

A tech challenge? Fear not, many AI issues boil down to ethics

Peter Montagnon
www.ft.com
Originally posted June 3, 2019

Here is an excerpt:

Ethics are particularly important when technology enters the governance agenda. Machines may be capable of complex calculation but they are so far unable to make qualitative or moral judgments.

Also, the use and manipulation of a massive amount of data creates an information asymmetry. This confers power on those who control it at the potential expense of those who are the subject of it.

Ultimately there must always be human accountability for the decisions that machines originate.

In the corporate world, the board is where accountability resides. No one can escape this. To exercise their responsibilities, directors do not need to be as expert as tech teams. For sure, they need to be familiar with the scope of technology used by their companies, what it can and cannot do, and where the risks and opportunities lie.

For that they may need trustworthy advice from either the chief technology officer or external experts, but the decisions will generally be about what is acceptable and what is not.

The risks may well be of a human rather than a tech kind. With the motor industry, one risk with semi-automated vehicles is that the owners of such cars will think they can do more on autopilot than they can. It seems most of us are bad at reading instructions and will need clear warnings, perhaps to the point where the car may even seem disappointing.

The info is here.