Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, September 23, 2020

Do Conflict of Interest Disclosures Facilitate Public Trust?

D. M. Cain & M. Banker
AMA J Ethics. 2020;22(3): E232-238.
doi: 10.1001/amajethics.2020.232.

Abstract

Lab experiments disagree on the efficacy of disclosure as a remedy to conflicts of interest (COIs). Some experiments suggest that disclosure has perverse effects, although others suggest these are mitigated by real-world factors (eg, feedback, sanctions, norms). This article argues that experiments reporting positive effects of disclosure often lack external validity: disclosure works best in lab experiments that make it unrealistically clear that the one disclosing is intentionally lying. We argue that even disclosed COIs remain dangerous in settings such as medicine where bias is often unintentional rather than the result of intentional corruption, and we conclude that disclosure might not be the panacea many seem to take it to be.

Introduction

While most medical professionals have the best intentions, conflicts of interest (COIs) can unintentionally bias their advice. For example, physicians might have consulting relationships with a company whose product they might prescribe. Physicians are increasingly required to limit COIs and disclose any that exist. When regulators decide whether to let a COI stand, the question becomes: How well does disclosure work? This paper reviews laboratory experiments that have produced mixed results on the effects of disclosing COIs on bias and suggests that studies purporting to provide evidence of the efficacy of disclosure often lack external validity. We conclude that disclosure works more poorly than regulators hope; thus, COIs are more problematic than expected.

The info is here.

Tuesday, September 22, 2020

How to be an ethical scientist

W. A. Cunningham, J. J. Van Bavel, & L. H. Somerville
Science Magazine
Originally posted 5 August 20

True discovery takes time, has many stops and starts, and is rarely neat and tidy. For example, news that the Higgs boson was finally observed in 2012 came 48 years after its original proposal by Peter Higgs. The slow pace of science helps ensure that research is done correctly, but it can come into conflict with the incentive structure of academic progress, as publications—the key marker of productivity in many disciplines—depend on research findings. Even Higgs recognized this problem with the modern academic system: “Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.”

It’s easy to forget about the “long view” when there is constant pressure to produce. So, in this column, we’re going to focus on the type of long-term thinking that advances science. For example, are you going to cut corners to get ahead, or take a slow, methodical approach? What will you do if your experiment doesn’t turn out as expected? Without reflecting on these deeper issues, we can get sucked into the daily goals necessary for success while failing to see the long-term implications of our actions.

Thinking carefully about these issues will not only impact your own career outcomes, but it can also impact others. Your own decisions and actions affect those around you, including your labmates, your collaborators, and your academic advisers. Our goal is to help you avoid pitfalls and find an approach that will allow you to succeed without impairing the broader goals of science.

Be open to being wrong

Science often advances through accidental (but replicable) findings. The logic is simple: If studies always came out exactly as you anticipated, then nothing new would ever be learned. Our previous theories of the world would be just as good as they ever were. This is why scientific discovery is often most profound when you stumble on something entirely new. Isaac Asimov put it best when he said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny ... .’”

The info is here.

Monday, September 21, 2020

The ethics of pausing a vaccine trial in the midst of a pandemic

Patrick Skerrett
statnews.com
Originally posted 11 Sept 20

Here is an excerpt:

Is the process for clinical trials of vaccines different from the process for drug or device trials?

Mostly no. The principles, design, and basic structure of a vaccine trial are more or less the same as for a trial for a new medication. The research ethics considerations are also similar.

The big difference between the two is that the participants in a preventive vaccine trial are, by and large, healthy people — or at least they are people who don’t have the illness for which the agent being tested might be effective. That significantly heightens the risk-benefit calculus for the participants.

Of course, some people in a Covid-19 vaccine trial could personally benefit if they live in communities with a lot of Covid-19. But even then, they might never get it. That’s very different than a trial in which individuals have a condition, say melanoma or malignant hypertension, and they are taking part in a trial of a therapy that could improve or even cure their condition.

Does that affect when a company might stop a trial?

In every clinical trial, the data and safety monitoring board takes routine and prescheduled looks at the accumulated data. They are checking mainly for two things: signals of harm and evidence of effectiveness.

These boards will recommend stopping a trial if they see a signal of concern or harm. They may do the same thing if they see solid evidence that people in the active arm of the trial are doing far better than those in the control arm.

In both cases, the action is taken on behalf of those participating in the trial. But it is also taken to advance the interests of people who would get this intervention if it was to be made publicly available.

The current situation with AstraZeneca involves a signal of concern. The company’s first obligation is to the participants in the trial. It cannot ethically proceed with the trial if there is reason for concern, even based on the experience of one participant.

Changing morals: we’re more compassionate than 100 years ago, but more judgmental too

N. Haslam, M. J. McGrady, & M. A. Wheeler
The Conversation
Originally published 4 March 19

Here is an excerpt:

Differently moral

We found basic moral terms (see the black line below) became dramatically scarcer in English-language books as the 20th century unfolded – which fits the de-moralisation narrative. But an equally dramatic rebound began in about 1980, implying a striking re-moralisation.

The five moral foundations, on the other hand, show a vastly changing trajectory. The purity foundation (green line) shows the same plunge and rebound as the basic moral terms. Ideas of sacredness, piety and purity, and of sin, desecration and indecency, fell until about 1980, and rose afterwards.

The other moralities show very different pathways. Perhaps surprisingly, the egalitarian morality of fairness (blue) showed no consistent rise or fall.

In contrast, the hierarchy-based morality of authority (grey) underwent a gentle decline for the first half of the century. It then sharply rose as the gathering crisis of authority shook the Western world in the late 1960s. This morality of obedience and conformity, insubordination and rebellion, then receded equally sharply through the 1970s.

Ingroup morality (orange), reflected in the communal language of loyalty and unity, insiders and outsiders, displays the clearest upward trend through the 20th century. Discernible bumps around the two world wars point to passing elevations in the “us and them” morality of threatened communities.

Finally, harm-based morality (red) presents a complex but intriguing trend. Its prominence falls from 1900 to the 1970s, interrupted by similar wartime bumps when themes of suffering and destruction became understandably urgent. But harm rises steeply from about 1980 in the absence of a single dominating global conflict.

The info is here.

Sunday, September 20, 2020

Financial Conflicts of Interest are of Higher Ethical Priority than “Intellectual” Conflicts of Interest

Goldberg, D.S.
Bioethical Inquiry 17, 217–227 (2020).
https://doi.org/10.1007/s11673-020-09989-4

Abstract

The primary claim of this paper is that intellectual conflicts of interest (COIs) exist but are of lower ethical priority than COIs flowing from relationships between health professionals and commercial industry characterized by financial exchange. The paper begins by defining intellectual COIs and framing them in the context of scholarship on non-financial COIs. However, the paper explains that the crucial distinction is not between financial and non-financial COIs but is rather between motivations for bias that flow from relationships and those that do not. While commitments to particular ideas or perspectives can cause all manner of cognitive bias, that fact does not justify denying the enormous power that relationships featuring pecuniary gain have on professional behaviour in terms of care, policy, or both. Sufficient reason exists to take both intellectual COIs and financial COIs seriously, but this paper demonstrates why the latter is of higher ethical priority. Multiple reasons will be provided, but the primary rationale grounding the claim is that intellectual COIs may provide reasons to suspect cognitive bias but they do not typically involve a loss of trust in a social role. The same cannot be said for COIs flowing from relationships between health professionals and commercial industries involving financial exchange. The paper then assumes arguendo that the primary rationale is mistaken and proceeds to show why the claims that intellectual COIs are more significant than relationship-based COIs are dubious on their own merits. The final section of the paper summarizes and concludes.

Conclusion

iCOIs exist and they should be taken seriously. Nevertheless, fCOIs are of greater ethical priority. The latter diminish trust in a social role to a much greater extent than do the former, at least in the broad run of cases. Moreover, it is not clear how providers could avoid developing intellectual commitments and preferences regarding particular therapeutic modalities or interventions—and even if we could prevent this from occurring, it is far from evident that we should. We can easily imagine cases where a studied determination to remain neutral regarding interventions would be an abdication of moral responsibility, would be decidedly unvirtuous, and would likely result in harm to care- and service-seekers. While we also have evidence that some intellectual commitments can motivate bias in ways that likely result in harm to care- or service-seekers, this premise only justifies taking iCOIs seriously—it is literally no argument for deprioritizing fCOIs. Although the fact that iCOIs are in many cases unavoidable is a weak justification for ignoring iCOIs, the comparable avoidability of the vast majority of fCOIs is indeed a reason for prioritizing the latter over the former.

A pdf is here.

Saturday, September 19, 2020

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Pratyusha Kalluri
nature.com
Originally posted 7 July 20

Here is an excerpt:

Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work gets little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programmes.

Many researchers have trouble seeing their intellectual work with AI as furthering inequity. Researchers such as me spend our days working on what are, to us, mathematically beautiful and useful systems, and hearing of AI success stories, such as winning Go championships or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those impacted by AI.

Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful. Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with their own agendas. At best, they are unreliable; at worst, they masquerade as ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing hidden limitations and failures of systems. They braid their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labour are extracted to create AI.

The info is here.

Friday, September 18, 2020

Cognitive Barriers to Reducing Income Inequality

Jackson, J. C., & Payne, K. (2020).
Social Psychological and Personality Science. 
https://doi.org/10.1177/1948550620934597

Abstract

As economic inequality grows, more people stand to benefit from wealth redistribution. Yet in many countries, increasing inequality has not produced growing support for redistribution, and people often appear to vote against their economic interest. Here we suggest that two cognitive tendencies contribute to these paradoxical voting patterns. First, people gauge their income through social comparison, and those comparisons are usually made to similar others. Second, people are insensitive to large numbers, which leads them to underestimate the gap between themselves and the very wealthy. These two tendencies can help explain why subjective income is normally distributed (therefore most people think they are middle class) and partly explain why many people who would benefit from redistribution oppose it. We support our model’s assumptions using survey data, a controlled experiment, and agent-based modeling. Our model sheds light on the cognitive barriers to reducing inequality.

General Discussion

These findings emphasize a new perspective on inequality. In addition to institutional drivers of inequality, our studies outline several cognitive constraints on people’s calculation of their support for wealth redistribution. By relying partly on subjective income to determine whether redistribution is in their interest, people leave themselves open to the effects of selective social comparison and insensitivity to large numbers. These cognitive tendencies help explain why most people believe they are middle class, occupying the middle of a bell-shaped distribution of SES, despite the extreme skew present in actual income distributions.
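The mechanism described above (objectively skewed incomes, yet middling subjective ranks driven by comparison with similar others) can be illustrated with a toy agent-based simulation. To be clear, this is not Jackson and Payne's actual model: the log-normal income distribution, the factor-of-two comparison band, and all parameter values are illustrative assumptions.

```python
import bisect
import random
import statistics

random.seed(0)

# Toy sketch (not the authors' model): actual incomes are right-skewed
# (log-normal), but each agent judges its standing only against
# "similar others" -- here, agents within a factor of 2 of its own
# income. The factor of 2 is an arbitrary illustrative choice.
N = 10_000
incomes = sorted(random.lognormvariate(10, 1) for _ in range(N))

def subjective_rank(i: int) -> float:
    """Agent i's perceived rank within its local comparison group."""
    lo = bisect.bisect_left(incomes, incomes[i] / 2)
    hi = bisect.bisect_right(incomes, incomes[i] * 2)
    return (i - lo) / max(hi - lo - 1, 1)

ranks = [subjective_rank(i) for i in range(N)]

# Objectively, global ranks are uniform: exactly half the population
# sits between the 25th and 75th percentiles. Subjective ranks instead
# pile up near the middle, so most agents "feel" middle class even
# though the underlying income distribution is heavily skewed.
middle = sum(1 for r in ranks if 0.25 <= r <= 0.75) / N
print(f"mean subjective rank: {statistics.mean(ranks):.2f}")
print(f"share who feel 'middle class': {middle:.2f}")
```

Restricting the comparison set is what does the work here: widening the band toward the whole population recovers the uniform (and therefore skew-revealing) global ranking.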

Both of these problems can potentially be mitigated. Accessible resources that help people learn whether they will benefit from wealth redistribution could help people select economic policies that are in their best interest. On a larger scale, reducing residential segregation or otherwise increasing inter-group contact across social class lines could facilitate more representative social comparisons, and more accurate judgments of economic self-interest. 

Attitudes about redistribution are not the only influences on people’s voting decisions that contribute to rising inequality. Institutional factors like gerrymandering may distort voting outcomes, and social factors such as moral and intergroup values may lead people to vote against their economic interests in favor of symbolic or group interests.

A pdf can be found here.


Thursday, September 17, 2020

Sensitivity to Ingroup and Outgroup Norms in the Association Between Commonality and Morality

M. R. Goldring & L. Heiphetz
Journal of Experimental Social Psychology
Volume 91, November 2020, 104025

Abstract

Emerging research suggests that people infer that common behaviors are moral and vice versa. The studies presented here investigated the role of group membership in inferences regarding commonality and morality. In Study 1, participants expected a target character to infer that behaviors that were common among their ingroup were particularly moral. However, the extent to which behaviors were common among the target character’s outgroup did not influence expectations regarding perceptions of morality. Study 2 reversed this test, finding that participants expected a target character to infer that behaviors considered moral among their ingroup were particularly common, regardless of how moral their outgroup perceived those behaviors to be. While Studies 1-2 relied on fictitious behaviors performed by novel groups, Studies 3-4 generalized these results to health behaviors performed by members of different racial groups. When answering from another person’s perspective (Study 3) and from their own perspective (Study 4), participants reported that the more common behaviors were among their ingroup, the more moral those behaviors were. This effect was significantly weaker for perceptions regarding outgroup norms, although outgroup norms did exert some effect in this real-world context. Taken together, these results highlight the complex integration of ingroup and outgroup norms in socio-moral cognition.

A pdf of the article can be found here.

In sum: Actions that are common among the ingroup are seen as particularly moral.  But actions that are common among the outgroup have little bearing on our judgments of morality.

In this election, ‘costly signal deployment’

Christina Pazzanese
Harvard Gazette
Originally posted 15 Sept 20

Here is an excerpt:

GREENE:

Trump isn’t merely saying things that his base likes to hear. All politicians do that, and to the extent that they can do so honestly, that’s exactly what they are supposed to do. But Trump does more than this in his use of “costly signals.” A tattoo is a costly signal. You can tell your romantic partner that you love them, but there’s nothing stopping you from changing your mind the next day. But if you get a tattoo of your partner’s name, you’ve sent a much stronger signal about how committed you are. Likewise, a gang tattoo binds you to the gang, especially if it’s in a highly visible place such as the neck or the face. It makes you scary and unappealing to most people, limiting your social options, and thus, binding you to the gang. Trump’s blatant bigotry, misogyny, and incitements to violence make him completely unacceptable to liberals and moderates. And, thus, his comments function like gang tattoos. He’s not merely saying things that his supporters want to hear. By making himself permanently and unequivocally unacceptable to the opposition, he’s “proving” his loyalty to their side. This is why, I think, the Republican base trusts Trump like no other.

There is costly signaling on the left, but it’s not coming from Biden, who is trying to appeal to as many voters as possible. Bernie Sanders is a better example. Why does Bernie Sanders call himself a socialist? What he advocates does not meet the traditional dictionary definition of socialism. And politicians in Europe who hold similar views typically refer to themselves as “social democrats” rather than “democratic socialists.” “Socialism” has traditionally been a scare word in American politics. Conservatives use it as an epithet to describe policies such as the Affordable Care Act, which, ironically, is very much a market-oriented approach to achieving universal health insurance. It’s puzzling, then, that a politician would choose to describe himself with a scare word when he could accurately describe his views with less-scary words. But it makes sense if one thinks of this as a costly signal. By calling himself a socialist, Sanders makes it very clear where his loyalty lies, as vanishingly few Republicans would support someone who calls himself a socialist.