Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, September 25, 2020

Science can explain other people’s minds, but not mine: self-other differences in beliefs about science

André Mata, Cláudia Simão & Rogério Gouveia
(2020) DOI: 10.1080/15298868.2020.1791950

Abstract

Four studies show that people differ in their lay beliefs concerning the degree to which science can explain their mind and the minds of other people. In particular, people are more receptive to the idea that the psychology of other people is explainable by science than to the possibility of science explaining their own psychology. This self-other difference is moderated by the degree to which people associate a certain mental phenomenon with introspection. Moreover, this self-other difference has implications for the science-recommended products and practices that people choose for themselves versus others.

General discussion

These studies suggest that people have different beliefs regarding what science can explain about the way they think versus the way other people think. Study 1 showed that, in general, people see science as better able to explain the psychology of other people than their own, and that this is particularly the case when a certain psychological phenomenon is highly associated with introspection (though there were other significant moderators in this study, and results were not consistent across dependent variables). Study 2 replicated this interaction, whereby science is seen as having greater explanatory power for other people than for oneself, but only when introspection is involved. Whereas Studies 1–2 provided correlational evidence, Study 3 provided an experimental test of the role of introspection in self-other differences in thinking about science and what it can explain. The results lent clear support to those of the previous studies: For highly introspective phenomena, people believe that science is better at making sense of others than of themselves, whereas this self-other difference disappears when introspection is not thought to be involved. Finally, Study 4 demonstrated that this self-other difference has implications for the choices that people make for themselves and how they differ from the choices that they advise others to make. In particular, people are more reluctant to try certain products and procedures targeted at areas of their mental life that are highly associated with introspection, but they are less reluctant to advise other people to try those same products and procedures. Lending additional support to the role of introspection in generating this self-other difference, this choice-advice asymmetry was not observed for areas that were not associated with introspection.

A pdf can be downloaded here.

Thursday, September 24, 2020

A Failure of Empathy Led to 200,000 Deaths. It Has Deep Roots.

Olga Khazan
The Atlantic
Originally published 22 September 20

Here is an excerpt:

Indeed, doctors follow a similar logic. In a May paper in the New England Journal of Medicine, a group of doctors from different countries suggested that hospitals consider prioritizing younger patients if they are forced to ration ventilators. “Maximizing benefits requires consideration of prognosis—how long the patient is likely to live if treated—which may mean giving priority to younger patients and those with fewer coexisting conditions,” they wrote. Perhaps, on a global scale, we’ve internalized the idea that the young matter more than the old.

The Moral Machine is not without its criticisms. Some psychologists say that the trolley problem, a similar and more widely known moral dilemma, is too silly and unrealistic to say anything about our true ethics. In a response to the Moral Machine experiment, another group of researchers conducted a comparable study and found that people actually prefer to treat everyone equally, if given the option to do so. In other words, people didn’t want to kill the elderly; they just opted to do so over killing young people, when pressed. (In that experiment, though, people still would kill the criminals.) Shariff says these findings simply show that people don’t like dilemmas. Given the option, anyone would rather say “treat everybody equally,” just so they don’t have to decide.

Bolstering that view, in another recent paper, which has not yet been peer-reviewed, people preferred giving an in-demand ventilator to a younger hypothetical COVID-19 patient rather than to an older one. They did this even when they were told to imagine that they themselves might be the older patient who would therefore be sacrificed. The participants were placed behind a so-called veil of ignorance—told they had a “50 percent chance of being a 65-year-old who gets to live another 15 years, and a 50 percent chance of dying at age 25.” That prompt made the participants favor the young patient even more. When told to look at the situation objectively, saving young lives seemed even better.
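
To make the expected-value logic behind that veil-of-ignorance prompt explicit: using the study's figure of 15 remaining years for the 65-year-old and, purely as an illustrative assumption, about 55 remaining years for the 25-year-old if saved, allocating the ventilator to the older patient gives a participant 0.5 × 15 + 0.5 × 0 = 7.5 expected life-years, whereas allocating it to the younger patient gives 0.5 × 55 + 0.5 × 0 = 27.5 expected life-years. Someone who does not yet know which patient they will be maximizes their own expected life-years by favoring the younger patient, which is presumably the intuition the prompt draws out.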

Neural signatures of prosocial behaviors

Bellucci, G., Camilleri, J., et al.
Neuroscience & Biobehavioral Reviews
Volume 118, November 2020, Pages 186-195

Abstract

Prosocial behaviors are hypothesized to require socio-cognitive and empathic abilities—engaging brain regions attributed to the mentalizing and empathy brain networks. Here, we tested this hypothesis with a coordinate-based meta-analysis of 600 neuroimaging studies on prosociality, mentalizing and empathy (∼12,000 individuals). We showed that brain areas recruited by prosocial behaviors only partially overlap with the mentalizing (dorsal posterior cingulate cortex) and empathy networks (middle cingulate cortex). Additionally, the dorsolateral and ventromedial prefrontal cortices were preferentially activated by prosocial behaviors. Analyses on the functional connectivity profile and functional roles of the neural patterns underlying prosociality revealed that in addition to socio-cognitive and empathic processes, prosocial behaviors further involve evaluation processes and action planning, likely to select the action sequence that best satisfies another person’s needs. By characterizing the multidimensional construct of prosociality at the neural level, we provide insights that may support a better understanding of normal and abnormal social cognition (e.g., psychopathy).

Highlights

• A psychological proposal posits that prosociality engages brain regions of the mentalizing and empathy networks.

• Our meta-analysis provides only partial support for this proposal.

• Prosocial behaviors engage brain regions associated with socio-cognitive and empathic abilities.

• However, they also engage brain regions associated with evaluation and planning.

Conclusions

Taken together, we found a set of brain regions that were consistently activated by prosocial behaviors. These activation patterns partially overlapped with mentalizing and empathy brain regions, lending support to the hypothesis based on psychological research that socio-cognitive and empathic abilities are central to prosociality. However, we also found that the vmPFC and, in particular, the dlPFC were preferentially recruited by prosocial acts, suggesting that prosocial behaviors require the involvement of other important processes. Analyses of their functional connectivity profiles and functional roles suggest that the vmPFC and dlPFC might be involved in valuation and planning of prosocial actions, respectively. These results clarify the role of mentalizing and empathic abilities in prosociality and provide useful insights into the neuropsychological processes underlying human social behaviors. For instance, they might help explain where and how things go awry in different neural and behavioral disorders such as psychopathy and antisocial behavior (Blair, 2007).

The research is here.

Wednesday, September 23, 2020

Do Conflict of Interest Disclosures Facilitate Public Trust?

D. M. Cain & M. Banker
AMA J Ethics. 2020;22(3):E232-238.
doi: 10.1001/amajethics.2020.232.

Abstract

Lab experiments disagree on the efficacy of disclosure as a remedy to conflicts of interest (COIs). Some experiments suggest that disclosure has perverse effects, although others suggest these are mitigated by real-world factors (eg, feedback, sanctions, norms). This article argues that experiments reporting positive effects of disclosure often lack external validity: disclosure works best in lab experiments that make it unrealistically clear that the one disclosing is intentionally lying. We argue that even disclosed COIs remain dangerous in settings such as medicine where bias is often unintentional rather than the result of intentional corruption, and we conclude that disclosure might not be the panacea many seem to take it to be.

Introduction

While most medical professionals have the best intentions, conflicts of interest (COIs) can unintentionally bias their advice. For example, physicians might have consulting relationships with a company whose product they might prescribe. Physicians are increasingly required to limit COIs and disclose any that exist. When regulators decide whether to let a COI stand, the question becomes: How well does disclosure work? This paper reviews laboratory experiments that have had mixed results on the effects of disclosing COIs on bias and suggests that studies purporting to provide evidence of the efficacy of disclosure often lack external validity. We conclude that disclosure works more poorly than regulators hope; thus, COIs are more problematic than expected.

The info is here.

Tuesday, September 22, 2020

How to be an ethical scientist

W. A. Cunningham, J. J. Van Bavel, & L. H. Somerville
Science Magazine
Originally posted 5 August 20

True discovery takes time, has many stops and starts, and is rarely neat and tidy. For example, news that the Higgs boson was finally observed in 2012 came 48 years after its original proposal by Peter Higgs. The slow pace of science helps ensure that research is done correctly, but it can come into conflict with the incentive structure of academic progress, as publications—the key marker of productivity in many disciplines—depend on research findings. Even Higgs recognized this problem with the modern academic system: “Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.”

It’s easy to forget about the “long view” when there is constant pressure to produce. So, in this column, we’re going to focus on the type of long-term thinking that advances science. For example, are you going to cut corners to get ahead, or take a slow, methodical approach? What will you do if your experiment doesn’t turn out as expected? Without reflecting on these deeper issues, we can get sucked into the daily goals necessary for success while failing to see the long-term implications of our actions.

Thinking carefully about these issues will not only shape your own career outcomes, it can also affect others. Your own decisions and actions influence those around you, including your labmates, your collaborators, and your academic advisers. Our goal is to help you avoid pitfalls and find an approach that will allow you to succeed without impairing the broader goals of science.

Be open to being wrong

Science often advances through accidental (but replicable) findings. The logic is simple: If studies always came out exactly as you anticipated, then nothing new would ever be learned. Our previous theories of the world would be just as good as they ever were. This is why scientific discovery is often most profound when you stumble on something entirely new. Isaac Asimov put it best when he said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny ... .’”

The info is here.

Monday, September 21, 2020

The ethics of pausing a vaccine trial in the midst of a pandemic

Patrick Skerrett
statnews.com
Originally posted 11 Sept 20

Here is an excerpt:

Is the process for clinical trials of vaccines different from the process for drug or device trials?

Mostly no. The principles, design, and basic structure of a vaccine trial are more or less the same as for a trial for a new medication. The research ethics considerations are also similar.

The big difference between the two is that the participants in a preventive vaccine trial are, by and large, healthy people — or at least they are people who don’t have the illness for which the agent being tested might be effective. That significantly raises the stakes in the risk-benefit calculus for the participants.

Of course, some people in a Covid-19 vaccine trial could personally benefit if they live in communities with a lot of Covid-19. But even then, they might never get it. That’s very different from a trial in which individuals have a condition, say melanoma or malignant hypertension, and are taking part in a trial of a therapy that could improve or even cure their condition.

Does that affect when a company might stop a trial?

In every clinical trial, the data and safety monitoring board takes routine, prescheduled looks at the accumulated data. It checks mainly for two things: signals of harm and evidence of effectiveness.

These boards will recommend stopping a trial if they see a signal of concern or harm. They may do the same thing if they see solid evidence that people in the active arm of the trial are doing far better than those in the control arm.

In both cases, the action is taken on behalf of those participating in the trial. But it is also taken to advance the interests of people who would receive the intervention if it were made publicly available.

The current situation with AstraZeneca involves a signal of concern. The company’s first obligation is to the participants in the trial. It cannot ethically proceed with the trial if there is reason for concern, even based on the experience of one participant.

Changing morals: we’re more compassionate than 100 years ago, but more judgmental too

N. Haslam, M. J. McGrady, & M. A. Wheeler
The Conversation
Originally published 4 March 19

Here is an excerpt:

Differently moral

We found basic moral terms (see the black line below) became dramatically scarcer in English-language books as the 20th century unfolded – which fits the de-moralisation narrative. But an equally dramatic rebound began in about 1980, implying a striking re-moralisation.

The five moral foundations, on the other hand, show strikingly different trajectories. The purity foundation (green line) shows the same plunge and rebound as the basic moral terms. Ideas of sacredness, piety and purity, and of sin, desecration and indecency, fell until about 1980 and rose afterwards.

The other moralities show very different pathways. Perhaps surprisingly, the egalitarian morality of fairness (blue) showed no consistent rise or fall.

In contrast, the hierarchy-based morality of authority (grey) underwent a gentle decline for the first half of the century. It then sharply rose as the gathering crisis of authority shook the Western world in the late 1960s. This morality of obedience and conformity, insubordination and rebellion, then receded equally sharply through the 1970s.

Ingroup morality (orange), reflected in the communal language of loyalty and unity, insiders and outsiders, displays the clearest upward trend through the 20th century. Discernible bumps around the two world wars point to passing elevations in the “us and them” morality of threatened communities.

Finally, harm-based morality (red) presents a complex but intriguing trend. Its prominence falls from 1900 to the 1970s, interrupted by similar wartime bumps when themes of suffering and destruction became understandably urgent. But harm rises steeply from about 1980 in the absence of a single dominating global conflict.
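
The trajectories described above come from tracking how often moral vocabulary appears in English-language books year by year and normalizing by the size of the corpus. As a rough, hypothetical illustration of that kind of frequency analysis (not the authors' actual pipeline, and with a made-up term list, counts, and corpus sizes), a short Python sketch:

# Illustrative sketch: normalized yearly frequency of a set of moral terms in a corpus.
# The term list, counts, and corpus totals below are placeholders, not the study's data.
from collections import defaultdict

PURITY_TERMS = {"purity", "sacred", "piety", "sin", "desecration", "indecency"}

def term_frequency_by_year(yearly_counts, total_words, terms):
    """yearly_counts: (year, word, count) triples; total_words: year -> total tokens."""
    hits = defaultdict(int)
    for year, word, count in yearly_counts:
        if word.lower() in terms:
            hits[year] += count
    # Report hits per million words so years with different corpus sizes are comparable.
    return {year: 1e6 * hits[year] / total_words[year] for year in sorted(total_words)}

# Toy data: purity-related terms become relatively scarcer between 1900 and 1980.
counts = [(1900, "sacred", 120), (1900, "sin", 300), (1980, "sacred", 60), (1980, "sin", 90)]
totals = {1900: 5_000_000, 1980: 8_000_000}
print(term_frequency_by_year(counts, totals, PURITY_TERMS))  # {1900: 84.0, 1980: 18.75}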

The info is here.

Sunday, September 20, 2020

Financial Conflicts of Interest are of Higher Ethical Priority than “Intellectual” Conflicts of Interest

Goldberg, D.S.
Bioethical Inquiry 17, 217–227 (2020).
https://doi.org/10.1007/s11673-020-09989-4

Abstract

The primary claim of this paper is that intellectual conflicts of interest (COIs) exist but are of lower ethical priority than COIs flowing from relationships between health professionals and commercial industry characterized by financial exchange. The paper begins by defining intellectual COIs and framing them in the context of scholarship on non-financial COIs. However, the paper explains that the crucial distinction is not between financial and non-financial COIs but is rather between motivations for bias that flow from relationships and those that do not. While commitments to particular ideas or perspectives can cause all manner of cognitive bias, that fact does not justify denying the enormous power that relationships featuring pecuniary gain have over professional behaviour in terms of care, policy, or both. Sufficient reason exists to take both intellectual COIs and financial COIs seriously, but this paper demonstrates why the latter are of higher ethical priority. Multiple reasons will be provided, but the primary rationale grounding the claim is that intellectual COIs may provide reasons to suspect cognitive bias, but they do not typically involve a loss of trust in a social role. The same cannot be said for COIs flowing from relationships between health professionals and commercial industries involving financial exchange. The paper then assumes arguendo that the primary rationale is mistaken and proceeds to show why the claims that intellectual COIs are more significant than relationship-based COIs are dubious on their own merits. The final section of the paper summarizes and concludes.

Conclusion

Intellectual COIs (iCOIs) exist, and they should be taken seriously. Nevertheless, financial COIs (fCOIs) are of greater ethical priority. The latter diminish trust in a social role to a much greater extent than do the former, at least in the broad run of cases. Moreover, it is not clear how providers could avoid developing intellectual commitments and preferences regarding particular therapeutic modalities or interventions—and even if we could prevent this from occurring, it is far from evident that we should. We can easily imagine cases where a studied determination to remain neutral regarding interventions would be an abdication of moral responsibility, would be decidedly unvirtuous, and would likely result in harm to care- and service-seekers. While we also have evidence that some intellectual commitments can motivate bias in ways that likely result in harm to care- or service-seekers, this premise only justifies taking iCOIs seriously—it is literally no argument for deprioritizing fCOIs. Although the fact that iCOIs are in many cases unavoidable is a weak justification for ignoring iCOIs, the comparable avoidability of the vast majority of fCOIs is indeed a reason for prioritizing the latter over the former.

A pdf is here.

Saturday, September 19, 2020

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Pratyusha Kalluri
nature.com
Originally posted 7 July 20

Here is an excerpt:

Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work gets little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programmes.

Many researchers have trouble seeing their intellectual work with AI as furthering inequity. Researchers such as me spend our days working on what are, to us, mathematically beautiful and useful systems, and hearing of AI success stories, such as winning Go championships or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those impacted by AI.

Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful. Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with their own agendas. At best, they are unreliable; at worst, they masquerade as ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing hidden limitations and failures of systems. They braid their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labour are extracted to create AI.

The info is here.