Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, February 25, 2021

For Biden Administration, Equity Initiatives Are A Moral Imperative

Juana Summers
Originally posted 6 Feb 21

Here is an excerpt:

Many of the Biden administration's early actions have had an equity through-line. For example, the executive actions that he signed last week include moves to strengthen anti-discrimination policies in housing, fighting back against racial animus toward Asian Americans and calling on the Justice Department to phase out its contracts with private prisons.

The early focus on equity is an attempt to account for differences in need among people with historically disadvantaged backgrounds. Civil rights leaders and activists have praised Biden's actions, though they have also made clear that they want to see more from Biden than just rhetoric.

"The work ahead will be operationalizing that, ensuring that equity doesn't just show up in speeches but it shows up in budgets. That equity isn't simply about restoring us back to policies from the Obama years, but about what is it going to take to move us forward," said Rashad Robinson, the president of the racial justice organization, Color of Change.

Susan Rice, the chair of Biden's Domestic Policy Council, made the case that there is a universal, concrete benefit to the equity policies Biden is championing.

"These aren't feel-good policies," Rice told reporters in the White House briefing room. "The evidence is clear. Investing in equity is good for economic growth, and it creates jobs for all Americans."

That echoes what Biden himself has said. He has linked the urgent equity focus of his administration to the fates of all Americans.

"This is time to act, and this is time to act because it's what the core values of this nation call us to do," he said. "And I believe that the vast majority of Americans — Democrats, Republicans and independents — share these values and want us to act as well."

Wednesday, February 24, 2021

The Moral Inversion of the Republican Party

Peter Wehner
The Atlantic
Originally posted 4 Feb 21

Here are two excerpts:

So how did the Republican Party end up in this dark place?

It’s complex, but surely part of the explanation rests with the base of the party, which today is composed of a significant number of people who are militant, inflamed, and tribalistic. They are populist, anti-institutional, and filled with grievances. They very nearly view politics as the war of all against all. And in far too many cases, they have entered a world of make-believe. That doesn’t describe the whole of the Republican Party’s grassroots movement, of course, but it describes a disturbingly large portion of it, and Republicans who hope to rebuild the party will get nowhere unless and until they acknowledge this. (Why the base has become radicalized is itself a tangled story.)

The base’s movement toward extremism preceded Trump, and inevitably complicated life for Republican lawmakers; they were understandably wary of speaking out in ways that would alienate their supporters, that would catalyze a primary challenge and might well cost them a general election. But that fear and reticence in the age of Trump—a man willing to cross any line, violate any standard, dehumanize any opponent—produced a catastrophe. In some significant respects, the GOP is a party that has been morally inverted.


Republicans can’t erase the past four years; with rare exceptions they were, to varying degrees, complicit in the Trump legacy—the lies, the lawlessness, the brutality of our politics, the wounds to our country. But there is the opportunity for Republicans in a post-Trump era to forge a different path, one that again places morality at the center of politics. Republicans can choose to live within the truth rather than within the lie, to stand for simple decency, to play a role in building a state that is reasonably humane and just. This starts with its political leadership, which needs to break some terribly bad habits, including thinking one thing and saying another. It starts with the courage to confront the maliciousness in its ranks rather than cater to it.

I don’t know if Republicans are up to the task right now, and I certainly understand those who doubt it. But there are plenty of people willing to help them try.

Tuesday, February 23, 2021

Mapping Principal Dimensions of Prejudice in the United States

R. Bergh & M. J. Brandt


Research is often guided by maps of elementary dimensions, such as core traits, foundations of morality, and principal stereotype dimensions. Yet there is no comprehensive map of prejudice dimensions. A major limiter of developing a prejudice map is the ad hoc sampling of target groups. We used a broad and largely theory-agnostic selection of groups to derive a map of principal dimensions of expressed prejudice in contemporary American society. Across a series of exploratory and confirmatory studies, we found three principal factors: prejudice against marginalized groups, prejudice against privileged/conservative groups, and prejudice against unconventional groups (with some inverse loadings for conservative groups). We documented distinct correlates for each factor, in terms of social identifications, perceived threats, personality, and behavioral manifestations. We discuss how the current map integrates several lines of research, and point to novel and underexplored insights about prejudice.


Concluding Remarks

Identifying distinct, broad domains of prejudice is important for the same reason as differentiating bacteria and viruses. While diseases may require very specific treatments, it is still helpful to know which broad category they fall in. Virtually all prejudice interventions to date are based on generic methods for changing mindsets based on “us” versus “them” (Paluck & Green, 2009). While value-based prejudice might fit with this kind of thinking (Cikara et al., 2017), that seems more questionable for biases based on status and power differences (Bergh et al., 2016). For that reason, it would seem relevant to outline basic kinds of prejudice, and here we propose that there are three such factors, at least in the American context: prejudice against privileged/conservative groups, prejudice against marginalized groups, and prejudice expressed toward either conventional or unconventional groups (inversely related).

With this research, we are not challenging research programs aimed at identifying specific explanations for specific group evaluations (e.g., Cottrell & Neuberg, 2005; Mackie et al., 2000; Mackie & Smith, 2015). Yet we believe it is important to also recognize that there are, in addition, clear and broad commonalities between prejudices toward different groups. Studying racism, sexism, and ageism as isolated phenomena, for instance, misses a bigger picture, especially when the common features account for more than half of the individual variability in these attitudes (e.g., Bergh et al., 2012; Ekehammar & Akrami, 2003). In the current studies, we also showed that such commonalities are associated with broad patterns of behavior: those who were prejudiced against marginalized and unconventional groups were less likely to donate in general, regardless of whether the charity would benefit a conservative, unconventional, or marginalized group cause. In other words, people who are generally prejudiced in the classic sense seem more self-serving (versus prosocial) in a fairly broad sense. Such findings clearly complement specific, emotion-driven biases in understanding human behavior.

Monday, February 22, 2021

Anger Increases Susceptibility to Misinformation

Greenstein M, Franklin N. 
Exp Psychol. 2020 May;67(3):202-209. 


The effect of anger on acceptance of false details was examined using a three-phase misinformation paradigm. Participants viewed an event, were presented with schema-consistent and schema-irrelevant misinformation about it, and were given a surprise source monitoring test to examine the acceptance of the suggested material. Between each phase of the experiment, they performed a task that either induced anger or maintained a neutral mood. Participants showed greater susceptibility to schema-consistent than schema-irrelevant misinformation. Anger did not affect either recognition or source accuracy for true details about the initial event, but suggestibility for false details increased with anger. In spite of this increase in source errors (i.e., misinformation acceptance), both confidence in the accuracy of source attributions and decision speed for incorrect judgments also increased with anger. Implications are discussed with respect to both the general effects of anger and real-world applications such as eyewitness memory.

Sunday, February 21, 2021

Moral Judgment as Categorization (MJAC)

McHugh, C., et al. 
(2019, September 17). 


Observed variability and complexity of judgments of ‘right’ and ‘wrong’ cannot currently be readily accounted for within extant approaches to understanding moral judgment. In response to this challenge, we present a novel perspective on categorization in moral judgment. Moral judgment as categorization (MJAC) incorporates principles of category-formation research while addressing key challenges to existing approaches to moral judgment. People develop skills in making context-relevant categorizations. That is, they learn that various objects (events, behaviors, people, etc.) can be categorized as morally ‘right’ or ‘wrong’. Repetition and rehearsal result in reliable, habitualized categorizations. According to this skill-formation account of moral categorization, the learning and habitualization of moral categories occur within goal-directed activity that is sensitive to various contextual influences. By allowing for the complexity of moral judgments, MJAC offers greater explanatory power than existing approaches, while also providing opportunities for a diverse range of new research questions.


It is not terribly simple: the good guys are not always stalwart and true, and the bad guys are not easily distinguished by their pointy horns or black hats. Knowing right from wrong is not a simple process of applying an abstract principle to a particular situation. Decades of research in moral psychology have shown that our moral judgments can vary from one situation to the next, while a growing body of evidence indicates that people cannot always provide reasons for their moral judgments. Understanding the making of moral judgments requires accounting for the full complexity and variability of our moral judgments. MJAC provides a framework for studying moral judgment that incorporates this dynamism and context-dependency into its core assumptions. We have argued that this sensitivity to the dynamical and context-dependent nature of moral judgments provides MJAC with superior explanations for known moral phenomena while simultaneously providing MJAC with the power to explain a greater and more diverse range of phenomena than existing approaches.

Saturday, February 20, 2021

How ecstasy and psilocybin are shaking up psychiatry

Paul Tullis
Originally posted 27 Jan 21

Here is an excerpt:

Psychedelic-assisted psychotherapy could provide needed options for debilitating mental-health disorders including PTSD, major depressive disorder, alcohol-use disorder, anorexia nervosa and more that kill thousands every year in the United States, and cost billions worldwide in lost productivity.

But the strategies represent a new frontier for regulators. “This is unexplored ground as far as a formally evaluated intervention for a psychiatric disorder,” says Walter Dunn, a psychiatrist at the University of California, Los Angeles, who sometimes advises the US Food and Drug Administration (FDA) on psychiatric drugs. Most drugs that treat depression and anxiety can be picked up at a neighbourhood pharmacy. These new approaches, by contrast, use a powerful substance in a therapeutic setting under the close watch of a trained psychotherapist, and regulators and treatment providers will need to grapple with how to implement that safely.

“The clinical trials that have been reported on depression have been done under highly circumscribed and controlled conditions,” says Bertha Madras, a psychobiologist at Harvard Medical School who is based at McLean Hospital in Belmont, Massachusetts. That will make interpreting results difficult. A treatment might show benefits in a trial because the experience is carefully coordinated, and everyone is well trained. Placebo controls pose another challenge because the drugs have such powerful effects.

And there are risks. In extremely rare instances, psychedelics such as psilocybin and LSD can evoke a lasting psychotic reaction, more often in people with a family history of psychosis. Those with schizophrenia, for example, are excluded from trials involving psychedelics as a result. MDMA, moreover, is an amphetamine derivative, so could come with risks for abuse.

But many researchers are excited. Several trials show dramatic results: in a study published in November 2020, for example, 71% of people who took psilocybin for major depressive disorder showed a greater than 50% reduction in symptoms after four weeks, and half of the participants entered remission [1]. Some follow-up studies after therapy, although small, have shown lasting benefits [2,3].

Friday, February 19, 2021

The Cognitive Science of Fake News

Pennycook, G., & Rand, D. G. 
(2020, November 18). 


We synthesize a burgeoning literature investigating why people believe and share “fake news” and other misinformation online. Surprisingly, the evidence contradicts a common narrative whereby partisanship and politically motivated reasoning explain failures to discern truth from falsehood. Instead, poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, and to the use of familiarity and other heuristics. Furthermore, there is a substantial disconnect between what people believe and what they will share on social media. This dissociation is largely driven by inattention, rather than purposeful sharing of misinformation. As a result, effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.

From the Discussion

Indeed, recent research shows that a simple accuracy-nudge intervention — specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) prior to making judgments about social media sharing — improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a random headline were sent to thousands of accounts that had recently shared links to misinformation sites. This subtle nudge significantly increased the quality of the content they subsequently shared; see Figure 3B. Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment, and having participants rate accuracy at the time of encoding protects against familiarity effects.

Thursday, February 18, 2021

Intuitive Expertise in Moral Judgements.

Wiegmann, A., & Horvath, J. 
(2020, December 22). 


According to the ‘expertise defence’, experimental findings which suggest that intuitive judgements about hypothetical cases are influenced by philosophically irrelevant factors do not undermine their evidential use in (moral) philosophy. This defence assumes that philosophical experts are unlikely to be influenced by irrelevant factors. We discuss relevant findings from experimental metaphilosophy that largely tell against this assumption. To advance the debate, we present the most comprehensive experimental study of intuitive expertise in ethics to date, which tests five well-known biases of judgement and decision-making among expert ethicists and laypeople. We found that even expert ethicists are affected by some of these biases, but also that they enjoy a slight advantage over laypeople in some cases. We discuss the implications of these results for the expertise defence, and conclude that they still do not support the defence as it is typically presented in (moral) philosophy.


We first considered the experimental restrictionist challenge to intuitions about cases, with a special focus on moral philosophy, and then introduced the expertise defence as the most popular reply. The expertise defence makes the empirically testable assumption that the case intuitions of expert philosophers are significantly less influenced by philosophically irrelevant factors than those of laypeople. The upshot of our discussion of relevant findings from experimental metaphilosophy was twofold: first, extant findings largely tell against the expertise defence, and second, the number of published studies and investigated biases is still fairly small. To advance the debate about the expertise defence in moral philosophy, we thus tested five well-known biases of judgement and decision-making among expert ethicists and laypeople. Averaged across all biases and scenarios, the intuitive judgements of both experts and laypeople were clearly susceptible to bias. However, moral philosophers were also less biased in two of the five cases (Focus and Prospect), although we found no significant expert-lay differences in the remaining three cases.

In comparison to previous findings (for example, Schwitzgebel and Cushman [2012, 2015]; Wiegmann et al. [2020]), our results appear to be relatively good news for the expertise defence, because they suggest that moral philosophers are less influenced by some morally irrelevant factors, such as a simple saving/killing framing. On the other hand, our study does not support the very general armchair versions of the expertise defence that one often finds in metaphilosophy, which try to reassure (moral) philosophers that they need not worry about the influence of philosophically irrelevant factors. At best, we need worry about only a few cases and a few human biases, and even that modest hypothesis can only be upheld on the basis of sufficient empirical research.

Wednesday, February 17, 2021

Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems

Heersmink, R. 
Sci Eng Ethics 23, 431–448 (2017). 


There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes and (b) have a certain moral status which is contingent on their cognitive status, and I consider (c) whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and ethics of (cognitive) technology.


Both Floridi and Verbeek argue that moral actions, either positive or negative, can be the result of interactions between humans and technology, giving artifacts a much more prominent role in ethical theory than most philosophers have. They both develop a non-anthropocentric systems approach to morality. Floridi focuses on large-scale ‘‘multiagent systems’’, whereas Verbeek focuses on small-scale ‘‘human–technology associations’’. But both attribute morality or moral agency to systems comprising humans and technological artifacts. On their views, moral agency is thus a system property and not found exclusively in human agents. Does this mean that the artifacts and software programs involved in the process have moral agency? Neither of them attributes moral agency to the artifactual components of the larger system. It is not inconsistent to say that the human–artifact system has moral agency without saying that its artifactual components have moral agency. Systems often have different properties than their components.

The difference between Floridi's and Verbeek's approaches roughly mirrors the difference between distributed and extended cognition, in that Floridi and distributed cognition theory focus on large-scale systems without central controllers, whereas Verbeek and extended cognition theory focus on small-scale systems in which agents interact with and control an informational artifact. In Floridi's example, the technology seems semi-autonomous: the software and computer systems automatically do what they are designed to do. Presumably, the money is automatically transferred to Oxfam, implying that technology is a mere cog in a larger socio-technical system that realises positive moral outcomes. There seems to be no central controller in this system: it is therefore difficult to see it as an extended agency whose intentions are being realised.