Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, February 28, 2021

How peer influence shapes value computation in moral decision-making

Yu, H., Siegel, J., Clithero, J., & Crockett, M. 
(2021, January 16).


Moral behavior is susceptible to peer influence. How does information from peers influence moral preferences? We used drift-diffusion modeling to show that peer influence changes the value of moral behavior by prioritizing the choice attributes that align with peers’ goals. Study 1 (N = 100; preregistered) showed that participants accurately inferred the goals of prosocial and antisocial peers when observing their moral decisions. In Study 2 (N = 68), participants made moral decisions before and after observing the decisions of a prosocial or antisocial peer. Peer observation caused participants’ own preferences to resemble those of their peers. This peer influence effect on value computation manifested as an increased weight on choice attributes promoting the peers’ goals that occurred independently from peer influence on initial choice bias. Participants’ self-reported awareness of influence tracked more closely with computational measures of prosocial than antisocial influence. Our findings have implications for bolstering and blocking the effects of prosocial and antisocial influence on moral behavior.

Saturday, February 27, 2021

Following your group or your morals? The in-group promotes immoral behavior while the out-group buffers against it

Vives, M., Cikara, M., & FeldmanHall, O. 
(2021, February 5). 


People learn by observing others, albeit not uniformly. Witnessing an immoral behavior causes observers to commit immoral actions, especially when the perpetrator is part of the in-group. Does conformist behavior hold when observing the out-group? We conducted three experiments (N = 1,358) exploring how observing an (im)moral in- or out-group member changed decisions relating to justice: punitive, selfish, or dishonest choices. Only immoral in-groups increased immoral actions, while the same immoral behavior from out-groups had no effect. In contrast, a compassionate or generous individual did not make people more moral, regardless of group membership. When there was a loophole to deny cheating, neither an immoral in-group nor an immoral out-group member changed dishonest behavior. Compared to observing an honest in-group member, people became more honest themselves after observing an honest out-group member, revealing that out-groups can enhance morality. Depending on the severity of the moral action, the in-group licenses immoral behavior while the out-group buffers against it.

General discussion

Choosing compassion over punishment, generosity over selfishness, and honesty over dishonesty is the byproduct of many factors, including virtue-signaling, norm compliance, and self-interest. There are times, however, when moral choices are shaped by the mere observation of what others do in the same situation (Gino & Galinsky, 2012; Nook et al., 2016). Here, we investigated how moral decisions are shaped by one's in- or out-group, a factor known to shift willingness to conform (Gino et al., 2009). Conceptually replicating past research (Gino et al., 2009), results reveal that immoral behaviors were only transmitted by the in-group: while participants became more punitive or selfish after observing a punitive or selfish in-group, they did not increase their immoral behavior after observing an immoral out-group (Experiments 1 & 2). However, when the same manipulation was deployed in a context where the immoral acts could not be traced, neither the dishonest in- nor out-group member produced any behavioral shifts in our subjects (Experiment 3). These results suggest that immoral behaviors are not transmitted equally by all individuals. Rather, they are more likely to be transmitted within groups than between groups. In contrast, prosocial behaviors were rarely transmitted by either group. Participants did not become more compassionate or generous after observing a compassionate or generous in- or out-group member (Experiments 1 & 2). We only found modifications of prosocial behavior when participants observed another participant behaving in a costly honest manner, and this was modulated by group membership. Witnessing an honest out-group member attenuated the degree to which participants themselves cheated compared to participants who witnessed an honest in-group member (see Table 1 for a summary of results).
Together, these findings suggest that the transmission of moral corruption is both determined by group membership and is sensitive to the degree of moral transgression. Namely, given the findings from Experiment 3, in-groups appear to license moral corruption, while virtuous out-groups can buffer against it.

(Italics added.)

Friday, February 26, 2021

Supported Decision Making With People at the Margins of Autonomy

A. Peterson, J. Karlawish & E. Largent (2020) 
The American Journal of Bioethics
DOI: 10.1080/15265161.2020.1863507


This article argues that supported decision making is ideal for people with dynamic cognitive and functional impairments that place them at the margins of autonomy. First, we argue that guardianship and similar surrogate decision-making frameworks may be inappropriate for people with dynamic impairments. Second, we provide a conceptual foundation for supported decision making for individuals with dynamic impairments, which integrates the social model of disability with relational accounts of autonomy. Third, we propose a three-step model that specifies the necessary conditions of supported decision making: identifying domains for support; identifying kinds of supports; and reaching a mutually acceptable and formal agreement. Finally, we identify a series of challenges for supported decision making, provide preliminary responses, and highlight avenues for future bioethics research.

Here is an excerpt:

Are Beneficiaries Authorized to Enter into a Supported Decision-Making Agreement?

The need for supported decision making implies that a beneficiary has diminished decision-making capacity. But there is a presumption that she is still capable of entering into a supported decision-making agreement. What justifies this presumption?

One way to address this challenge is to distinguish the capacity to enter into a supported decision-making agreement from the capacity to make the kinds of decisions enumerated in the agreement. For example, it is recognized in U.S. law that people who lack capacity to make medical decisions at the end of life may still have capacity to assign a surrogate decision maker (Kim and Appelbaum 2006). This practice is justified because the threshold of capacity required to appoint a surrogate is lower than that required to consent to more complex decisions. Similarly, the kinds of decisions enumerated in supported decision-making agreements will often be complex and could result in unfortunate consequences if poor decisions are made. But the decision to enter into a supported decision-making agreement is relatively less complex. Moreover, these agreements are often formalizations of ongoing, trusting relationships with friends and family intended to enhance a beneficiary's wellbeing. Thus, the threshold of capacity to enter into a supported decision-making agreement is justifiably low. People with marginal capacity would reasonably satisfy this threshold.

This response, however, raises questions about the minimum level of decision-making capacity required to enter into a supported decision-making agreement. The project of supported decision making would benefit from future scholarship that describes the specific decisional abilities that show a person with dynamic impairments can (or cannot) enter into a valid supported decision-making agreement.

Thursday, February 25, 2021

For Biden Administration, Equity Initiatives Are A Moral Imperative

Juana Summers
Originally posted 6 Feb 21

Here is an excerpt:

Many of the Biden administration's early actions have had an equity through-line. For example, the executive actions that he signed last week include moves to strengthen anti-discrimination policies in housing, fighting back against racial animus toward Asian Americans and calling on the Justice Department to phase out its contracts with private prisons.

The early focus on equity is an attempt to account for differences in need among people with historically disadvantaged backgrounds. Civil rights leaders and activists have praised Biden's actions, though they have also made clear that they want to see more from Biden than just rhetoric.

"The work ahead will be operationalizing that, ensuring that equity doesn't just show up in speeches but it shows up in budgets. That equity isn't simply about restoring us back to policies from the Obama years, but about what is it going to take to move us forward," said Rashad Robinson, the president of the racial justice organization, Color of Change.

Susan Rice, the chair of Biden's Domestic Policy Council, made the case that there is a universal, concrete benefit to the equity policies Biden is championing.

"These aren't feel-good policies," Rice told reporters in the White House briefing room. "The evidence is clear. Investing in equity is good for economic growth, and it creates jobs for all Americans."

That echoes what Biden himself has said. He has linked the urgent equity focus of his administration to the fates of all Americans.

"This is time to act, and this is time to act because it's what the core values of this nation call us to do," he said. "And I believe that the vast majority of Americans — Democrats, Republicans and independents — share these values and want us to act as well."

Wednesday, February 24, 2021

The Moral Inversion of the Republican Party

Peter Wehner
The Atlantic
Originally posted 4 Feb 21

Here are two excerpts:

So how did the Republican Party end up in this dark place?

It’s complex, but surely part of the explanation rests with the base of the party, which today is composed of a significant number of people who are militant, inflamed, and tribalistic. They are populist, anti-institutional, and filled with grievances. They very nearly view politics as the war of all against all. And in far too many cases, they have entered a world of make-believe. That doesn’t describe the whole of the Republican Party’s grassroots movement, of course, but it describes a disturbingly large portion of it, and Republicans who hope to rebuild the party will get nowhere unless and until they acknowledge this. (Why the base has become radicalized is itself a tangled story.)

The base’s movement toward extremism preceded Trump, and inevitably complicated life for Republican lawmakers; they were understandably wary of speaking out in ways that would alienate their supporters, that would catalyze a primary challenge and might well cost them a general election. But that fear and reticence in the age of Trump—a man willing to cross any line, violate any standard, dehumanize any opponent—produced a catastrophe. In some significant respects, the GOP is a party that has been morally inverted.


Republicans can’t erase the past four years; with rare exceptions they were, to varying degrees, complicit in the Trump legacy—the lies, the lawlessness, the brutality of our politics, the wounds to our country. But there is the opportunity for Republicans in a post-Trump era to forge a different path, one that again places morality at the center of politics. Republicans can choose to live within the truth rather than within the lie, to stand for simple decency, to play a role in building a state that is reasonably humane and just. This starts with its political leadership, which needs to break some terribly bad habits, including thinking one thing and saying another. It starts with the courage to confront the maliciousness in its ranks rather than cater to it.

I don’t know if Republicans are up to the task right now, and I certainly understand those who doubt it. But there are plenty of people willing to help them try.

Tuesday, February 23, 2021

Mapping Principal Dimensions of Prejudice in the United States

R. Bergh & M. J. Brandt


Research is often guided by maps of elementary dimensions, such as core traits, foundations of morality, and principal stereotype dimensions. Yet there is no comprehensive map of prejudice dimensions. A major limiter of developing a prejudice map is the ad hoc sampling of target groups. We used a broad and largely theory-agnostic selection of groups to derive a map of principal dimensions of expressed prejudice in contemporary American society. Across a series of exploratory and confirmatory studies, we found three principal factors: prejudice against marginalized groups, prejudice against privileged/conservative groups, and prejudice against unconventional groups (with some inverse loadings for conservative groups). We documented distinct correlates for each factor, in terms of social identifications, perceived threats, personality, and behavioral manifestations. We discuss how the current map integrates several lines of research, and point to novel and underexplored insights about prejudice.


Concluding Remarks

Identifying distinct, broad domains of prejudice is important for the same reason as differentiating bacteria and viruses. While diseases may require very specific treatments, it is still helpful to know which broad category they fall in. Virtually all prejudice interventions to date are based on generic methods for changing mindsets based on "us" versus "them" (Paluck & Green, 2009). While value-based prejudice might fit with this kind of thinking (Cikara et al., 2017), that seems more questionable for biases based on status and power differences (Bergh et al., 2016). For that reason, it would seem relevant to outline basic kinds of prejudice, and here we propose that there are three such factors, at least in the American context: prejudice against privileged/conservative groups, prejudice against marginalized groups, and prejudice expressed toward either conventional or unconventional groups (inversely related).

With this research, we are not challenging research programs aimed at identifying specific explanations for specific group evaluations (e.g., Cottrell & Neuberg, 2005; Mackie et al., 2000; Mackie & Smith, 2015). Yet, we believe it is important to also recognize that there are, in addition, clear and broad commonalities between prejudices toward different groups. Studying racism, sexism, and ageism as isolated phenomena, for instance, is missing a bigger picture, especially when the common features account for more than half of the individual variability in these attitudes (e.g., Bergh et al., 2012; Ekehammar & Akrami, 2003). In the current studies, we also showed that such commonalities are associated with broad patterns of behaviors: those who were prejudiced against marginalized and unconventional groups were less likely to donate in general, regardless of whether the charity would benefit a conservative, unconventional, or marginalized group cause. In other words, people who are generally prejudiced in the classic sense seem more self-serving (versus prosocial) in a fairly broad sense. Such findings are clearly complementary to specific, emotion-driven biases for understanding human behavior.
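The factor structure described above can be illustrated with a small simulation. This is a minimal, hypothetical sketch, not the authors' data or method (they used exploratory and confirmatory factor analysis; this uses plain PCA on invented ratings): three latent prejudice factors, each driving ratings of a subset of target groups, surface as three dominant principal components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: simulate "feeling thermometer" ratings of 12
# target groups driven by three latent prejudice factors (marginalized,
# privileged/conservative, unconventional). The group structure, loadings,
# and noise level are invented for this sketch.
n_respondents, n_groups, n_factors = 500, 12, 3

loadings = np.zeros((n_groups, n_factors))
loadings[0:4, 0] = 1.0   # four groups loading on factor 1
loadings[4:8, 1] = 1.0   # four groups loading on factor 2
loadings[8:12, 2] = 1.0  # four groups loading on factor 3

latent = rng.normal(size=(n_respondents, n_factors))
ratings = latent @ loadings.T + 0.3 * rng.normal(size=(n_respondents, n_groups))

# Principal components of the centered rating matrix.
centered = ratings - ratings.mean(axis=0)
cov = centered.T @ centered / (n_respondents - 1)
eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

# With three strong latent factors, three components dominate the variance.
explained = eigvals / eigvals.sum()
print(round(float(explained[:3].sum()), 2))
```

In this toy setup the first three components absorb the bulk of the variance, which is the pattern one would expect if a few broad prejudice dimensions underlie evaluations of many specific groups.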

Monday, February 22, 2021

Anger Increases Susceptibility to Misinformation

Greenstein M, Franklin N. 
Exp Psychol. 2020 May;67(3):202-209. 


The effect of anger on acceptance of false details was examined using a three-phase misinformation paradigm. Participants viewed an event, were presented with schema-consistent and schema-irrelevant misinformation about it, and were given a surprise source monitoring test to examine the acceptance of the suggested material. Between each phase of the experiment, they performed a task that either induced anger or maintained a neutral mood. Participants showed greater susceptibility to schema-consistent than schema-irrelevant misinformation. Anger did not affect either recognition or source accuracy for true details about the initial event, but suggestibility for false details increased with anger. In spite of this increase in source errors (i.e., misinformation acceptance), both confidence in the accuracy of source attributions and decision speed for incorrect judgments also increased with anger. Implications are discussed with respect to both the general effects of anger and real-world applications such as eyewitness memory.

Sunday, February 21, 2021

Moral Judgment as Categorization (MJAC)

McHugh, C., et al. 
(2019, September 17). 


Observed variability and complexity of judgments of 'right' and 'wrong' cannot currently be readily accounted for within extant approaches to understanding moral judgment. In response to this challenge we present a novel perspective on categorization in moral judgment. Moral judgment as categorization (MJAC) incorporates principles of category formation research while addressing key challenges to existing approaches to moral judgment. People develop skills in making context-relevant categorizations. That is, they learn that various objects (events, behaviors, people etc.) can be categorized as morally 'right' or 'wrong'. Repetition and rehearsal result in reliable, habitualized categorizations. According to this skill formation account of moral categorization, the learning and habitualization of moral categories occur within goal-directed activity that is sensitive to various contextual influences. By allowing for the complexity of moral judgments, MJAC offers greater explanatory power than existing approaches, while also providing opportunities for a diverse range of new research questions.


It is not terribly simple: the good guys are not always stalwart and true, and the bad guys are not easily distinguished by their pointy horns or black hats. Knowing right from wrong is not a simple process of applying an abstract principle to a particular situation. Decades of research in moral psychology have shown that our moral judgments can vary from one situation to the next, while a growing body of evidence indicates that people cannot always provide reasons for their moral judgments. Understanding the making of moral judgments requires accounting for the full complexity and variability of our moral judgments. MJAC provides a framework for studying moral judgment that incorporates this dynamism and context-dependency into its core assumptions. We have argued that this sensitivity to the dynamical and context-dependent nature of moral judgments provides MJAC with superior explanations for known moral phenomena while simultaneously providing MJAC with the power to explain a greater and more diverse range of phenomena than existing approaches.

Saturday, February 20, 2021

How ecstasy and psilocybin are shaking up psychiatry

Paul Tullis
Originally posted 27 Jan 21

Here is an excerpt:

Psychedelic-assisted psychotherapy could provide needed options for debilitating mental-health disorders including PTSD, major depressive disorder, alcohol-use disorder, anorexia nervosa and more that kill thousands every year in the United States, and cost billions worldwide in lost productivity.

But the strategies represent a new frontier for regulators. “This is unexplored ground as far as a formally evaluated intervention for a psychiatric disorder,” says Walter Dunn, a psychiatrist at the University of California, Los Angeles, who sometimes advises the US Food and Drug Administration (FDA) on psychiatric drugs. Most drugs that treat depression and anxiety can be picked up at a neighbourhood pharmacy. These new approaches, by contrast, use a powerful substance in a therapeutic setting under the close watch of a trained psychotherapist, and regulators and treatment providers will need to grapple with how to implement that safely.

“The clinical trials that have been reported on depression have been done under highly circumscribed and controlled conditions,” says Bertha Madras, a psychobiologist at Harvard Medical School who is based at McLean Hospital in Belmont, Massachusetts. That will make interpreting results difficult. A treatment might show benefits in a trial because the experience is carefully coordinated, and everyone is well trained. Placebo controls pose another challenge because the drugs have such powerful effects.

And there are risks. In extremely rare instances, psychedelics such as psilocybin and LSD can evoke a lasting psychotic reaction, more often in people with a family history of psychosis. Those with schizophrenia, for example, are excluded from trials involving psychedelics as a result. MDMA, moreover, is an amphetamine derivative, so could come with risks for abuse.

But many researchers are excited. Several trials show dramatic results: in a study published in November 2020, for example, 71% of people who took psilocybin for major depressive disorder showed a greater than 50% reduction in symptoms after four weeks, and half of the participants entered remission [1]. Some follow-up studies after therapy, although small, have shown lasting benefits [2,3].

Friday, February 19, 2021

The Cognitive Science of Fake News

Pennycook, G., & Rand, D. G. 
(2020, November 18). 


We synthesize a burgeoning literature investigating why people believe and share “fake news” and other misinformation online. Surprisingly, the evidence contradicts a common narrative whereby partisanship and politically motivated reasoning explain failures to discern truth from falsehood. Instead, poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, and to the use of familiarity and other heuristics. Furthermore, there is a substantial disconnect between what people believe and what they will share on social media. This dissociation is largely driven by inattention, rather than purposeful sharing of misinformation. As a result, effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.
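The abstract's closing suggestion, using crowdsourced veracity ratings inside a ranking algorithm, can be sketched in a few lines. This is a hedged illustration only: the items, field names, and blending weight are invented, and real platform ranking systems are far more complex than a single weighted score.

```python
# Hypothetical sketch: blend engagement with crowdsourced veracity ratings
# when ranking a feed. All data and parameters below are invented.
posts = [
    {"id": "a", "engagement": 0.9, "crowd_veracity": 0.2},  # viral but dubious
    {"id": "b", "engagement": 0.5, "crowd_veracity": 0.9},  # accurate, modest reach
    {"id": "c", "engagement": 0.7, "crowd_veracity": 0.6},
]

def score(post, veracity_weight=0.6):
    # Convex blend: a larger veracity_weight promotes accurate content.
    return ((1 - veracity_weight) * post["engagement"]
            + veracity_weight * post["crowd_veracity"])

ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # the dubious-but-viral post drops to last
```

The design choice here is simply that veracity enters the objective at all; with `veracity_weight=0`, the sort reduces to pure engagement ranking, which is the failure mode the authors describe.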

From the Discussion

Indeed, recent research shows that a simple accuracy nudge intervention, specifically having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) prior to making judgments about social media sharing, improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a random headline were sent to thousands of accounts that recently shared links to misinformation sites. This subtle nudge significantly increased the quality of the content they subsequently shared; see Figure 3B. Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment, and having participants rate accuracy at the time of encoding protects against familiarity effects.

Thursday, February 18, 2021

Intuitive Expertise in Moral Judgements.

Wiegmann, A., & Horvath, J. 
(2020, December 22). 


According to the ‘expertise defence’, experimental findings which suggest that intuitive judgements about hypothetical cases are influenced by philosophically irrelevant factors do not undermine their evidential use in (moral) philosophy. This defence assumes that philosophical experts are unlikely to be influenced by irrelevant factors. We discuss relevant findings from experimental metaphilosophy that largely tell against this assumption. To advance the debate, we present the most comprehensive experimental study of intuitive expertise in ethics to date, which tests five well-known biases of judgement and decision-making among expert ethicists and laypeople. We found that even expert ethicists are affected by some of these biases, but also that they enjoy a slight advantage over laypeople in some cases. We discuss the implications of these results for the expertise defence, and conclude that they still do not support the defence as it is typically presented in (moral) philosophy.


We first considered the experimental restrictionist challenge to intuitions about cases, with a special focus on moral philosophy, and then introduced the expertise defence as the most popular reply. The expertise defence makes the empirically testable assumption that the case intuitions of expert philosophers are significantly less influenced by philosophically irrelevant factors than those of laypeople. The upshot of our discussion of relevant findings from experimental metaphilosophy was twofold: first, extant findings largely tell against the expertise defence, and second, the number of published studies and investigated biases is still fairly small. To advance the debate about the expertise defence in moral philosophy, we thus tested five well-known biases of judgement and decision-making among expert ethicists and laypeople. Averaged across all biases and scenarios, the intuitive judgements of both experts and laypeople were clearly susceptible to bias. However, moral philosophers were also less biased in two of the five cases (Focus and Prospect), although we found no significant expert-lay differences in the remaining three cases.

In comparison to previous findings (for example, Schwitzgebel and Cushman [2012, 2015]; Wiegmann et al. [2020]), our results appear to be relatively good news for the expertise defence, because they suggest that moral philosophers are less influenced by some morally irrelevant factors, such as a simple saving/killing framing. On the other hand, our study does not support the very general armchair versions of the expertise defence that one often finds in metaphilosophy, which try to reassure (moral) philosophers that they need not worry about the influence of philosophically irrelevant factors. At best, we need not worry about just a few cases and a few human biases, and even that modest hypothesis can only be upheld on the basis of sufficient empirical research.

Wednesday, February 17, 2021

Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems

Heersmink, R. 
Sci Eng Ethics 23, 431–448 (2017). 


There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes and (b) have a certain moral status which is contingent on their cognitive status, and I consider (c) whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and ethics of (cognitive) technology.


Both Floridi and Verbeek argue that moral actions, either positive or negative, can be the result of interactions between humans and technology, giving artifacts a much more prominent role in ethical theory than most philosophers have. They both develop a non-anthropocentric systems approach to morality. Floridi focuses on large-scale ''multiagent systems'', whereas Verbeek focuses on small-scale ''human–technology associations''. But both attribute morality or moral agency to systems comprising humans and technological artifacts. On their views, moral agency is thus a system property and not found exclusively in human agents. Does this mean that the artifacts and software programs involved in the process have moral agency? Neither of them attributes moral agency to the artifactual components of the larger system. It is not inconsistent to say that the human-artifact system has moral agency without saying that its artifactual components have moral agency. Systems often have different properties than their components. The difference between Floridi's and Verbeek's approaches roughly mirrors the difference between distributed and extended cognition, in that Floridi and distributed cognition theory focus on large-scale systems without central controllers, whereas Verbeek and extended cognition theory focus on small-scale systems in which agents interact with and control an informational artifact. In Floridi's example, the technology seems semi-autonomous: the software and computer systems automatically do what they are designed to do. Presumably, the money is automatically transferred to Oxfam, implying that technology is a mere cog in a larger socio-technical system that realises positive moral outcomes. There seems to be no central controller in this system: it is therefore difficult to see it as an extended agency whose intentions are being realised.

Tuesday, February 16, 2021

Strategic Regulation of Empathy

Weisz, E., & Cikara, M. 
(2020, October 9).


Empathy is an integral part of socio-emotional well-being, yet recent research has highlighted some of its downsides. Here we examine literature that establishes when, how much, and what aspects of empathy promote specific outcomes. After reviewing a theoretical framework which characterizes empathy as a suite of separable components, we examine evidence showing how dissociations of these components affect important socio-emotional outcomes and describe emerging evidence suggesting that these components can be independently and deliberately modulated. Finally, we advocate for a new approach to a multi-component view of empathy which accounts for the interrelations among components. This perspective advances scientific conceptualization of empathy and offers suggestions for tailoring empathy to help people realize their social, emotional, and occupational goals.

From Concluding Remarks

Early research on empathy regarded it as a monolithic construct. This characterization ultimately gave rise to a second wave of empathy-related research, which explicitly examined dissociations among empathy-related components. Subsequently, researchers noticed that individual components held different predictive power over key outcomes such as helping and occupational burnout. As described above, however, there are many instances in which these components track together in the real world, suggesting that although they can dissociate, they often operate in tandem.

Because empathy-related components rely on separable neural systems, the field of social neuroscience has already made significant progress toward the goal of characterizing instances when components do (or do not) track together. For example, although affective and cognitive channels can independently contribute to judgments of others' emotional states, they also operate in synchrony during more naturalistic socio-emotional tasks. However, far more behavioral research is needed to characterize the co-occurrence of components in people's everyday social interactions. Because people differ in their tendencies to engage distinct components of empathy, a better understanding of the separability and interrelations of these components in real-world social scenarios can help tailor empathy-training programs to promote desirable outcomes. Empathy-training efforts are on average effective (Hedges' g = 0.51) but generally intervene on empathy as a whole (rather than specific components).

Monday, February 15, 2021

Response time modelling reveals evidence for multiple, distinct sources of moral decision caution

Andrejević, M., et al. 
(2020, November 13). 


People are often cautious in delivering moral judgments of others’ behaviours, as falsely accusing others of wrongdoing can be costly for social relationships. Caution might further be present when making judgements in information-dynamic environments, as contextual updates can change our minds. This study investigated the processes with which moral valence and context expectancy drive caution in moral judgements. Across two experiments, participants (N = 122) made moral judgements of others’ sharing actions. Prior to judging, participants were informed whether contextual information regarding the deservingness of the recipient would follow. We found that participants slowed their moral judgements when judging negatively valenced actions and when expecting contextual updates. Using a diffusion decision model framework, these changes were explained by shifts in drift rate and decision bias (valence) and boundary setting (context), respectively. These findings demonstrate how moral decision caution can be decomposed into distinct aspects of the unfolding decision process.

From the Discussion

Our finding that participants slowed their judgments when expecting contextual information is consistent with previous research showing that people are more cautious when aware that they are more prone to making mistakes. Notably, previous research has demonstrated this effect for decision mistakes in tasks in which people are not given additional information or a chance to change their minds. The current findings show that this effect also extends to dynamic decision-making contexts, in which learning additional information can lead to changes of mind. Crucially, here we show that this type of caution can be explained by the widening of the decision boundary separation in a process model of decision-making.
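The boundary-widening effect the authors describe can be illustrated with a minimal drift-diffusion simulation. This is a hedged sketch only, not the authors' fitted model: the function names, parameter values, and time step below are all illustrative assumptions.

```python
import random

def simulate_ddm(drift, boundary, bias=0.5, noise=1.0, dt=0.001,
                 max_t=10.0, rng=random):
    """Simulate one drift-diffusion trial.

    Evidence starts at bias * boundary and accumulates noisily with the
    given drift until it hits 0 or the boundary.
    Returns (choice, response_time).
    """
    x = bias * boundary
    t = 0.0
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= boundary else 0), t

def mean_rt(boundary, n=2000, seed=1):
    """Average response time over n simulated trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        _, t = simulate_ddm(drift=1.0, boundary=boundary, rng=rng)
        total += t
    return total / n
```

Under these assumptions, widening the boundary separation (e.g. comparing `mean_rt(1.0)` with `mean_rt(2.0)`) slows responses without changing the quality of the evidence, which is the computational signature of caution the paper attributes to context expectancy.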

Sunday, February 14, 2021

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Frank, L., Nyholm, S. 
Artif Intell Law 25, 305–323 (2017).


The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

Here is an excerpt:

Here, we want to ask a similar question regarding how and whether sex robots should be brought into the legal community. Our overarching question is: is it conceivable, possible, and desirable to create autonomous and smart sex robots that are able to give (or withhold) consent to sex with a human person? For each of these three sub-questions (whether it is conceivable, possible, and desirable to create sex robots that can consent) we consider both “no” and “yes” answers. We are here mainly interested in exploring these questions in general terms and motivating further discussion. However, in discussing each of these sub-questions we will argue that, prima facie, the “yes” answers appear more convincing than the “no” answers—at least if the sex robots are of a highly sophisticated sort.

The rest of our discussion divides into the following sections. We start by saying a little more about what we understand by a “sex robot”. We also say more about what consent is, and we review the small literature that is starting to emerge on our topic (Sect. 1). We then turn to the questions of whether it is conceivable, possible, and desirable to create sex robots capable of giving consent—and discuss “no” and “yes” answers to all of these questions. When we discuss the case for considering it desirable to require robotic consent to sex, we argue that there can be both non-instrumental and instrumental reasons in favor of such a requirement (Sects. 2–4). We conclude with a brief summary (Sect. 5).

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020, 


Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.


• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• Deliberating agent earns more blame than implementing one for a bad outcome.

• Asymmetry in blame vs. praise for the decider but not the implementer

• Asymmetry in blame vs. praise suggests only the decider is judged as moral agent

• Effect is attenuated if decider's job is primarily to implement.

Friday, February 12, 2021

Measuring Implicit Intergroup Biases.

Lai, C. K., & Wilson, M. 
(2020, December 9).


Implicit intergroup biases are automatically activated prejudices and stereotypes that may influence judgments of others on the basis of group membership. We review evidence on the measurement of implicit intergroup biases, finding: implicit intergroup biases reflect the personal and the cultural, implicit measures vary in reliability and validity, and implicit measures vary greatly in their prediction of explicit and behavioral outcomes due to theoretical and methodological moderators. We then discuss three challenges to the application of implicit intergroup biases to real‐world problems: (1) a lack of research on social groups of scientific and public interest, (2) developing implicit measures with diagnostic capabilities, and (3) resolving ongoing ambiguities in the relationship between implicit bias and behavior. Making progress on these issues will clarify the role of implicit intergroup biases in perpetuating inequality.


Predictive Validity

Implicit intergroup biases are predictive of explicit biases,  behavioral outcomes,  and regional differences in inequality. 

Relationship to explicit prejudice & stereotypes. 

The relationship between implicit and explicit measures of intergroup bias is consistently positive, but the size of the relationship depends on the topic. In a large-scale study of 57 attitudes (Nosek, 2005), the relationship between IAT scores and explicit intergroup attitudes was as high as r = .59 (Democrats vs. Republicans) and as low as r = .33 (European Americans vs. African Americans) or r = .10 (Thin people vs. Fat people). Generally, implicit-explicit relations are lower in studies on intergroup topics than in other topics (Cameron et al., 2012; Greenwald et al., 2009). The strength of the relationship between implicit and explicit intergroup biases is moderated by factors which have been documented in one large-scale study and several meta-analyses (Cameron et al., 2012; Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005; Oswald et al., 2013). Much of this work has focused on the IAT, finding that implicit-explicit relations are stronger when the attitude is more strongly elaborated, perceived as distinct from other people, has a bipolar structure (i.e., liking for one group implies disliking of the other), and the explicit measure assesses a relative preference rather than an absolute preference (Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005).

Note: If you are a healthcare professional, you need to be aware of these biases.

Thursday, February 11, 2021

Paranoia and Belief Updating During a Crisis

Suthaharan, P., Reed, E.,  et al. 
(2020, September 4). 


The 2019 coronavirus (COVID-19) pandemic has made the world seem unpredictable. During such crises we can experience concerns that others might be against us, culminating perhaps in paranoid conspiracy theories. Here, we investigate paranoia and belief updating in an online sample (N=1,010) in the United States of America (U.S.A). We demonstrate that the pandemic increased individuals’ self-rated paranoia and rendered their task-based belief updating more erratic. Local lockdown and reopening policies, as well as culture more broadly, markedly influenced participants’ belief-updating: an early and sustained lockdown rendered people’s belief updating less capricious. Masks are clearly an effective public health measure against COVID-19. However, state-mandated mask wearing increased paranoia and induced more erratic behaviour. Remarkably, this was most evident in those states where adherence to mask wearing rules was poor but where rule following is typically more common. This paranoia may explain the lack of compliance with this simple and effective countermeasure. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable, but at the same time predicted more rewards. In a follow-up study we found people who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines – again, mask attitude and conspiratorial beliefs were associated with erratic task behaviour and changed priors. Future public health responses to the pandemic might leverage these observations, mollifying paranoia and increasing adherence by tempering people’s expectations of others’ behaviour, and the environment more broadly, and reinforcing compliance.

Wednesday, February 10, 2021

Beyond Moral Dilemmas: The Role of Reasoning in Five Categories of Utilitarian Judgment

F. Jaquet & F. Cova
Cognition, Volume 209, 
April 2021, 104572


Over the past two decades, the study of moral reasoning has been heavily influenced by Joshua Greene’s dual-process model of moral judgment, according to which deontological judgments are typically supported by intuitive, automatic processes while utilitarian judgments are typically supported by reflective, conscious processes. However, most of the evidence gathered in support of this model comes from the study of people’s judgments about sacrificial dilemmas, such as Trolley Problems. To what extent does this model generalize to other debates in which deontological and utilitarian judgments conflict, such as the existence of harmless moral violations, the difference between actions and omissions, the extent of our duties of assistance, and the appropriate justification for punishment? To find out, we conducted a series of five studies on the role of reflection in these kinds of moral conundrums. In Study 1, participants were asked to answer under cognitive load. In Study 2, participants had to answer under a strict time constraint. In Studies 3 to 5, we sought to promote reflection through exposure to counter-intuitive reasoning problems or direct instruction. Overall, our results offer strong support to the extension of Greene’s dual-process model to moral debates on the existence of harmless violations and partial support to its extension to moral debates on the extent of our duties of assistance.

From the Discussion Section

The results of Study 1 led us to conclude that certain forms of utilitarian judgments require more cognitive resources than their deontological counterparts. The results of Studies 2 and 5 suggest that making utilitarian judgments also tends to take more time than making deontological judgments. Finally, because our manipulation was unsuccessful, Studies 3 and 4 do not allow us to conclude anything about the intuitiveness of utilitarian judgments. In Study 5, we were nonetheless successful in manipulating participants’ reliance on their intuitions (in the Intuition condition), but this did not seem to have any effect on the rate of utilitarian judgments. Overall, our results allow us to conclude that, compared to deontological judgments, utilitarian judgments tend to rely on slower and more cognitively demanding processes. But they do not allow us to conclude anything about characteristics such as automaticity or accessibility to consciousness: for example, while solving a very difficult math problem might take more time and require more resources than solving a mildly difficult math problem, there is no reason to think that the former process is more automatic and more conscious than the latter—though it will clearly be experienced as more effortful. Moreover, although one could be tempted to conclude that our data show that, as predicted by Greene, utilitarian judgment is experienced as more effortful than deontological judgment, we collected no data about such experience—only data about the role of cognitive resources in utilitarian judgment. Though it is reasonable to think that a process that requires more resources will be experienced as more effortful, we should keep in mind that this conclusion is based on an inferential leap.

Tuesday, February 9, 2021

Neanderthals And Humans Were at War For Over 100,000 Years, Evidence Shows

Nicholas Longrich
The Conversation
Originally posted 3 Nov 20

Here is an excerpt:

Why else would we take so long to leave Africa? Not because the environment was hostile but because Neanderthals were already thriving in Europe and Asia.

It's exceedingly unlikely that modern humans met the Neanderthals and decided to just live and let live. If nothing else, population growth inevitably forces humans to acquire more land, to ensure sufficient territory to hunt and forage food for their children.

But an aggressive military strategy is also good evolutionary strategy.

Instead, for thousands of years, we must have tested their fighters, and for thousands of years, we kept losing. In weapons, tactics, strategy, we were fairly evenly matched.

Neanderthals probably had tactical and strategic advantages. They'd occupied the Middle East for millennia, doubtless gaining intimate knowledge of the terrain, the seasons, how to live off the native plants and animals.

In battle, their massive, muscular builds must have made them devastating fighters in close-quarters combat. Their huge eyes likely gave Neanderthals superior low-light vision, letting them manoeuvre in the dark for ambushes and dawn raids.

Sapiens victorious

Finally, the stalemate broke, and the tide shifted. We don't know why. It's possible the invention of superior ranged weapons – bows, spear-throwers, throwing clubs – let lightly-built Homo sapiens harass the stocky Neanderthals from a distance using hit-and-run tactics.

Or perhaps better hunting and gathering techniques let sapiens feed bigger tribes, creating numerical superiority in battle.

Even after primitive Homo sapiens broke out of Africa 200,000 years ago, it took over 150,000 years to conquer Neanderthal lands. In Israel and Greece, archaic Homo sapiens took ground only to fall back against Neanderthal counteroffensives, before a final offensive by modern Homo sapiens, starting 125,000 years ago, eliminated them.

Monday, February 8, 2021

The Origins and Psychology of Human Cooperation

Joseph Henrich and Michael Muthukrishna
Annual Review of Psychology 2021 72:1, 207-240


Humans are an ultrasocial species. This sociality, however, cannot be fully explained by the canonical approaches found in evolutionary biology, psychology, or economics. Understanding our unique social psychology requires accounting not only for the breadth and intensity of human cooperation but also for the variation found across societies, over history, and among behavioral domains. Here, we introduce an expanded evolutionary approach that considers how genetic and cultural evolution, and their interaction, may have shaped both the reliably developing features of our minds and the well-documented differences in cultural psychologies around the globe. We review the major evolutionary mechanisms that have been proposed to explain human cooperation, including kinship, reciprocity, reputation, signaling, and punishment; we discuss key culture–gene coevolutionary hypotheses, such as those surrounding self-domestication and norm psychology; and we consider the role of religions and marriage systems. Empirically, we synthesize experimental and observational evidence from studies of children and adults from diverse societies with research among nonhuman primates.

From the Discussion

Understanding the origins and psychology of human cooperation is an exciting and rapidly developing enterprise. Those interested in engaging with this grand question should consider three elements of this endeavor: (1) theoretical frameworks, (2) diverse methods, and (3) history. To the first, the extended evolutionary framework we described comes with a rich body of theories and hypotheses as well as tools for developing new theories, about both human nature and cultural psychology. We encourage psychologists to take the formal theory seriously and learn to read the primary literature (McElreath & Boyd 2007). Second, the nature of human cooperation demands cross-cultural, comparative and developmental approaches that integrate experiments, observation, and ethnography. Haphazard cross-country cyber sampling is less efficient than systematic tests with populations based on theoretical predictions. Finally, the evidence makes it clear that as norms evolve over time, so does our psychology; historical differences can tell us a lot about contemporary psychological patterns. This means that researchers need to think about psychology from a historical perspective and begin to devise ways to bring history and psychology together (Muthukrishna et al. 2020).

Sunday, February 7, 2021

How people decide what they want to know

Sharot, T., Sunstein, C.R. 
Nat Hum Behav 4, 14–19 (2020). 


Immense amounts of information are now accessible to people, including information that bears on their past, present and future. An important research challenge is to determine how people decide to seek or avoid information. Here we propose a framework of information-seeking that aims to integrate the diverse motives that drive information-seeking and its avoidance. Our framework rests on the idea that information can alter people’s action, affect and cognition in both positive and negative ways. The suggestion is that people assess these influences and integrate them into a calculation of the value of information that leads to information-seeking or avoidance. The theory offers a framework for characterizing and quantifying individual differences in information-seeking, which we hypothesize may also be diagnostic of mental health. We consider biases that can lead to both insufficient and excessive information-seeking. We also discuss how the framework can help government agencies to assess the welfare effects of mandatory information disclosure.


It is increasingly possible for people to obtain information that bears on their future prospects, in terms of health, finance and even romance. It is also increasingly possible for them to obtain information about the past, the present and the future, whether or not that information bears on their personal lives. In principle, people’s decisions about whether to seek or avoid information should depend on some integration of instrumental value, hedonic value and cognitive value. But various biases can lead to both insufficient and excessive information-seeking. Individual differences in information-seeking may reflect different levels of susceptibility to those biases, as well as varying emphasis on instrumental, hedonic and cognitive utility. Such differences may also be diagnostic of mental health.

Whether positive or negative, the value of information bears directly on significant decisions of government agencies, which are often charged with calculating the welfare effects of mandatory disclosure and which have long struggled with that task. Our hope is that the integrative framework of information-seeking motives offered here will facilitate these goals and promote future research in this important domain.

Saturday, February 6, 2021

The Cognitive Neuroscience of Moral Judgment and Decision-Making

Joshua Greene & Liane Young
The Cognitive Neurosciences 
(p. 1013–1023). MIT Press.


This article reviews recent history and advances in the cognitive neuroscience of moral judgment and behavior. This field is conceived not as the study of a distinct set of neural functions but as an attempt to understand how the brain’s core neural systems coordinate to solve problems that we define, for nonneuroscientific reasons, as “moral.” At the heart of moral cognition are representations of value and the ways in which they are encoded, acquired, and modulated. Research dissociates distinct value representations—often within a dual-process framework—and explores the ways in which representations of value are informed or modulated by knowledge of mental states, explicit decision rules, the imagination of distal events, and social cues. Studies illustrating these themes examine the brains of morally pathological individuals, the responses of healthy brains to prototypically immoral actions, and the brain’s responses to more complex philosophical and economic dilemmas.

Here is an excerpt:

Cooperative Brains

Research on altruism and cooperation, though often considered apart from “morality,” could not be more central to our understanding of the moral brain. The most basic question about the cognitive neuroscience of altruism and cooperation is this: What neural processes enable and motivate people to be “nice”—that is, to pay costs to benefit others?

Consistent with our evolving story, the value of helping others, in both unidirectional altruism and bidirectional cooperation, is represented in the frontostriatal pathway and modulated by both economic incentives and social signals (Declerck, Boone, & Emonds, 2013). Activity in this pathway tracks the value of charitable contributions (Moll et al., 2006) and of sharing resources with other individuals (Zaki & Mitchell, 2011). Likewise, it encodes the discounted value of rewards gained at the expense of others (Crockett, Siegel, Kurth-Nelson, Dayan, & Dolan, 2017). Here, signals from the DLPFC appear to modulate striatal signals, resulting in more altruistic behavior. The same pattern is observed in the case of increased altruism following compassion training (Weng et al., 2013). Striatal signals, likewise, track the value of punishing transgressors (Crockett et al., 2013; de Quervain et al., 2004; Singer et al., 2006). And, as above, the DMN appears to have a hand in altruism: TPJ volume (Morishima, Schunk, Bruhin, Ruff, & Fehr, 2012) and medial PFC activity (Waytz, Zaki, & Mitchell, 2012) both predict altruistic behavior, with more dorsal mPFC regions representing the value of rewards for others (Apps & Ramnani, 2014).

Friday, February 5, 2021

Shaking Things Up: Unintended Consequences of Firm Acquisitions on Racial and Gender Inequality

Letian Zhang
Harvard Business School
Originally published 23 Jan 20


This paper develops a theory of how disruptive events shape organizational inequality. Despite various organizational efforts, racial and gender inequality in the workplace remains high. I theorize that because the persistence of such inequality is reinforced by organizational structures and practices, disruptive events that shake up old hierarchies and break down routines and culture should give racial minority and women workers more opportunities to advance. To examine this theory, I explore a critical but seldom analyzed organizational event in the inequality literature - mergers and acquisitions. I propose that post-acquisition restructuring could offer an opportunity for firms to advance diversity initiatives and to objectively re-evaluate workers. Using a difference-in-differences design on a nationally representative sample covering 37,343 acquisitions from 1971 to 2015, I find that although acquisitions lead to occupational reconfiguration that favors higher-skilled workers, they also reduce racial and gender inequality. In particular, I find improved managerial representation of racial minorities and women and reduced racial and gender segregation in the acquired workplace. This post-acquisition effect is stronger when (a) the acquiring firm values race and gender equality more and (b) the acquired workplace had higher racial and gender inequality. These findings suggest that disruptive events could produce an unintended consequence of increasing racial and gender equality in the workplace.

Managerial Implications

From a managerial perspective, disruptive events offer an opportunity to advance diversity or equality-related goals that might be difficult to pursue during normal times. As my analyses show, acquisition amplifies the race and gender differences between those acquiring firms that value diversity and those that do not. For managers concerned about race and gender issues, acquisitions and other disruptive events might serve as suitable moments to improve race and gender gaps effectively and at a relatively lower cost. Thus, despite the disruption and uncertainty during these periods, managers should see disruptive events as prime opportunities to make positive changes.

Thursday, February 4, 2021

Robust inference of positive selection on regulatory sequences in the human brain

J. Liu & M. Robinson-Rechavi
Science Advances  27 Nov 2020:
Vol. 6, no. 48, eabc9863


A longstanding hypothesis is that divergence between humans and chimpanzees might have been driven more by regulatory-level adaptations than by protein sequence adaptations. This has especially been suggested for regulatory adaptations in the evolution of the human brain. We present a new method to detect positive selection on transcription factor binding sites on the basis of measuring predicted affinity change with a machine learning model of binding. Unlike other methods, this approach requires neither defining a priori neutral sites nor detecting accelerated evolution, thus removing major sources of bias. We scanned the signals of positive selection for CTCF binding sites in 29 human and 11 mouse tissues or cell types. We found that human brain–related cell types have the highest proportion of positive selection. This result is consistent with the view that adaptive evolution of gene regulation has played an important role in the evolution of the human brain.
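The core quantity in the method above is a predicted change in binding affinity between two alleles of a binding site. The authors use a machine learning model of CTCF binding; as a hedged illustration of the idea only, the same computation can be sketched with a simple position weight matrix standing in for their model. All names and matrix values below are invented for illustration, not real CTCF data.

```python
# Hypothetical position weight matrix (log-odds scores) for a 4-bp motif.
# Keys: bases; each list gives the score of that base at motif positions 0-3.
# These numbers are illustrative only.
PWM = {
    'A': [1.2, -0.5, -1.0, 0.3],
    'C': [-0.8, 1.5, 0.2, -0.4],
    'G': [0.1, -0.3, 1.4, -0.9],
    'T': [-0.6, -0.2, -0.7, 1.1],
}

def affinity_score(seq):
    """Sum the log-odds score of each base at its motif position."""
    return sum(PWM[base][i] for i, base in enumerate(seq))

def predicted_affinity_change(ancestral, derived):
    """Positive values mean the derived (e.g. human-specific) allele
    is predicted to bind more strongly than the ancestral one."""
    return affinity_score(derived) - affinity_score(ancestral)

# e.g. a single substitution at the last position (G -> T):
delta = predicted_affinity_change("ACGG", "ACGT")
```

In the paper's framework, an excess of sites with consistently large predicted affinity gains (or losses) along the human lineage is what flags candidate targets of positive selection; the sketch only shows how a per-site affinity change would be scored.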


With only 1 percent difference, the human and chimpanzee protein-coding genomes are remarkably similar. Understanding the biological features that make us human is part of a fascinating and intensely debated line of research. Researchers have developed a new approach to pinpoint adaptive human-specific changes in the way genes are regulated in the brain.

Wednesday, February 3, 2021

Research on Non-verbal Signs of Lies and Deceit: A Blind Alley

T. Brennen & S. Magnussen
Front. Psychol., 14 December 2020


Research on the detection of lies and deceit has a prominent place in the field of psychology and law with a substantial research literature published in this field of inquiry during the last five to six decades (Vrij, 2000, 2008; Vrij et al., 2019). There are good reasons for this interest in lie detection. We are all everyday liars, some of us more prolific than others, we lie in personal and professional relationships (Serota et al., 2010; Halevy et al., 2014; Serota and Levine, 2015; Verigin et al., 2019), and lying in public by politicians and other public figures has a long and continuing history (Peters, 2015). However, despite the personal problems that serious everyday lies may cause and the human tragedies political lies may cause, it is lying in court that appears to have been the principal initial motivation for the scientific interest in lie detection.

Lying in court is a threat to fair trials and the rule of law. Lying witnesses may lead to the exoneration of guilty persons or to the conviction of innocent ones. In the US it is well-documented that innocent people have been convicted because witnesses were lying in court (Garrett, 2010, 2011; www.innocenceproject.com). In evaluating the reliability and the truthfulness of a testimony, the court considers other evidence presented to the court, the known facts about the case and the testimonies by other witnesses. Inconsistency with the physical evidence or the testimonies of other witnesses might indicate that the witness is untruthful, or it may simply reflect the fact that the witness has observed, interpreted, and later remembered the critical events incorrectly—normal human errors all too well known in the eyewitness literature (Loftus, 2005; Wells and Loftus, 2013; Howe and Knott, 2015).

(as it ends)

Is the rational course simply to drop this line of research? We believe it is. The creative studies carried out during the last few decades have been important in showing that psychological folklore, the ideas we share about behavioral signals of lies and deceit are not correct. This debunking function of science is extremely important. But we have now sufficient evidence that there are no specific non-verbal behavioral signals that accompany lying or deceitful behavior. We can safely recommend that courts disregard such behavioral signals when appraising the credibility of victims, witnesses, and suspected offenders. For psychology and law researchers it may be time to move on.

Monday, February 1, 2021

Does civility pay?

Porath, C. L., & Gerbasi, A. (2015). 
Organizational Dynamics, 44(4), 281–286.


Being nice may bring you friends, but does it help or harm you in your career? After all, research by Timothy Judge and colleagues shows a negative relationship between a person’s agreeableness and income. Research by Amy Cuddy has shown that warm people are perceived to be less competent, which is likely to have negative career implications. People who buck social rules by treating people rudely and getting away with it tend to garner power. If you are civil you may be perceived as weak, and ignored or taken advantage of. Being kind or considerate may be hazardous to your self-esteem, goal achievement, influence, career, and income. Over the last two decades we have studied the costs of incivility, and the benefits of civility. We’ve polled tens of thousands of workers across industries around the world about how they’re treated on the job and the effects. The costs of incivility are enormous. Organizations and their employees would be much more likely to thrive if employees treated each other respectfully. Many see civility as an investment and are skeptical about the potential returns. Porath surveyed hundreds of workers across organizations spanning more than 17 industries and found that a quarter believe that they will be less leader-like, and nearly 40 percent are afraid that they’ll be taken advantage of if they’re nice at work. Nearly half think that it is better to flex your muscles to garner power. In network studies of a biotechnology firm and international MBAs, along with surveys and experiments, we address whether civility pays. In this article we discuss our findings and propose recommendations for leaders and organizations.



Civility pays. It is a potent behavior you want to master to enhance your influence and effectiveness. It is unique in the sense that it elicits both warmth and competence, the two characteristics that account for over 90 percent of positive impressions. By being respectful you enhance, not deter, career opportunities and effectiveness.