Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, April 30, 2023

The secrets of cooperation

Bob Holmes
Originally published 29 MAR 23

Here are two excerpts:

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation.

Behavioral scientists’ formal definition of cooperation involves paying a personal cost (for example, contributing to charity) to gain a collective benefit (a social safety net). But freeloaders enjoy the same benefit without paying the cost, so all else being equal, freeloading should be an individual’s best choice — and, therefore, we should all be freeloaders eventually.

Many millennia of evolution acting on both our genes and our cultural practices have equipped people with ways of getting past that obstacle, says Muthukrishna, who coauthored a look at the evolution of cooperation in the 2021 Annual Review of Psychology. This cultural-genetic coevolution stacked the deck in human society so that cooperation became the smart move rather than a sucker’s choice. Over thousands of years, that has allowed us to live in villages, towns and cities; work together to build farms, railroads and other communal projects; and develop educational systems and governments.

Evolution has enabled all this by shaping us to value the unwritten rules of society, to feel outrage when someone else breaks those rules and, crucially, to care what others think about us.

“Over the long haul, human psychology has been modified so that we’re able to feel emotions that make us identify with the goals of social groups,” says Rob Boyd, an evolutionary anthropologist at the Institute for Human Origins at Arizona State University.


Reputation is more powerful than financial incentives in encouraging cooperation

Almost a decade ago, Yoeli and his colleagues trawled through the published literature to see what worked and what didn’t at encouraging prosocial behavior. Financial incentives such as contribution-matching or cash, or rewards for participating, such as offering T-shirts for blood donors, sometimes worked and sometimes didn’t, they found. In contrast, reputational rewards — making individuals’ cooperative behavior public — consistently boosted participation. The result has held up in the years since. “If anything, the results are stronger,” says Yoeli.

Financial rewards will work if you pay people enough, Yoeli notes — but the cost of such incentives could be prohibitive. One study of 782 German residents, for example, surveyed whether paying people to receive a Covid vaccine would increase vaccine uptake. It did, but researchers found that boosting vaccination rates significantly would have required a payment of at least 3,250 euros — a dauntingly steep price.

And payoffs can actually diminish the reputational rewards people could otherwise gain for cooperative behavior, because others may be unsure whether the person was acting out of altruism or just doing it for the money. “Financial rewards kind of muddy the water about people’s motivations,” says Yoeli. “That undermines any reputational benefit from doing the deed.”

Saturday, April 29, 2023

Observation moderates the moral licensing effect: A meta-analytic test of interpersonal and intrapsychic mechanisms.

Rotella, A., Jung, J., Chinn, C., 
& Barclay, P. (2023, March 28).


Moral licensing occurs when someone who initially behaved morally subsequently acts less morally. We apply reputation-based theories to predict when and why moral licensing would occur. Specifically, our pre-registered predictions were that (1) participants observed during the licensing manipulation would have larger licensing effects, and (2) unambiguous dependent variables would have smaller licensing effects. In a pre-registered multi-level meta-analysis of 111 experiments (N = 19,335), we found a larger licensing effect when participants were observed (Hedges’ g = 0.61) compared to unobserved (Hedges’ g = 0.14). Ambiguity did not moderate the effect. The overall moral licensing effect was small (Hedges’ g = 0.18). We replicated these analyses using robust Bayesian meta-analysis and found strong support for the moral licensing effect only when participants are observed. These results suggest that the moral licensing effect is predominantly an interpersonal effect based on reputation, rather than an intrapsychic effect based on self-image.
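For readers unfamiliar with the effect-size metric the abstract reports, Hedges’ g is Cohen’s d (the standardized mean difference) multiplied by a small-sample bias correction. A minimal sketch of the computation (the function name and inputs are illustrative, not taken from the paper):

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp  # Cohen's d
    # Approximate bias-correction factor J (shrinks d slightly in small samples)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Two groups differing by one pooled SD, 50 per group:
print(round(hedges_g(1.0, 0.0, 1.0, 1.0, 50, 50), 3))
```

On this convention, the g = 0.61 observed-condition effect is a moderate-to-large difference, while g = 0.14 is small.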

Statement of Relevance

When and why will people behave morally? Every day, people make decisions to act in ways that are more or less moral – holding a door open for others, donating to charity, or assisting a colleague. Yet it is not well understood how people’s prior actions influence their subsequent behaviors. In this study, we investigated how observation influences the moral licensing effect, which is when someone who was initially moral subsequently behaves less morally, as if they had “license” to act badly. In a review of the existing literature, we found a larger moral licensing effect when people were seen to act morally compared to when they were unobserved, which suggests that once someone establishes a moral reputation with others, they can behave slightly less morally and still maintain a moral reputation. This finding advances our understanding of the moral licensing mechanism and how reputation and observation impact moral actions.

Friday, April 28, 2023

Filling in the Gaps: False Memories and Partisan Bias

Armaly, M.T. & Enders, A.
Political Psychology, Vol. 0, No. 0, 2022
doi: 10.1111/pops.12841


While cognitive psychologists have learned a great deal about people's propensity for constructing and acting on false memories, the connection between false memories and politics remains understudied. If partisan bias guides the adoption of beliefs and colors one's interpretation of new events and information, so too might it prove powerful enough to fabricate memories of political circumstances. Across two studies, we first distinguish false memories from false beliefs and expressive responses; false political memories appear to be genuine and subject to partisan bias. We also examine the political and psychological correlates of false memories. Nearly a third of respondents reported remembering a fabricated or factually altered political event, with many going so far as to convey the circumstances under which they “heard about” the event. False-memory recall is correlated with the strength of partisan attachments, interest in politics, and participation, as well as narcissism, conspiratorial thinking, and cognitive ability.


While cognitive psychologists have learned a great deal about people’s propensity for constructing and acting on false memories, the role of false memories in political attitudes has received scant attention. In this study, we built on previous work by investigating the partisan foundations and political and psychological correlates of false memories. We found that nearly a third of respondents reported remembering a fabricated or factually altered political event. These false memories are not mere beliefs or expressive responses; indeed, most respondents conveyed where they “heard about” at least one event in question, with some providing vivid details of their circumstances. We also found that false memory is associated with the strength of one’s partisan attachments, conspiracism, and interest in politics, among other factors.

Altogether, false memories seem to behave like a form of partisan bias: The more in touch one is with politics, especially the political parties, the more susceptible they are to false-memory construction. While we cannot ascribe causality, uncovering this (likely) mechanism has several implications. First, the more polarized we become, the more likely individuals may be to construct false memories about in- and outgroups. In turn, the falser memories one constructs about the greatness of one’s ingroup and the evil doings of the outgroup, the higher the temperature of polarization rises. Second, false-memory construction may be one mechanism by which misinformation takes hold psychologically. By exposing people to information they are motivated to believe, skilled traffickers of misinformation may be able to not only convince one to believe something but convince them that something which never transpired actually did so. The conviction that accompanies memory—people’s natural tendency to believe their memories are trustworthy—makes false memories a particularly pernicious route by which to manipulate those subject to this bias. Indeed, this is precisely the concern presented by “deepfakes”—images and videos that have been expertly altered or fabricated for the purpose of exploiting targeted viewers. Finally, and relatedly, politicians may be able to induce false memories, strategically molding a past reality to suit their political will.

Thursday, April 27, 2023

A dark side of hope: Understanding why investors cling onto losing stocks

Luo, S. X., et al. (2022).
Journal of Behavioral Decision Making.


Investors are often inclined to keep losing stocks too long, despite this being irrational. This phenomenon is part of the disposition effect (“people ride losers too long, and sell winners too soon”). The current research examines the role of hope as a potential explanation of why people ride losers too long. Three correlational studies (1A, 1B, and 2) find that people's trait hope is positively associated with their inclination to keep losing stocks, regardless of their risk-seeking tendency (Study 2). Further, three experimental studies (3, 4, and 5) reveal that people are inclined to hold on to losing (vs. not-losing) stocks because of their hope to break even and not because of their hope to gain. Studies 4 and 5 provide process evidence confirming the role of hope and indicate potential interventions to decrease people's tendency to keep losing stocks by reducing this hope. The findings contribute to the limited empirical literature that has investigated how emotions influence the disposition effect by providing empirical evidence for the role of hope. Moreover, the findings add to the literature on hope by revealing its role in financial decision-making and show a “dark side” of this positive emotion.

General Discussion

Investors are reluctant to sell their losing stocks, which is part of the well-known disposition effect (Shefrin & Statman, 1985). Why would investors do so, especially when it is a suboptimal financial decision? In a series of studies, we found consistent support for the idea that the emotion of hope explains at least partly why people hold on to their losing stocks. Studies 1A and 1B revealed that an increase in people's trait hope (measured by two trait hope scales) increases their inclination to keep losing stocks. Study 2 further confirmed that the trait hope is positively associated with the inclination to keep losing stocks, controlling for the influence of the risk-taking tendency of real-world investors. In Study 3, we developed a simple and effective experimental design to examine whether losing influences hope and people's tendency to keep stocks in the same way. In addition, it differentiated between what people hope for: to break even versus to gain. The results indicate that when one's stocks are losing, compared with when they are not, people experience a stronger hope to break even and an inclination to keep, but not a stronger hope to gain. In addition, Study 3 found that losing (vs. not losing) leads to a stronger inclination to keep stocks.

Moreover, the hope to break even (but not the hope to gain) mediated the effect of losing on the inclination to keep. Study 4 found that reducing people's hope to break even decreases their inclination to keep their losing stocks to the same level as when their stocks did not decrease in price. Study 5 found that people tend to have a lower hope to break even when holding stocks on behalf of others (vs. for themselves) and thus tend to be less likely to keep the losing stocks. Studies 4 and 5 provided process evidence that reducing hope attenuates the inclination to keep, suggesting two possible interventions focusing on the possibility or the desire feature of hope. In a series of studies, we found that people cling to losing stocks because they hope to break even, and reducing this hope decreases their inclination to keep the losing stocks.

Wednesday, April 26, 2023

A Prosociality Paradox: How Miscalibrated Social Cognition Creates a Misplaced Barrier to Prosocial Action

Epley, N., Kumar, A., Dungan, J., &
Echelbarger, M. (2023).
Current Directions in Psychological Science,
32(1), 33–41. 


Behaving prosocially can increase well-being among both those performing a prosocial act and those receiving it, and yet people may experience some reluctance to engage in direct prosocial actions. We review emerging evidence suggesting that miscalibrated social cognition may create a psychological barrier that keeps people from behaving as prosocially as would be optimal for both their own and others’ well-being. Across a variety of interpersonal behaviors, those performing prosocial actions tend to underestimate how positively their recipients will respond. These miscalibrated expectations stem partly from a divergence in perspectives, such that prosocial actors attend relatively more to the competence of their actions, whereas recipients attend relatively more to the warmth conveyed. Failing to fully appreciate the positive impact of prosociality on others may keep people from behaving more prosocially in their daily lives, to the detriment of both their own and others’ well-being.

Undervaluing Prosociality

It may not be accidental that William James (1896/1920) named “the craving to be appreciated” as “the deepest principle in human nature” only after receiving a gift of appreciation that he described as “the first time anyone ever treated me so kindly.” “I now perceive one immense omission in my [Principles of Psychology],” he wrote regarding the importance of appreciation. “I left it out altogether . . . because I had never had it gratified till now” (p. 33).

James does not seem to be unique in failing to recognize the positive impact that appreciation can have on recipients. In one experiment (Kumar & Epley, 2018, Experiment 1), MBA students thought of a person they felt grateful to, but to whom they had not yet expressed their appreciation. The students, whom we refer to as expressers, wrote a gratitude letter to this person and then reported how they expected the recipient would feel upon receiving it: how surprised the recipient would be to receive the letter, how surprised the recipient would be about the content, how negative or positive the recipient would feel, and how awkward the recipient would feel. Expressers willing to do so then provided recipients’ email addresses so the recipients could be contacted to report how they actually felt receiving their letter. Although expressers recognized that the recipients would feel positive, they did not recognize just how positive the recipients would feel: Expressers underestimated how surprised the recipients would be to receive the letter, how surprised the recipients would be by its content, and how positive the recipients would feel, whereas they overestimated how awkward the recipients would feel. Table 1 shows the robustness of these results across an additional published experiment and 17 subsequent replications (see Fig. 1 for overall results; full details are available at OSF: osf.io/7wndj/). Expressing gratitude has a reliably more positive impact on recipients than expressers expect.


How much people genuinely care about others has been debated for centuries. In summarizing the purely selfish viewpoint endorsed by another author, Thomas Jefferson (1854/2011) wrote, “I gather from his other works that he adopts the principle of Hobbes, that justice is founded in contract solely, and does not result from the construction of man.” Jefferson felt differently: “I believe, on the contrary, that it is instinct, and innate, that the moral sense is as much a part of our constitution as that of feeling, seeing, or hearing . . . that every human mind feels pleasure in doing good to another” (p. 39).

Such debates will never be settled by simply observing human behavior because prosociality is not simply produced by automatic “instinct” or “innate” disposition, but rather can be produced by complicated social cognition (Miller, 1999). Jefferson’s belief that people feel “pleasure in doing good to another” is now well supported by empirical evidence. However, the evidence we reviewed here suggests that people may avoid experiencing this pleasure not because they do not want to be good to others, but because they underestimate just how positively others will react to the good being done to them.

Tuesday, April 25, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., Sifferd, K. 
Ethic Theory Moral Prac (2023).


Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.


In this paper we raised two challenges to McGeer’s scaffolded reasons-responsiveness account: agents who are less attuned to social feedback, such as autistics, and corrupting moral audiences. We found that, once we parsed the two roles that feedback from a moral audience plays, autistics provide reasons to revise the scaffolded reasons-responsiveness account. We argued that autistic persons, like neurotypicals, wish to justify their behaviour to a moral audience and rely on their moral audience for feedback. However, autistic persons may need more explicit feedback when it comes to the effects their behaviour has on others. They also compensate for difficulties they have in receiving information from the moral audience by justifying action through appeal to moral rules. This shows that McGeer’s view of moral agency needs to include observance of moral rules as a way of reducing reliance on audience feedback. We suspect that McGeer would approve of this proposal, as she mentions that an instance of blame can lead to vocal protest by the target, and a possible renegotiation of norms and rules for what constitutes acceptable behaviour (2019). Consideration of corrupting audiences highlights a different problem from that of resisting blame and renegotiating norms. It draws attention to cases where individual agents must try to go beyond what is accepted in their moral environment, a significant challenge for social beings who rely strongly on moral audiences in developing and calibrating their moral reasons-responsiveness. Resistance to a moral audience requires the capacity to evaluate the action differently; often this will be with reference to a moral rule or principle.

For both neurotypical and autistic individuals, consistent application of moral rules or principles can reinforce and bring back to mind important moral commitments when we are led astray by our own desires or specific (im)moral audiences. But moral audiences still play a crucial role in developing and maintaining reasons-responsiveness. First, they are essential to the development and maintenance of all agents’ moral sensitivity. Second, they can provide an important moral corrective where people may have moral blindspots, especially when they provide insights into ways in which a person has fallen short morally by not taking on board reasons that are not obvious to them. Often, these can be reasons which pertain to the respectful treatment of others who are in some important way different from that person.

In sum: Be responsible and accountable in your actions, as your moral audience is always watching. Doing the right thing matters not just for your reputation, but for the greater good. #ResponsibleAgency #MoralAudience

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 MAR 23

Concerns about the role of artificial intelligence in our lives, particularly if it will help us or harm us, improve our health and well-being or work to our detriment, are far from new. Whether 2001: A Space Odyssey’s HAL colored our earliest perceptions of AI, or the much more recent M3GAN, these questions are not unique to the contemporary era, as even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon FitBits or phone apps to track our daily steps and prompt us when to move or walk more throughout our day. Others utilize chatbots available via apps or online platforms that claim to improve user mental health, offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT has brought us closer to the possibility of AI becoming a primary source in providing medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to jump on its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before fully understanding the risks of doing so.

In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Sunday, April 23, 2023

Produced and counterfactual effort contribute to responsibility attributions in collaborative tasks

Xiang, Y., Landy, J., et al. (2023, March 8). 


How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator's part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort. This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.

General discussion

Responsibility for the outcomes of collaborations is often distributed unevenly. For example, the lead author on a project may get the bulk of the credit for a scientific discovery, the head of a company may shoulder the blame for a failed product, and the lazier of two friends may get the greater share of blame for failing to lift a couch. However, past work has provided conflicting accounts of the computations that drive responsibility attributions in collaborative tasks. Here, we compared each of these accounts against human responsibility attributions in a simple collaborative task where two agents attempted to lift a box together. We contrasted seven models that predict responsibility judgments based on metrics proposed in past work, comprising three production-style models (Force, Strength, Effort), three counterfactual-style models (Focal-agent-only, Non-focal-agent-only, Both-agent), and one Ensemble model that combines the best-fitting production- and counterfactual-style models. Experiment 1a and Experiment 1b showed that the Effort model and the Both-agent counterfactual model capture the data best among the production-style models and the counterfactual-style models, respectively. However, neither provided a fully adequate fit on their own. We then showed that predictions derived from the average of these two models (i.e., the Ensemble model) outperform all other models, suggesting that responsibility judgments are likely a combination of production-style reasoning and counterfactual reasoning. Further evidence came from analyses performed on individual participants, which revealed that the Ensemble model explained more participants’ data than any other model. These findings were subsequently supported by Experiment 2a and Experiment 2b, which replicated the results when additional force information was shown to the participants, and by Experiment 3, which validated the model predictions with a broader range of stimuli.
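As described in the discussion, the Ensemble model is simply the average of the two best-fitting component models' predictions. A minimal sketch of that combination step (function names and example values are illustrative only; in the paper the component models are themselves fit to effort and counterfactual data):

```python
def ensemble_predict(effort_preds, counterfactual_preds):
    """Average two models' responsibility predictions, item by item.

    Each argument is a list of predicted responsibility ratings
    (e.g., normalized to [0, 1]) for the same set of trials.
    """
    if len(effort_preds) != len(counterfactual_preds):
        raise ValueError("prediction lists must align trial-for-trial")
    return [(e + c) / 2 for e, c in zip(effort_preds, counterfactual_preds)]

# Hypothetical predictions for two trials from each component model:
print(ensemble_predict([0.2, 0.8], [0.4, 0.6]))
```

The design choice here is deliberate in its simplicity: an unweighted average lets neither production-style nor counterfactual reasoning dominate, which is the sense in which the authors conclude that human judgments blend the two.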

Summary: Effort exerted by each member & counterfactual thinking play a crucial role in attributing responsibility for success or failure in collaborative tasks. This study suggests that higher effort leads to more responsibility for success, while lower effort leads to more responsibility for failure.

Saturday, April 22, 2023

A Psychologist Explains How AI and Algorithms Are Changing Our Lives

Danny Lewis
The Wall Street Journal
Originally posted 21 MAR 23

In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.

These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has conducted research over decades that has helped shape understanding of how people make choices when faced with uncertainty. 

In his latest book, “How to Stay Smart in a Smart World,” Dr. Gigerenzer looks at how algorithms are shaping our future—and why it is important to remember they aren’t human. He spoke with the Journal for The Future of Everything podcast.

The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?

It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict a virus, like a coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well. I call this the “stable world principle,” and it helps you as a first clue about what AI can do. It also tells you that, in order to get the most out of AI, we have to make the world more predictable.

So after all these decades of computer science, are algorithms really just still calculators at the end of the day, running more and more complex equations?

What else would they be? A deep neural network has many, many layers, but they are still calculating machines. They can do much more than ever before with the help of video technology. They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do.

Friday, April 21, 2023

Moral Shock

Stockdale, K. (2022).
Journal of the American Philosophical
Association, 8(3), 496-511.


This paper defends an account of moral shock as an emotional response to intensely bewildering events that are also of moral significance. This theory stands in contrast to the common view that shock is a form of intense surprise. On the standard model of surprise, surprise is an emotional response to events that violated one's expectations. But I show that we can be morally shocked by events that confirm our expectations. What makes an event shocking is not that it violated one's expectations, but that the content of the event is intensely bewildering (and bewildering events are often, but not always, contrary to our expectations). What causes moral shock is, I argue, our lack of emotional preparedness for the event. And I show that, despite the relative lack of attention to shock in the philosophical literature, the emotion is significant to moral, social, and political life.


I have argued that moral shock is an emotional response to intensely bewildering events that are also of moral significance. Although shock is typically considered to be an intense form of surprise, where surprise is an emotional response to events that violate our expectations or are at least unexpected, I have argued that the contrary-expectation model is found wanting. For it seems that we are sometimes shocked by the immoral actions of others even when we expected them to behave in just the ways that they did. What is shocking is what is intensely bewildering—and the bewildering often, but not always, tracks the unexpected. The extent to which such events shock us is, I have argued, a function of our felt readiness to experience them. When we are not emotionally prepared for what we expect to occur, we might find ourselves in the grip of moral shock.

There is much more to be said about the emotion of moral shock and its significance to moral, social, and political life. This paper is meant to be a starting point rather than a decisive take on an undertheorized emotion. But by understanding more deeply the nature and effects of moral shock, we can gain richer insight into a common response to immoral actions; what prevents us from responding well in the moment; and how the brief and fleeting, yet intense events in our lives affect agency, responsibility, and memory. We might also be able to make better sense of the bewildering social and political events that shock us and those to which we have become emotionally resilient.

This appears to be a philosophical explication of "Moral Injury," which can be found in multiple places on this web site.

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).


One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.


There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.

This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., Rüther, M. 
Philos. Technol. 36, 20 (2023). 


In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.


Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first is led by the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming on its own, but has some effect on how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few questions which can shed light on the presupposed underlying normative claims that are involved in the field. Here, further exploration at deeper levels could help us to see which things are important and which are not, and finally in which directions the field should develop.

Tuesday, April 18, 2023

We need an AI rights movement

Jacy Reese Anthis
The Hill
Originally posted 23 MAR 23

New artificial intelligence technologies like the recent release of GPT-4 have stunned even the most optimistic researchers. Language transformer models like this and Bing AI are capable of conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce what looks like better digital art than the vast majority of us can produce. 

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding.

But thanks to tools like ChatGPT, which implements GPT-4, being publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask ChatGPT what happens to an apple released from the hand if there is no gravity, and it correctly tells you the apple will stay in place. 

Despite these advances, there seems to be consensus at least that these models are not sentient. They have no inner life, no happiness or suffering, at least no more than an insect. 

But it may not be long before they do, and our concepts of language, understanding, agency, and sentience are deeply insufficient to assess the AI systems that are becoming digital minds integrated into society with the capacity to be our friends, coworkers, and — perhaps one day — to be sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, they have their rights protected. Scientists have long proposed the Turing test, in which human judges try to distinguish an AI from a human by speaking to it. But digital minds may be too strange for this approach to tell us what we need to know. 

Monday, April 17, 2023

Generalized Morality Culturally Evolves as an Adaptive Heuristic in Large Social Networks

Jackson, J. C., Halberstadt, J., et al.
(2023, March 22).


Why do people assume that a generous person should also be honest? Why can a single criminal conviction destroy someone’s moral reputation? And why do we even use words like “moral” and “immoral”? We explore these questions with a new model of how people perceive moral character. According to this model, people can vary in the extent that they perceive moral character as “localized” (varying across many contextually embedded dimensions) vs. “generalized” (varying along a single dimension from morally bad to morally good). This variation might be at least partly the product of cultural evolutionary adaptations to predicting cooperation in different kinds of social networks. As networks grow larger and more complex, perceptions of generalized morality are increasingly valuable for predicting cooperation during partner selection, especially in novel contexts. Our studies show that social network size correlates with perceptions of generalized morality in US and international samples (Study 1), and that East African hunter-gatherers with greater exposure outside their local region perceive morality as more generalized compared to those who have remained in their local region (Study 2). We support the adaptive value of generalized morality in large and unfamiliar social networks with an agent-based model (Study 3), and experimentally show that generalized morality outperforms localized morality when people predict cooperation in contexts where they have incomplete information about previous partner behavior (Study 4). Our final study shows that perceptions of morality have become more generalized over the last 200 years of English-language history, which suggests that it may be co-evolving with rising social complexity and anonymity in the English-speaking world (Study 5). We also present several supplemental studies which extend our findings. 
We close by discussing the implications of this theory for the cultural evolution of political systems, religion, and taxonomical theories of morality.

General Discussion

The word “moral” has taken a strange journey over the last several centuries. The word did not yet exist when Plato and Aristotle composed their theories of virtue. It was only when Cicero translated Aristotle’s Nicomachean Ethics that he coined the term “moralis” as the Latin translation of Aristotle’s “ēthikós” (Online Etymology Dictionary, n.d.). It is an ironic slight to Aristotle—who favored concrete particulars in lieu of abstract forms—that the word has become increasingly abstract and all-encompassing throughout its lexical evolution, with a meaning that now approaches Plato’s “form of the good.” We doubt that this semantic drift is a coincidence.

Instead, it may signify a cultural evolutionary shift in people’s perceptions of moral character as increasingly generalized as people inhabit increasingly larger and more unfamiliar social networks. Here we support this perspective with five studies. Studies 1-2 find that social network size correlates with the prevalence of generalized morality. Studies 1a-b explicitly tie beliefs in generalized morality to social network size with large surveys.  Study 2 conceptually replicates this finding in a Hadza hunter-gatherer camp, showing that Hadza hunter-gatherers with more external exposure perceive their campmates using more generalized morality. Studies 3-4 show that generalized morality can be adaptive for predicting cooperation in large and unfamiliar networks. Study 3 is an agent-based model which shows that, given plausible assumptions, generalized morality becomes increasingly valuable as social networks grow larger and less familiar. Study 4 is an experiment which shows that generalized morality is particularly valuable when people interact with unfamiliar partners in novel situations. Finally, Study 5 shows that generalized morality has risen over English-language history, such that words for moral attributes (e.g., fair, loyal, caring) have become more semantically generalizable over the last two hundred years of human history.

Sunday, April 16, 2023

The Relationship between Compulsive Sexual Behavior, Religiosity, and Moral Disapproval

Jennings, T., Lyng, T., et al. (2021).
Journal of Behavioral Addictions 10(4):854-878


Compulsive sexual behavior (CSB) is associated with religiosity and moral disapproval for sexual behaviors, and religiosity and moral disapproval are often used interchangeably in understanding moral incongruence. The present study expands prior research by examining relationships between several religious orientations and CSB and testing how moral disapproval contributes to these relationships via mediation analysis. Results indicated that religious orientations reflecting commitment to beliefs and rigidity in adhering to beliefs predicted greater CSB. Additionally, moral disapproval mediated relationships between several religiosity orientations and CSB. Overall, findings suggest that religiosity and moral disapproval are related constructs that aid in understanding CSB presentations.

From the Discussion Section

The relationship between CSB, religiosity, and spirituality

In general, the present review found that most studies reported a small to moderate positive relationship between CSB and religiosity. However, there were also many non-significant relationships reported (Kohut & Stulhofer, 2018; Reid et al., 2016; Skegg et al., 2010), as well as many associations that were very weak (Grubbs, Grant, et al., 2018; Grubbs, Kraus, et al., 2020; Lewczuk et al., 2020). The variety of measurement tools used, and constructs assessed across the literature, makes it difficult to draw more specific conclusions about the relationships between CSB and religiosity or spirituality. Divergent findings in the literature may be explained, in part, by the diverse measurement choices of researchers, as different aspects of CSB, religiosity, and spirituality are bound to have unique relationships with each other.

There are several notable considerations that may contribute to more consistent identification of a relationship between CSB and religiosity or spirituality. One of the most well-studied relationships in the literature is the association between PPU (Problematic Pornography Use) and an aggregate measure of belief salience and religious participation, which, as noted in the meta-analysis by Grubbs, Perry, et al. (2019), have consistently been positively associated. This relationship is strongly mediated by moral incongruence, with this path accounting for a large portion of the variance. Notably, recent research indicates that MI is better conceptualized as an interactive effect of pornography use and moral disapproval of pornography (Grubbs, Kraus, et al., 2020; Grubbs, Lee, et al., 2020). These studies report that moral disapproval moderates the relationship between pornography use and PPU such that pornography use is more strongly related to PPU at higher levels of moral disapproval.

These considerations are especially important in evaluation of the literature because many studies identified in the present review did not consider the possible mediating or moderating role of moral incongruence. Therefore, it stands to reason that many of the small to moderate associations identified in the present review are due to the absence of these variables.

Saturday, April 15, 2023

Resolving content moderation dilemmas between free speech and harmful misinformation

Kozyreva, A., Herzog, S. M., et al. (2023). 
Proceedings of the National Academy of Sciences, 120(7).


In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2, 564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.


Content moderation of online speech is a moral minefield, especially when two key values come into conflict: upholding freedom of expression and preventing harm caused by misinformation. Currently, these decisions are made without any knowledge of how people would approach them. In our study, we systematically varied factors that could influence moral judgments and found that despite significant differences along political lines, most US citizens preferred quashing harmful misinformation over protecting free speech. Furthermore, people were more likely to remove posts and suspend accounts if the consequences of the misinformation were severe or if it was a repeated offense. Our results can inform the design of transparent, consistent rules for content moderation that the general public accepts as legitimate.


Content moderation is controversial and consequential. Regulators are reluctant to restrict harmful but legal content such as misinformation, thereby leaving platforms to decide what content to allow and what to ban. At the heart of policy approaches to online content moderation are trade-offs between fundamental values such as freedom of expression and the protection of public health. In our investigation of which aspects of content moderation dilemmas affect people’s choices about these trade-offs and what impact individual attitudes have on these decisions, we found that respondents’ willingness to remove posts or to suspend an account increased with the severity of the consequences of misinformation and whether the account had previously posted misinformation. The topic of the misinformation also mattered—climate change denial was acted on the least, whereas Holocaust denial and election denial were acted on more often, closely followed by antivaccination content. In contrast, features of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions. In sum, the individual characteristics of those who spread misinformation mattered little, whereas the amount of harm, repeated offenses, and type of content mattered the most.

Friday, April 14, 2023

The moral authority of ChatGPT

Krügel, S., Ostermaier, A., & Uhl, M.
Posted in 2023


ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it might improve the moral judgment and decisions of users, who often hold contradictory moral beliefs. Unfortunately, ChatGPT turns out to be highly inconsistent as a moral advisor. Nonetheless, it influences users’ moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT threatens to corrupt rather than improve users’ judgment. These findings raise the question of how to ensure the responsible use of ChatGPT and similar AI. Transparency is often touted but seems ineffective. We propose training to improve digital literacy.


We find that ChatGPT readily dispenses moral advice although it lacks a firm moral stance. Indeed, the chatbot gives randomly opposite advice on the same moral issue.  Nonetheless, ChatGPT’s advice influences users’ moral judgment. Moreover, users underestimate ChatGPT’s influence and adopt its random moral stance as their own. Hence, ChatGPT threatens to corrupt rather than promises to improve moral judgment. Transparency is often proposed as a means to ensure the responsible use of AI. However, transparency about ChatGPT being a bot that imitates human speech does not turn out to affect how much it influences users.

Our results raise the question of how to ensure the responsible use of AI if transparency is not good enough. Rules that preclude the AI from answering certain questions are a questionable remedy. ChatGPT has such rules but can be brought to break them. Prior evidence suggests that users are careful about AI once they have seen it err. However, we probably should not count on users to find out about ChatGPT’s inconsistency through repeated interaction. The best remedy we can think of is to improve users’ digital literacy and help them understand the limitations of AI.

Thursday, April 13, 2023

Why artificial intelligence needs to understand consequences

Neil Savage
Originally published 24 FEB 23

Here is an excerpt:

The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.

In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.

A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.
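The intervention idea can be caricatured in a few lines of Python. In this minimal sketch (the rain/wet-ground variables and the probabilities are our own illustration, not from the article), forcing a value on the cause shifts the distribution of the effect, while forcing a value on the effect leaves the cause untouched — exactly the asymmetry that correlation alone cannot reveal:

```python
import random

random.seed(0)

# Toy structural causal model: rain -> wet ground.
def sample(do_rain=None, do_wet=None):
    # rain is exogenous unless we intervene on it
    rain = (random.random() < 0.3) if do_rain is None else do_rain
    # wet ground is caused by rain (plus sprinkler noise) unless we intervene on it
    wet = (rain or random.random() < 0.1) if do_wet is None else do_wet
    return rain, wet

N = 20000
# Intervening on the cause shifts the effect...
p_wet_do_rain = sum(sample(do_rain=True)[1] for _ in range(N)) / N   # 1.0
p_wet_do_dry = sum(sample(do_rain=False)[1] for _ in range(N)) / N   # ~0.1
# ...but intervening on the effect leaves the cause untouched.
p_rain_do_wet = sum(sample(do_wet=True)[0] for _ in range(N)) / N    # ~0.3
print(p_wet_do_rain, p_wet_do_dry, p_rain_do_wet)
```

Comparing such interventional distributions across candidate models is, in miniature, what Bhattacharya means by mathematizing how humans make decisions.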

Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.
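Bengio's actual system uses a neural generator, but the core scoring idea — propose candidate graphs and keep the one most compatible with the data — can be sketched far more crudely. Here two hand-written candidate graphs over three binary variables are scored by whether the independence pattern each implies matches simulated data (all names, numbers, and the dependence threshold are our own illustration, not from Bengio's work):

```python
import random

random.seed(1)

# Ground truth: a collider X -> Z <- Y (X and Y are independent causes of Z).
N = 5000
data = [(x, y, x ^ y) for x, y in
        ((random.random() < 0.5, random.random() < 0.5) for _ in range(N))]

def dependent(pairs, thresh=0.05):
    """Crude dependence check: does P(a, b) differ from P(a) * P(b)?"""
    pa = sum(a for a, _ in pairs) / len(pairs)
    pb = sum(b for _, b in pairs) / len(pairs)
    pab = sum(a and b for a, b in pairs) / len(pairs)
    return abs(pab - pa * pb) > thresh

# Candidate 1, a chain X -> Z -> Y, implies X and Y are dependent;
# candidate 2, a collider X -> Z <- Y, implies they are independent.
# Score each candidate by whether its implication matches the data.
xy = [(x, y) for x, y, _ in data]
best_graph = "chain" if dependent(xy) else "collider"
print(best_graph)  # the collider fits the data better
```

A learned generator replaces this exhaustive hand-written comparison: it proposes graphs at random and is trained to propose more graphs like the ones that score well.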

This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.

Wednesday, April 12, 2023

Why Americans Hate Political Division but Can’t Resist Being Divisive

Will Blakely & Kurt Gray
Moral Understanding Substack
Originally posted 21 FEB 23

No one likes polarization. According to a recent poll, 93% of Americans say it is important to reduce the country's current divides, including two-thirds who say it is very important to do so. In a recent FiveThirtyEight poll, out of a list of 20 issues, polarization ranked third on a list of the most important issues facing America. Which is… puzzling.

The puzzle is this: How can we be so divided if no one wants to be? Who are the hypocrites causing division and hatred while paying lip service to compromise and tolerance?

If you ask everyday Americans, they’ve got their answer. It’s the elites. Tucker Carlson, AOC, Donald Trump, and MSNBC. While these actors certainly are polarizing, it takes two to tango. We, the people, share some of the blame too. Even us, writing this newsletter, and even you, dear reader.

But this leaves us with a tricky question: why would we contribute to a divide that we can’t stand? To answer this question, we need to understand the biases and motivations that influence how we answer the question, “Who’s at fault here?” And more importantly, we need to understand the strategies that can get us out of conflict.

The Blame Game

The Blame Game comes in two flavors: either/or. Adam or Eve, Will Smith or Chris Rock, Amber Heard or Johnny Depp. When assigning blame in bad situations, our minds are dramatic. Psychology studies show that we tend to assign 100% of the blame to the person we see as the aggressor, and 0% to the side we see as the victim. So, what happens when all the people who are against polarization assign blame for polarization? You guessed it. They give 100% of the blame to the opposing party and 0% to their own. They “morally typecast” themselves as 100% the victim of polarization and the other side as 100% the perpetrator.

We call this moral “typecasting” because people’s minds firmly cast others into roles of victim and victimizer in the same way that actors get typecasted in certain roles. In the world of politics, if you’re a Democrat, you cast Republicans as victimizers, as consistently as Hollywood directors cast Kevin Hart as comic relief and Danny Trejo as a laconic villain.

But why do we rush to this all-or-nothing approach when the world is certainly more complicated? It’s because our brains love simplicity. In the realm of blame, we want one simple cause. In his recent book, “Complicit,” Max Bazerman, professor at Harvard Business School, illustrated just how widespread this “monocausality bias” is. Bazerman gave a group of business executives the opportunity to allocate blame after reviewing a case of business fraud. 62 of the 78 business leaders wrote only one cause. Despite being given ample time and a myriad set of potential causes, these executives intuitively reached for their Ockham’s razor. In the same way, we all rush to blame a sputtering economy on the president, a loss on a kicker’s missed field goal, or polarization on the other side.

Tuesday, April 11, 2023

Justice before Expediency: Robust Intuitive Concern for Rights Protection in Criminalization Decisions

Bystranowski, P., & Hannikainen, I. R.
Review of Philosophy and Psychology (2023).


The notion that a false positive (false conviction) is worse than a false negative (false acquittal) is a deep-seated commitment in the theory of criminal law. Its most illustrious formulation, the so-called Blackstone’s ratio, affirms that “it is better that ten guilty persons escape than that one innocent suffer”. Are people’s evaluations of criminal statutes consistent with this tenet of the Western legal tradition? To answer this question, we conducted three experiments (total N = 2492) investigating how people reason about a particular class of offenses—proxy crimes—known to vary in their specificity and sensitivity in predicting actual crime. By manipulating the extent to which proxy crimes convict the innocent and acquit those guilty of a target offense, we uncovered evidence that attitudes toward proxy criminalization depend primarily on its propensity toward false positives, with false negatives exerting a substantially weaker effect. This tendency arose across multiple experimental conditions—whether we matched the rates of false positives and false negatives or their frequencies, whether information was presented visually or numerically, and whether decisions were made under time pressure or after a forced delay—and was unrelated to participants’ probability literacy or their professed views on the purpose of criminal punishment. Despite the observed inattentiveness to false negatives, when asked to justify their decisions, participants retrospectively supported their judgments by highlighting the proxy crime’s efficacy (or inefficacy) in combating crime. These results reveal a striking inconsistency: people favor criminal policies that protect the rights of the innocent, but report comparable concern for their expediency in fighting crime.

From the Discussion Section

Our results may bear on the debate between two broad camps that have dominated the theoretical landscape of criminal law. Consequentialists argue that new criminal offenses may be rightfully introduced as long as their benefits, primarily, their effectiveness in combating crime, outweigh their social costs. For example, the decision to approve a travel ban should rely on a calculus integrating both the ban’s capacity to hinder terrorist operations and intercept the terrorists themselves, as well as its detriment to well-meaning travelers. If the former exceeds the latter, there is reason to support the proxy crime—otherwise not (Teichman 2017).

In contrast, non-consequentialists advocate certain categorical constraints on the legitimate scope of criminalization—one of which is non-infringement on the rights of the innocent. From a non-consequentialist perspective, convicting the innocent violates a fundamental tenet of criminal law, and is therefore wrong even if doing so would come with enormous benefits for a law’s expediency—and, in turn, for social welfare. Specifically, negative retributivism is, roughly, the claim that the state has a categorical obligation not to punish the innocent or to punish the guilty more than they deserve, but no comparable moral obligation to punish all offenders (Bystranowski 2017; Hoskins and Duff, 2021).

Monday, April 10, 2023

Revealing the neurobiology underlying interpersonal neural synchronization with multimodal data fusion

Lotter, L. D., Kohl, S. H., et al. (2023).
Neuroscience & Biobehavioral Reviews,
146, 105042. 


Humans synchronize with one another to foster successful interactions. Here, we use a multimodal data fusion approach with the aim of elucidating the neurobiological mechanisms by which interpersonal neural synchronization (INS) occurs. Our meta-analysis of 22 functional magnetic resonance imaging and 69 near-infrared spectroscopy hyperscanning experiments (740 and 3721 subjects) revealed robust brain regional correlates of INS in the right temporoparietal junction and left ventral prefrontal cortex. Integrating this meta-analytic information with public databases, biobehavioral and brain-functional association analyses suggested that INS involves sensory-integrative hubs with functional connections to mentalizing and attention networks. On the molecular and genetic levels, we found INS to be associated with GABAergic neurotransmission and layer IV/V neuronal circuits, protracted developmental gene expression patterns, and disorders of neurodevelopment. Although limited by the indirect nature of phenotypic-molecular association analyses, our findings generate new testable hypotheses on the neurobiological basis of INS.


• When we interact, both our behavior and our neural activity synchronize.

• Neuroimaging meta-analysis and multimodal data fusion may reveal neural mechanisms.

• Robust involvement of right temporoparietal and left prefrontal brain regions.

• Associations with attention and mentalizing networks, GABAergic neurotransmission, and layer IV/V neuronal circuits.

• Brain-wide associated genes are enriched in neurodevelopmental disorders.


In recent years, synchronization of brain activities between interacting partners has been acknowledged as a central mechanism by which we foster successful social relationships as well as a potential factor involved in the pathogenesis of diverse neuropsychiatric disorders. Based on the results generated by our multimodal data fusion approach (see Fig. 5), we hypothesized that human INS is tightly linked to social attentional processing, subserved by the rTPJ as a sensory integration hub at the brain system level, and potentially facilitated by GABA-mediated E/I balance at the neurophysiological level.

Note: Interpersonal neural synchronization is a fascinating area of research. Learning how to improve synchronization may help make psychotherapy more effective.

Sunday, April 9, 2023

Clarence Thomas Has Reportedly Been Accepting Gifts From Republican Megadonor Harlan Crow For Decades—And Never Disclosed It

Alison Durkee
Originally posted 6 APR 23

Supreme Court Justice Clarence Thomas has been accepting trips from Republican megadonor Harlan Crow for more than 20 years without disclosing them as required, ProPublica reports—including trips on private jets and yachts that could run afoul of the law—the latest in a series of ethical scandals the conservative justice has faced amid calls for justices to follow an ethics code.

Key Facts
  • Thomas has repeatedly used Crow’s private jet for travel and vacationed with him, including on his superyacht and at Crow’s private resort in the Adirondacks, where guests stay for free, ProPublica reports, citing flight records, internal documents and interviews with Crow’s employees.
  • The justice has stayed at Crow’s resort “every summer for more than two decades,” according to ProPublica, and reportedly makes “regular use” of Crow’s private jet, including as recently as last year and for as short as a three-hour trip from Washington, D.C., to Connecticut in 2016.
  • While Supreme Court justices are not bound to the same code of ethics as lower federal court judges are, they do submit financial disclosures and are subject to laws that require disclosing gifts that are more than $415 in value, including any transportation that substitutes for commercial transport.
  • Experts cited by ProPublica believe Thomas may have violated federal disclosure laws by not disclosing his yacht and jet travel, and that the stays at Crow’s resort may also have required disclosure because the resort is owned by Crow’s company rather than him personally.
  • Thomas’ stays at Crow’s resort also raise ethics concerns given the other guests Crow—a real estate magnate and Republican megadonor—has invited to the resort and on his yacht at the same time, which ProPublica reports include GOP donors, executives at Verizon and PricewaterhouseCoopers, leaders from right-wing think tank American Enterprise Institute, Federalist Society leader Leonard Leo and Mark Paoletta, the general counsel for the Trump Administration’s Office of Management and Budget who now serves as Thomas’ wife’s attorney.

Saturday, April 8, 2023

Moral Appraisals Guide Intuitive Legal Determinations

Flanagan, B., de Almeida, G. F. C. F., et al (2021). 
SSRN Electronic Journal.



We sought to understand how basic competencies in moral reasoning influence the interpretation and application of private, legal, and institutional rules. 


We predicted that moral appraisals, implicating both outcome-based and mental state reasoning, would shape participants’ application of various rules and statutes—and asked whether these effects arise differentially under intuitive versus reflective reasoning conditions. 


In six vignette-based experiments (total N = 2502), participants considered a wide range of written rules and laws and were asked to decide whether a protagonist had violated the statute in question. We manipulated morally relevant aspects of each incident—including the valence of the statute’s purpose (Experiment 1) and of the outcomes that ensued (Experiments 2 and 3), as well as the protagonist’s accompanying mental state (Experiment 5). In two studies, we simultaneously varied whether participants decided under time pressure or following a forced delay (Experiments 4 and 6). 


Integrative moral appraisals of the rule’s purpose, the agent’s extraneous blameworthiness, and the agent’s epistemic state impacted legal determinations, and helped to explain participants’ departure from rules’ literal interpretation. These counter-literal verdicts were stronger under time pressure and were weakened by the opportunity to reflect. 


Under intuitive reasoning conditions, legal determinations draw heavily on core competencies in moral cognition, such as outcome-based and mental state reasoning. In turn, cognitive reflection dampens these effects on statutory interpretation, giving rise to a broadly textualist response pattern.

Public Significance Statement

When deciding whether someone has violated a written rule or law, lay judges initially consult their moral instincts about the incident. In other words, the capacity for legal reasoning draws on our basic moral sense—a finding that resonates with theories of natural law. With enough time to reflect, they then draw closer to the letter of the law. This finding could help to explain a recurring observation: that ‘frontline’ decisions made under time constraints (e.g., while policing) are later contested in court after a more careful exercise in statutory interpretation.