Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, June 14, 2024

What does my group consider moral?: How social influence shapes moral expressions

del Rosario, K., Van Bavel, J. J., & West, T.
PsyArXiv (2024, May 8).


Although morality is often characterized as a set of stable values that are deeply held, we argue that moral expressions are highly malleable and sensitive to social norms. For instance, norms can either lead people to exaggerate their expressions of morality (such as on social media) or restrain them (such as in professional settings). In this paper, we discuss why moral expressions are subject to social influence by considering two goals that govern social influence: affiliation goals (the desire to affiliate with one’s group) and accuracy goals (the desire to be accurate in ambiguous situations). Different from other domains of social influence, we argue that moral expressions often satisfy both affiliation goals (“I want to fit in with the group”) and accuracy goals (“I want to do the right thing”). As such, the fundamental question governing moral expressions is: “what does my group consider moral?” We argue that this central consideration achieves both goals underlying social influence and drives moral expressions. We outline the ways in which social influence shapes moral expressions, from unconsciously copying others’ behavior to expressing outrage to gain status within the group. Finally, we describe when the same goals can result in different behaviors, highlighting how context-specific norms can encourage (or discourage) moral expressions. We explain how this framework will be helpful in understanding how identity, norms, and social contexts shape moral expressions.


Our review examines moral expressions through the lens of social influence, illustrating the critical role of the social environment in shaping moral expressions. Moral expressions serve a social purpose, such as affiliating with a group, and are influenced by various goals, including understanding the appropriate emotional response to moral issues and conforming to others' expressions to fit in. These influences become evident in different contexts, where norms either encourage exaggerated expressions, like on social media, or restraint, such as in professional settings. For this reason, different forms of influence can have vastly different implications. As such, the fundamental social question governing moral expressions for people in moral contexts is: “What does my group consider moral?” However, much of the morality literature does not account for the role of social influence in moral expressions. Thus, a social norms framework will be helpful in understanding how social contexts shape moral expression.

Here is a summary:

The research argues that moral expressions (outward displays of emotions related to right and wrong) are highly malleable and shaped by social norms and contexts, contrary to the view that morality reflects stable convictions. It draws from research on normative influence (conforming to gain social affiliation) and informational influence (seeking accuracy in ambiguous situations) to explain how moral expressions aim to satisfy both affiliation goals ("fitting in with the group") and accuracy goals ("doing the right thing").

The key points are:
  1. Moral expressions vary across contexts because people look to their social groups to determine what is considered moral behavior.
  2. Affiliation goals (fitting in) and accuracy goals (being correct) are intertwined for moral expressions, unlike in other domains where they are distinct.
  3. Social influence shapes moral expressions in various ways, from unconscious mimicry to outrage expressions for gaining group status.
  4. Context-specific norms can encourage or discourage moral expressions by prioritizing affiliation over accuracy goals, or vice versa.
  5. The motivation to be seen as moral contributes to the malleability of moral expressions across social contexts.

Thursday, June 13, 2024

Examining Potential Psychological Protective and Risk Factors for Stress and Burnout in Social Workers

Maddock, A.
Clin Soc Work J (2024).


Social work professionals experience high levels of stress and burnout. Stress and burnout can have a negative impact on the individual social worker, the organisations they work for, and perhaps most importantly, the quality of care that marginalised groups that are supported by social workers receive. Several work-related predictors of stress and burnout have been identified; however, no studies have examined the underlying psychological protective and risk factors which might help to explain changes in social worker stress and burnout. Using the clinically modified Buddhist psychological model (CBPM) as a theoretical framework, this cross-sectional study attempted to identify psychological protective and risk factors for stress and burnout in 121 social workers in Northern Ireland, using structural equation modelling, and conditional process analyses. This study provided promising preliminary evidence for a mediated effect CBPM as being a potentially useful explanatory framework of variation in social worker stress, emotional exhaustion, and depersonalisation. This study also provided evidence that several CBPM domains could have a direct effect on personal accomplishment. This study provides preliminary evidence that support programmes, which have the capacity to improve each CBPM domain (mindfulness, acceptance, attention regulation/decentering, self-compassion, non-attachment, and non-aversion) and reduce experiences of worry and rumination, are likely to support social workers to experience reduced stress, emotional exhaustion, depersonalisation of service users, and improvements in personal accomplishment.

From the Discussion

The aims of this paper were to provide more theoretical transparency on what some of the most important protective and risk factors for social worker stress and burnout are, using the data attained from social workers in Northern Ireland. To support our analysis, the CBPM (Maddock, 2023), which is a multi-faceted stress coping, cognitive and emotional regulation theory was used. Using structural equation modelling, though the direct and mediated effects CBPM was found to be an acceptable fit to the data on perceived stress, emotional exhaustion, and depersonalisation, our results indicate that the mediated effects CBPM model was a better fit to the data on each of these outcomes. Most of the significant conditional effects found using Process, between the CBPM domains and perceived stress, emotional exhaustion, depersonalisation were also mediated by either worry or rumination and sometimes both (e.g., stress), highlighting that negative thinking styles, such as worry and rumination, are likely to be a key risk factor for the development of stress and emotional exhaustion in social workers along with the depersonalisation of service users. This supports Kazdin (2009), who asserted that individual risk or protective factors (in our case, worry and rumination respectively) can impact multiple outcomes. This highlights how interventions e.g., MBPs or CBT, that aim to reduce feelings of stress, emotional exhaustion, and depersonalisation of service users in social work, could be more parsimonious, and effective, if they focussed on supporting social workers to regulate the extent to which they engage in worry or rumination in response to feelings of stress or burnout. 
This could be achieved, particularly by MBPs, through the development of each CBPM domain (i.e., mindfulness, attention regulation/decentering, acceptance, self-compassion, non-attachment, and non-aversion), each of which has been identified as an approach-oriented coping strategy with the capacity to support social workers in regulating the extent to which they worry or ruminate (Maddock, 2023).

It is clear from this study that the effects of different potential psychological protective and risk factors for social worker stress and burnout, are likely to be complex. The limited literature available attempting to explain the patterns of relationships between mindfulness variables and mental health and well-being outcomes such as stress and burnout has usually identified either significant direct (e.g., Hölzel et al., 2011) or mediated (e.g., Gu et al., 2015) pathways, but not both at the same time. This study thus highlights the potentially complex direct and mediated interactions between mindfulness variables e.g., acceptance, attention regulation, stress, and different domains of burnout in social work. This is supported by the fact that most of the significant effects of each CBPM domain on stress, burnout-emotional exhaustion, burnout-depersonalisation, and burnout-personal accomplishment were found to be mediated by either worry or rumination. A number of CBPM domains e.g., acceptance and attention regulation/decentering also appeared to have a direct effect on stress and burnout-depersonalisation. These findings also support Kazdin (2009) who highlighted that outcomes, such as stress and depersonalisation, can be reduced through multiple pathways i.e., through both direct and mediated relationships.
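The distinction the study draws between direct and mediated (indirect) effects can be illustrated with a toy simulation. This is only a sketch under assumed numbers: the variable names (mindfulness, rumination, stress) and path coefficients are hypothetical, not the study's data or its structural equation model.

```python
import random

random.seed(7)
n = 5000

# Hypothetical mediation: mindfulness (X) reduces rumination (M), and
# rumination increases stress (Y); X also has a small direct effect on Y.
a_true, b_true, c_prime_true = -0.5, 0.6, -0.2
X = [random.gauss(0, 1) for _ in range(n)]
M = [a_true * x + random.gauss(0, 1) for x in X]
Y = [c_prime_true * x + b_true * m + random.gauss(0, 1) for x, m in zip(X, M)]

def slope(xs, ys):
    # OLS slope of ys on xs with an intercept (computed via centering).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def ols2(x1, x2, y):
    # Two-predictor OLS (with intercept) via the centered normal equations.
    m1, m2, my = sum(x1) / len(x1), sum(x2) / len(x2), sum(y) / len(y)
    c1 = [v - m1 for v in x1]
    c2 = [v - m2 for v in x2]
    cy = [v - my for v in y]
    s11 = sum(v * v for v in c1)
    s22 = sum(v * v for v in c2)
    s12 = sum(u * v for u, v in zip(c1, c2))
    s1y = sum(u * v for u, v in zip(c1, cy))
    s2y = sum(u * v for u, v in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

a_hat = slope(X, M)                  # path a: X -> M
c_hat = slope(X, Y)                  # total effect of X on Y
c_prime_hat, b_hat = ols2(X, M, Y)   # direct effect c' and path b: M -> Y
indirect = a_hat * b_hat             # mediated (indirect) effect
print(f"total {c_hat:+.3f} = direct {c_prime_hat:+.3f} + indirect {indirect:+.3f}")
```

In linear models the decomposition is exact (total = direct + indirect), which is why a model can show little direct effect of a domain like mindfulness on stress while still transmitting a substantial effect through worry or rumination.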

Wednesday, June 12, 2024

The health care workforce crisis is already here

Caitlin Owens
Originally posted 7 June 24

Demoralized doctors and nurses are leaving the field, hospitals are sounding the alarm about workforce shortages and employees are increasingly unionizing and even going on strike in high-profile disputes with their employers.

Why it matters: Dire forecasts of health care worker shortages often look to a decade or more from now, but the pandemic — and its ongoing fallout — has already ushered in a volatile era of dissatisfied workers and understaffed health care facilities.
  • Some workers and experts say understaffing is, in some cases, the result of intentional cost cutting. Regardless, patients' access to care and the quality of that care are at risk.
  • "There are 83 million Americans today who don't have access to primary care," said Jesse Ehrenfeld, president of the American Medical Association. "The problem is here. It's acute in rural parts of the country, it's acute in underserved communities."
The big picture: Complaints about understaffing, administrative burdens and inadequate wages aren't new, but they are getting much louder — and more health workers are leaving their jobs or cutting back their hours.

Here are some thoughts:

The news of the healthcare workforce crisis being "already here" is deeply concerning.  It's not just about future projections; it's about the impact on patient care, provider well-being, and the ethical obligations we all share.

Providers will likely walk an ethical tightrope, and the consequences may be negative. Imagine a doctor facing a packed waiting room, knowing some patients won't receive the time and attention they deserve. This is the reality for many providers stretched thin by staffing shortages. It creates an ethical bind: how to deliver quality care amid overwhelming pressure. Burnout, compassion fatigue, and even medical errors become more likely. These conditions are a likely starting point for moral distress and moral injury.

The crisis isn't just a burden on healthcare providers or institutions; it's a societal challenge. Policymakers, educators, and even patients themselves can play a role.

This isn't about pointing fingers; it's about recognizing a shared responsibility.  By working together, we can ensure a healthcare system that is ethical, sustainable, and provides quality care for all.

Tuesday, June 11, 2024

Morals Versus Ethics: Building An Organizational Culture Of Trust And Transparency

Pamela Furr
Originally posted 6 May 24

Here are two excerpts:

Prioritize Transparency And Integrity

Our team is a diverse mix of ages, cultures, races and backgrounds, and we all bring unique experiences and perspectives to the table. If a colleague says or does something that doesn’t sit right with you, take a moment to pause, process and then approach them. Share how you felt in the moment—this can be as simple as saying, “My feelings were hurt when you did that” or “I didn’t think the language you used earlier was appropriate.” Give them the opportunity to explain or apologize before gossiping with coworkers or silently holding onto resentments. Trust each other to have open, honest conversations, and you can often defuse conflicts before they escalate.


Build A Sense Of Community

Set the tone for open dialogue and mutual respect in your organization. By modeling these values in your interactions with others, you can inspire your team to uphold the same standards. Foster a culture in which you advocate for yourself and others and try to learn from others as well. Approach things you don’t understand with a spirit of curiosity and compassion, assuming positive intent until proven otherwise. Ask questions, and truly seek to understand someone else’s point of view.

I believe that an essential part of being a leader is ensuring that our employees feel safe, protected and heard when they come to work. We can work to hold external governing boards accountable to the standards they set, but we can also do everything in our power to create a culture of trust, transparency and accountability within our own organizations.

Here is my summary:

The article discusses the difference between morals and ethics. Morals are personal beliefs and values that guide our actions, while ethics are a set of rules established by a community or governing body.

The author describes a situation where a trainee made a false sexual harassment claim against her mentor. The certifying board refused to take any action because they saw it as an employment contract issue. The author argues that governing boards should take a stronger stance in upholding ethics within their professions.

The article concludes with the author's thoughts on creating an ethical and transparent workplace culture. The author emphasizes the importance of open communication, understanding policies and procedures, and building a sense of community. By following these principles, organizations can create a safe and supportive environment for their employees.

Monday, June 10, 2024

Attributions toward artificial agents in a modified Moral Turing Test

Aharoni, E., Fernandes, S., Brady, D.J. et al.
Sci Rep 14, 8458 (2024).


Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al. (Exp Theor Artif Intell 352:24–28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.

Here is my summary:

The researchers conducted a modified Moral Turing Test (m-MTT) to investigate if people view moral evaluations by advanced AI systems similarly to those by humans. They had participants rate the quality of moral reasoning from the AI language model GPT-4 and from humans, while initially blinded to the source.

Key Findings
  • Remarkably, participants rated GPT-4's moral reasoning as superior in quality to humans' across dimensions like virtuousness, intelligence, and trustworthiness. This is consistent with passing the "comparative MTT" proposed previously.
  • When later asked to identify if the moral evaluations came from a human or computer, participants performed above chance levels.
  • However, GPT-4 did not definitively "pass" this test, potentially because its perceived superiority made it identifiable as AI.
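The "above chance levels" finding can be made concrete with an exact binomial test: if each participant were simply guessing the source, correct identifications would follow a Binomial(n, 0.5) distribution. The counts below are hypothetical illustrations, not the study's reported figures.

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
    correct source identifications if everyone were guessing."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Hypothetical example: suppose 180 of 299 participants correctly
# identified the source. Under pure guessing we'd expect about 149.5.
p_value = binom_sf(180, 299)
print(f"P(>= 180 correct of 299 by chance) = {p_value:.5f}")
```

A result like 180/299 would yield a one-sided p-value well below 0.001, i.e., performance far above the 50% chance baseline, even though participants were still fooled often.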

Sunday, June 9, 2024

Artificial Intelligence Feedback on Physician Notes Improves Patient Care

NYU Langone Health
Originally posted 17 APR 24

Artificial intelligence (AI) feedback improved the quality of physician notes written during patient visits, with better documentation improving the ability of care teams to make diagnoses and plan for patients’ future needs, a new study finds.

Since 2021, NYU Langone Health has been using pattern-recognizing, machine-learning AI systems to grade the quality of doctors’ clinical notes. At the same time, NYU Langone created data informatics dashboards that monitor hundreds of measures of safety and the effectiveness of care. The informatics team over time trained the AI models to track in dashboards how well doctors’ notes achieved the “5 Cs”: completeness, conciseness, contingency planning, correctness, and clinical assessment.

Now, a new case study, published online April 17 in NEJM Catalyst Innovations in Care Delivery, shows how notes improved by AI, in combination with dashboard innovations and other safety initiatives, resulted in an improvement in care quality across four major medical specialties: internal medicine, pediatrics, general surgery, and the intensive care unit.

This includes improvements across the specialties of up to 45 percent in note-based clinical assessments (that is, determining diagnoses) and reasoning (making predictions when diagnoses are unknown). In addition, contingency planning to address patients’ future needs saw improvements of up to 34 percent.

Last year, NYU Langone added to this long-standing effort a newer form of AI that develops likely options for the next word in any sentence based on how billions of people used language on the internet over time. A result of this next-word prediction is that generative AI chatbots like GPT-4 can read physician notes and make suggestions. In a pilot within the case study, the research team supercharged their machine-learning AI model, which can only give physicians a grade on their notes, by integrating a chatbot that added an accurate written narrative of issues with any note.

The article is linked above.  Here is the abstract:


Electronic health records have become an integral part of modern health care, but their implementation has led to unintended consequences, such as poor note quality. This case study explores how NYU Langone Health leveraged artificial intelligence (AI) to address the challenge to improve the content and quality of medical documentation. By quickly and accurately analyzing large volumes of clinical documentation and providing feedback to organizational leadership and individually to providers, AI can help support a culture of continuous note quality improvement, allowing organizations to enhance a critical component of patient care.

Saturday, June 8, 2024

A Doctor at Cigna Said Her Bosses Pressured Her to Review Patients’ Cases Too Quickly

P. Rucker and D. Armstrong
Originally posted 29 APR 24

Here is an excerpt:

As ProPublica and The Capitol Forum reported last year, Cigna built a computer program that allowed its medical directors to deny certain claims in bulk. The insurer’s doctors spent an average of just 1.2 seconds on each of those cases. Cigna at the time said the review system was created to speed up approval of claims for certain routine screenings; the company later posted a rebuttal to the story. A congressional committee and the Department of Labor launched inquiries into this Cigna program. A spokesperson for Rep. Cathy McMorris Rodgers, the chair of the congressional committee, said Rodgers continues to monitor the situation after Cigna shared some details about its process. The Labor Department is still examining such practices.

One figure on Cigna’s January and February 2022 dashboards was like a productivity score; the news organizations found that this number reflects the pace at which a medical director clears cases.

Cigna said it was incorrect to call that figure on its dashboard a productivity score and said its “view on productivity is defined by a range of factors beyond elements included in a single spreadsheet.” In addition, the company told the news organizations, “The copy of the dashboard that you have is inaccurate and secondary calculations made using its contents may also be inaccurate.” The news organizations asked what was inaccurate, but the company wouldn’t elaborate.

Nevertheless, Cigna said that because the dashboard created “inadvertent confusion” the company was “reassessing its use.”

Here is my summary:

The article reports on Dr. Debby Day, who alleges that Cigna, her employer, pressured her to prioritize speed over thoroughness when reviewing patients' requests for healthcare coverage.

According to Day, managers emphasized meeting quotas and processing claims quickly, even if it meant superficially reviewing cases. Dr. Day said Cigna expected medical directors to review cases in as little as 4 minutes, which she felt was too rushed to properly evaluate them.  The pressure to deny claims quickly was nicknamed "click and close" by some employees.

Day felt this practice compromised patient care and refused to expedite reviews at the expense of quality. The article suggests this may have led to threats of termination from Cigna.

Friday, June 7, 2024

Large Language Models as Moral Experts? GPT-4o Outperforms Expert Ethicist in Providing Moral Guidance

Dillion, D., Mondal, D., Tandon, N.,
& Gray, K. (2024, May 29).


AI has demonstrated expertise across various fields, but its potential as a moral expert remains unclear. Recent work suggests that Large Language Models (LLMs) can reflect moral judgments with high accuracy. But as LLMs are increasingly used in complex decision-making roles, true moral expertise requires not just aligned judgments but also clear and trustworthy moral reasoning. Here, we advance work on the Moral Turing Test and find that advice from GPT-4o is rated as more moral, trustworthy, thoughtful, and correct than that of the popular The New York Times advice column, The Ethicist. GPT models outperformed both a representative sample of Americans and a renowned ethicist in providing moral explanations and advice, suggesting that LLMs have, in some respects, achieved a level of moral expertise. The present work highlights the importance of carefully programming ethical guidelines in LLMs, considering their potential to sway users' moral reasoning. More promisingly, it suggests that LLMs could complement human expertise in moral guidance and decision-making.

Here are my thoughts:

This research on GPT-4o's moral reasoning is fascinating, but caution is warranted. While exceeding human performance in explanations and perceived trustworthiness is impressive, true moral expertise goes beyond these initial results.

Here's why:

First, there are nuances to all moral dilemmas. Real-world dilemmas often lack clear-cut answers. Can GPT-4o navigate the gray areas and complexities of human experience?

Next, everyone brings rich experiences, values, perspectives, and biases. What ethical framework guides GPT-4o's decisions? Transparency in its programming is crucial.

Finally, the consequences of AI-driven moral advice can be far-reaching, so careful evaluation of potential biases and unintended outcomes is essential. There is no objective algorithm, and no objective morality: all moral decisions, no matter how well reasoned, involve trade-offs. AI is therefore best used as a starting point for moral decision-making and planning.

Thursday, June 6, 2024

The Ethics of Advanced AI Assistants

Gabriel, I., Manzini, A., et al. (2024).
Google DeepMind

This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user – across one or more domains – in line with the user’s expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next consider the relationship between advanced AI assistants and individual users in more detail, exploring topics such as manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants. Finally, we conclude by providing a range of recommendations for researchers, developers, policymakers and public stakeholders.

Our analysis suggests that advanced AI assistants are likely to have a profound impact on our individual and collective lives. To be beneficial and value-aligned, we argue that assistants must be appropriately responsive to the competing claims and needs of users, developers and society. Features such as increased agency, the capacity to interact in natural language and high degrees of personalisation could make AI assistants especially helpful to users. However, these features also make people vulnerable to inappropriate influence by the technology, so robust safeguards are needed. Moreover, when AI assistants are deployed at scale, knock-on effects that arise from interaction between them and questions about their overall impact on wider institutions and social processes rise to the fore. These dynamics likely require technical and policy interventions in order to foster beneficial cooperation and to achieve broad, inclusive and equitable outcomes. Finally, given that the current landscape of AI evaluation focuses primarily on the technical components of AI systems, it is important to invest in the holistic sociotechnical evaluations of AI assistants, including human–AI interaction, multi-agent and societal level research, to support responsible decision-making and deployment in this domain.

Here are some summary thoughts:

The development of increasingly advanced AI assistants represents a significant technological shift, moving beyond narrow AI for specific tasks to general-purpose foundation models that enable greater autonomy and scope.

These advanced AI assistants can provide novel services (like summarization, ideation, planning, and tool use), with the potential to become deeply integrated into our economic, social, and personal lives.

Ethical and Societal Implications

Profound Impact Potential: AI assistants could radically alter work, education, creativity, communication, and how we make decisions about our lives and goals.

Safety, Alignment, and Misuse: The autonomy of AI assistants presents challenges around safety, ensuring alignment with user intentions, and potential for misuse.

Human-Assistant Interactions: Issues around trust, privacy, anthropomorphism, and the moral limits of personalization need to be considered.

Social Impacts: AI assistants could affect the distribution of benefits and burdens in society, as well as how humans cooperate and coordinate.

Evaluation Challenges: New methodologies are needed to evaluate AI assistants as part of a broader sociotechnical system, beyond just model performance.

Responsible Development: Ongoing research, policy work, and public discussion are required to address the novel normative and technical challenges posed by advanced AI assistants.

Concluding Thoughts

The development of advanced AI assistants represents a transformative technological shift, and the choices we make now will shape their future path. Coordinated efforts across researchers, developers, policymakers, and the public are needed to ensure these assistants are developed responsibly and in the public interest.

Wednesday, June 5, 2024

Evangelical literary tradition and moral foundations theory

Christopher Douglas
The Journal of American Culture
Originally published 26 Feb 24

Here is an excerpt:

What can MFT tell us about the topography of evangelical ethics as displayed in its bestselling fiction of the last 20 years? In many ways, there is nothing surprising in these findings. As Haidt himself suggests, the five primary foundations discernably track onto political orientations, with conservatives balancing all five criteria but liberals prioritizing care and fairness (as equality): “it's not just members of traditional societies who draw on all five foundations; even within Western societies, we consistently find an ideological effect in which religious and cultural conservatives value and rely upon all five foundations, whereas liberals value and rely upon the harm and fairness foundations primarily” (Haidt, 2007, 1001). Or, in updated form: “Liberals have a three-foundation morality, whereas conservatives use all six” (Haidt, 2012, 214). The Shack seems to aptly confirm this insight, prioritizing care, fairness-as-justice, and egalitarianism at the expense of loyalty, authority, and purity. These values reflect the author's liberal sensibilities that were suggested when Young tweeted criticism of Donald Trump after the Access Hollywood tapes were released (Douglas, 2020, 508n3). LaHaye's conservative credentials, meanwhile, are well known—early partner to Jerry Falwell in the formation of the Moral Majority, fundraiser for the Institute for Creation Research, and so on—and the Left Behind series suggests a mix of moral foundations that does not so much find a balance among all six foundations (as Haidt discovered seems to be true of “Very Conservatives”) as express a sort of Extremely Conservative sensibility. The Shack and the Left Behind series reflect the considerable range of white evangelical politics, but also reflect the fact that white evangelicals tilt heavily conservative, forming the most important demographic of the Republican base, voting for Donald Trump by 77 and 84% in 2016 and 2020, respectively (Igielnik et al., 2021).

Here is my summary:

The article explores the moral foundations of two evangelical best-selling novels: The Shack by William Paul Young and Left Behind by Tim LaHaye and Jerry Jenkins. It uses Moral Foundations Theory (MFT) to analyze how these seemingly very different novels prioritize different moral values.

Moral Foundations Theory (MFT) identifies five core moral foundations:
  • Care/Harm: Protecting others from harm and promoting their well-being.
  • Fairness/Cheating: Ensuring that people are treated justly and receive what they deserve.
  • Loyalty/Betrayal: Standing by your group and upholding your commitments.
  • Authority/Subversion: Respecting legitimate authority figures and hierarchies.
  • Sanctity/Degradation: Upholding purity, avoiding what is disgusting or degrading, and respecting the sacred.
The Shack by William Paul Young grapples with the kidnapping, abuse, and murder of a child. It focuses on the themes of care/harm and fairness. The protagonist, Mack, wrestles with how God could allow such a tragedy to occur and how fairness can be achieved. The novel explores the idea of forgiveness and reconciliation.

Left Behind by Tim LaHaye and Jerry Jenkins is a series about the Rapture and the End Times. It emphasizes the moral foundations of loyalty/betrayal, authority/subversion, and sanctity/degradation. The series depicts a world where good and evil are clearly defined and a battle between God and the Antichrist is about to unfold. The in-group of Christians is loyal to God and resists the authority of the Antichrist. The series emphasizes the importance of following God's will and upholding Christian values.

The article argues that MFT helps explain the enduring appeal of these novels. The Shack resonates with readers who seek comfort and answers in the face of tragedy. Left Behind appeals to readers who feel like they are part of an embattled community and who believe in a clear distinction between good and evil.

Tuesday, June 4, 2024

Responding Effectively to Disruptive Patient Behaviors: Beyond Behavior Contracts

Fabi R, Johnson LSM. 
JAMA. 2024;331(10):823–824.

Here is an excerpt:

The epidemic of workplace violence has prompted the use of harsh responses that include “behavior contracts” (sometimes called “behavioral agreements”) that can undermine a hospital’s commitment to providing evidence-based, patient-centered care. There is no national repository of data on the use of behavior contracts, or on hospital policies, but in our experience as clinical ethics consultants, and through discussions with colleagues nationally, we have observed that hospitals increasingly try to manage so-called difficult patients and families through behavior contracts that impose paternalistic limits and punitive consequences on patients for a wide range of behaviors. Yet behavior contracts pose serious ethical challenges, especially when unilaterally imposed on patients whose behavior is upsetting and disrespectful but not unsafe. Moreover, the evidence supporting the efficacy of contracts is lacking.

Behavior contracts are used in a variety of health care contexts to promote patient adherence with treatment, including smoking cessation, weight loss, substance use disorder rehabilitation, and psychiatric treatment. A Cochrane systematic review found that evidence of their effectiveness at improving adherence is limited and mixed; it did not find evidence from randomized clinical trials outside of this context.1 Indeed, we could find no empirical evidence to support or challenge the effectiveness of behavior contracts as a tool for addressing the problems of undesirable patient or family behaviors, patient-staff conflicts, and workplace violence in health care. Absent such evidence, health care institutions committed to evidence-based medicine and workplace safety might hesitate before using these contracts. When viewed alongside the ethical considerations, which have been extensively explored in the bioethics literature, we argue that the lack of supportive evidence generates an ethical imperative to reconsider their use altogether. Such reconsideration should include internal audits of how and when they are used, address the lack of institutional transparency and accountability about their use, and impose consistency and ethical safeguards. Based on our own experience, and that of many colleagues, we suspect that institutions that engage in this kind of self-reflection will find worrisome disparities in their use of behavior contracts.

Quick summary:

The article discusses strategies for responding effectively to disruptive patient behaviors that go beyond behavior contracts. It emphasizes the importance of recognizing risks, de-escalating situations, and maintaining safety in health care settings. Key points include the impact of disruptive behavior on patient safety, the need for de-escalation techniques, and the importance of understanding triggers to prevent disruptive incidents. The article also highlights the role of training, policies, and protocols in successfully managing disruptive behaviors.

Monday, June 3, 2024

Morality in the Anthropocene: The Perversion of Compassion and Punishment in the Online World

Robertson, C., Shariff, A., & Van Bavel, J. J.
(2024, February 4).


Although much of human morality evolved in an environment of small group living, almost six billion people use the internet in the modern era. We argue that the technological transformation has created an entirely new ecosystem that is often mismatched with our evolved adaptations for social living. We discuss how evolved responses to moral transgressions, such as compassion for victims of transgressions and punishment of transgressors, are disrupted by two main features of the online context. First, the scale of the internet exposes us to an unnaturally large quantity of extreme moral content, causing compassion fatigue and increasing public shaming. Second, the physical and psychological distance between moral actors online can lead to ineffective collective action and virtue signaling. We discuss practical implications of these mismatches and suggest directions for future research on morality in the internet era.

Significance Statement

Morality evolved when people lived in small, close-knit groups. Evolved responses to moral conflict, like compassion for the victim and punishment for the transgressor, had adaptive benefits. However, the internet has created a new ecosystem for human sociality, changing morality in two important ways. First, the scale of the internet exposes people to unnaturally large quantities of extreme moral content. Second, people’s responses to moral transgressions are not beneficial in large, distal social groups. These mismatches can lead to compassion fatigue, ineffective collective action, public shaming, and virtue signaling.

Here is my summary:

The research discusses how the internet has transformed human morality by creating a new ecosystem that often conflicts with our evolved social adaptations. The scale and nature of online interactions lead to compassion fatigue, public shaming, and ineffective collective action. Evolved responses to moral conflict, such as compassion for victims and punishment of transgressors, are disrupted online by vast exposure to extreme moral content and by the physical and psychological distance between moral actors. The paper traces the evolutionary underpinnings of moral cognition, explaining how innate social behaviors shaped human morality, and shows how exposure to an overabundance of extreme moral content can trigger maladaptive responses such as heightened outrage and hostility. Finally, it examines how online environments distort prosocial reactions to moral transgressions, making genuine compassion, empathy, and effective third-party punishment harder to express.

Sunday, June 2, 2024

The Honest Broker versus the Epistocrat: Attenuating Distrust in Science by Disentangling Science from Politics

Senja Post & Nils Bienzeisler (2024)
Political Communication
DOI: 10.1080/10584609.2024.2317274


People’s trust in science is generally high. Yet in public policy disputes invoking scientific issues, people’s trust in science is typically polarized, aligned with their political preferences. Theorists of science and democracy have reasoned that a polarization of trust in scientific information could be mitigated by clearly disentangling scientific claims from political ones. We tested this proposition experimentally in three German public policy disputes: a) school closures versus openings during the COVID-19 pandemic, b) a ban on versus a continuation of domestic air traffic in view of climate change, and c) the shooting of wolves in residential areas or their protection. In each case study, we exposed participants to one of four versions of a news item citing a scientist reporting their research and giving policy advice. The scientist’s quotes differed with regard to the direction and style of their policy advice. As an epistocrat, the scientist blurs the distinction between scientific and political claims, purporting to “prove” a policy and thereby precluding a societal debate over values and policy priorities. As an honest broker, the scientist distinguishes between scientific and political claims, presenting a policy option while acknowledging the limitations of their disciplinary scientific perspective of a broader societal problem. We find that public policy advice in the style of an honest broker versus that of an epistocrat can attenuate political polarization of trust in scientists and scientific findings by enhancing trust primarily among the most politically challenged.

Here is a summary:

This article dives into the issue of distrust in science and proposes a solution: scientists acting as "honest brokers".

The article contrasts two approaches scientists can take when communicating scientific findings for policy purposes. An "epistocrat" scientist blurs the lines between science and politics, presenting a specific policy recommendation as though it were the only logical course of action. This approach fails to acknowledge the role of values and priorities in policy decisions and can shut down public debate.

On the other hand, an "honest broker" scientist makes a clear distinction between science and politics. They present their research findings and the policy options that stem from them, but acknowledge the limitations of science in addressing broader societal issues. This approach allows for a public discussion about values and priorities, which can help build trust in science, especially among those who might not agree with the scientist's political views.

The article suggests that by following the "honest broker" approach, scientists can help reduce the political polarization of trust in science. This means presenting the science clearly and openly, and allowing for a public conversation about how those findings should be applied.

Saturday, June 1, 2024

Political ideology and environmentalism impair logical reasoning

Keller, L., Hazelaar, F., et al. (2023).
Thinking & Reasoning, 1–30.


People are more likely to think statements are valid when they agree with them than when they do not. We conducted four studies analyzing the interference of self-reported ideologies with performance in a syllogistic reasoning task. Study 1 established the task paradigm and demonstrated that participants’ political ideology affects syllogistic reasoning for syllogisms with political content but not politically irrelevant syllogisms. The preregistered Study 2 replicated the effect and showed that incentivizing accuracy did not alleviate these differences. Study 3 revealed that syllogistic reasoning is affected by ideology in the presence and absence of such bonus payments for correctly judging the conclusions’ logical validity. In Study 4, we observed similar effects regarding a different ideological orientation: environmentalism. Again, monetary bonuses did not attenuate these effects. Taken together, the results of four studies highlight the harm of ideology regarding people’s logical reasoning.

Here is my summary:

The research explores how pre-existing ideologies, both political and environmental, can influence how people evaluate logical arguments. The findings suggest that people are more likely to judge arguments as valid if they align with their existing beliefs, regardless of the arguments' actual logical structure. This bias was observed among both liberals and conservatives, as well as among those with strong environmental convictions, and offering financial rewards for accurate reasoning did not eliminate the effect.