Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Morality.

Saturday, March 2, 2024

Unraveling the Mindset of Victimhood

Scott Barry Kaufman
Scientific American
Originally posted 29 June 2020

Here is an excerpt:

Constantly seeking recognition of one’s victimhood. Those who score high on this dimension have a perpetual need to have their suffering acknowledged. In general, this is a normal psychological response to trauma. Experiencing trauma tends to “shatter our assumptions” about the world as a just and moral place. Recognition of one’s victimhood is a normal response to trauma and can help reestablish a person’s confidence in their perception of the world as a fair and just place to live.

Also, it is normal for victims to want the perpetrators to take responsibility for their wrongdoing and to express feelings of guilt. Studies conducted on testimonies of patients and therapists have found that validation of the trauma is important for therapeutic recovery from trauma and victimization (see here and here).

A sense of moral elitism. Those who score high on this dimension perceive themselves as having an immaculate morality and view everyone else as being immoral. Moral elitism can be used to control others by accusing others of being immoral, unfair or selfish, while seeing oneself as supremely moral and ethical.

Moral elitism often develops as a defense mechanism against deeply painful emotions and as a way to maintain a positive self-image. As a result, those under distress tend to deny their own aggressiveness and destructive impulses and project them onto others. The “other” is perceived as threatening whereas the self is perceived as persecuted, vulnerable and morally superior.


Here is a summary:

Kaufman explores the concept of "interpersonal victimhood," a tendency to view oneself as the repeated target of unfair treatment by others. He identifies several key characteristics of this mindset, including:
  • Belief in inherent unfairness: The conviction that the world is fundamentally unjust and that one is disproportionately likely to experience harm.
  • Moral self-righteousness: The perception of oneself as more ethical and deserving of good treatment compared to others.
  • Rumination on past injustices: Dwelling on and replaying negative experiences, often with feelings of anger and resentment.
  • Difficulty taking responsibility: Attributing negative outcomes to external factors rather than acknowledging one's own role.
Kaufman argues that while acknowledging genuine injustices is important, clinging to a victimhood identity can be detrimental. It can hinder personal growth, strain relationships, and fuel negativity. He emphasizes the importance of developing a more balanced perspective, acknowledging both external challenges and personal agency. The article offers strategies for fostering resilience.

Friday, February 23, 2024

How Did Polyamory Become So Popular?

Jennifer Wilson
The New Yorker
Originally posted 25 Dec 23

Here is an excerpt:

What are all these open couples, throuples, and polycules suddenly doing in the culture, besides one another? To some extent, art is catching up with life. Fifty-one per cent of adults younger than thirty told Pew Research, in 2023, that open marriage was “acceptable,” and twenty per cent of all Americans report experimenting with some form of non-monogamy. The extramarital “entanglements” of Will and Jada Pinkett Smith have been tabloid fodder for the past two years. (Pinkett Smith once clarified that their marriage is not “open”; rather, it is a “relationship of transparency.”) In 2020, the reality show “House Hunters,” on HGTV, saw a throuple trying to find their dream home—one with a triple-sink vanity. The same year, the city of Somerville, Massachusetts, allowed domestic partnerships to be made up of “two or more” people.

Some, like the sex therapist (and author of “Open Monogamy, A Guide to Co-Creating Your Ideal Relationship Agreement,” 2021), Tammy Nelson, have attributed the acceptance of a greater number of partners to pandemic-born domestic ennui; after being stuck with one person all day every day, the thinking goes, couples are ready to open up more than their pods. Nelson is part of a cohort of therapists, counsellors, and advice writers, including Esther Perel and the “Savage Love” columnist Dan Savage, who are encouraging married couples to think more flexibly about monogamy. Their advice has found an eager audience among the well-heeled attendees of the “ideas festival” circuit, featured in talks at Google, SXSW, and the Aspen Institute.

The new monogamy skepticism of the moneyed gets some screen time in the pandemic-era breakout hit “The White Lotus.” The show mocks the leisure class as they mope around five-star resorts in Hawaii and Sicily, stewing over love, money, and the impossibility, for people in their tax bracket, of separating the two. In the latest season, Ethan (Will Sharpe) and Harper (Aubrey Plaza) are an attractive young couple stuck in a sexless marriage—until, that is, they go on vacation with the monogamish Cameron (Theo James) and Daphne (Meghann Fahy). After Cameron and Harper have some unaccounted-for time together in a hotel room, Ethan tracks down an unbothered Daphne, lounging on the beach, to share his suspicion that something has happened between their spouses. Some momentary concern on Daphne’s face quickly morphs—in a devastatingly subtle performance by Fahy—into a sly smile. “A little mystery? It’s kinda sexy,” she assures Ethan, before luring him into a seaside cove. That night Ethan and Harper have sex, the wounds of their marriage having been healed by a little something on the side.


Here is my summary:

The article discusses the increasing portrayal and acceptance of non-monogamous relationships in contemporary culture, particularly in literature, cinema, and television. It notes that open relationships, throuples, and polyamorous arrangements are gaining prominence, reflecting changing societal attitudes. The author cites statistics and cultural examples, including a Gucci perfume ad and a plot twist in the TV series "Riverdale." The rise of non-monogamy is linked to a broader shift in societal norms, with some attributing it to pandemic-related ennui and a desire for more flexibility in relationships. The text also delves into the historical roots of polyamory, mentioning the Kerista movement and its adaptation to conservative times in the 1980s. The author concludes by expressing a desire for a more inclusive and equitable representation of polyamory, critiquing the limited perspective presented in a specific memoir discussed in the text.

Saturday, February 17, 2024

What Stops People From Standing Up for What’s Right?

Julie Sasse
Greater Good
Originally published 17 Jan 24

Here is an excerpt:

How can we foster moral courage?

Every person can try to become more morally courageous. However, it does not have to be a solitary effort. Instead, institutions such as schools, companies, or social media platforms play a significant role. So, what are concrete recommendations to foster moral courage?
  • Establish and strengthen social and moral norms: With a solid understanding of what we consider right and wrong, it becomes easier to detect wrongdoings. Institutions can facilitate this process by identifying and modeling fundamental values. For example, norms and values expressed by teachers can be important points of reference for children and young adults.
  • Overcome uncertainty: If it is unclear whether someone’s behavior is wrong, witnesses should feel comfortable to inquire, for example, by asking other bystanders how they judge the situation or a potential victim whether they are all right.
  • Contextualize anger: In the face of wrongdoings, anger should not be suppressed since it can provide motivational fuel for intervention. Conversely, if someone expresses anger, it should not be diminished as irrational but considered a response to something unjust. 
  • Provide and advertise reporting systems: By providing reporting systems, institutions relieve witnesses from the burden of selecting and evaluating individual means of intervention and reduce the need for direct confrontation.
  • Show social support: If witnesses directly confront a perpetrator, others should be motivated to support them to reduce risks.
We see that there are several ways to make moral courage less difficult, but they do require effort from individuals and institutions. Why is that effort worth it? Because if more individuals are willing and able to show moral courage, more wrongdoings would be addressed and rectified—and that could help us to become a more responsible and just society.


Main points:
  • Moral courage is the willingness to stand up for what's right despite potential risks.
  • It's rare because of various factors, such as the complexity of the internal process, situational barriers, and the difficulty of seeing the long-term benefits.
  • Key stages involve noticing a wrongdoing, interpreting it as wrong, feeling responsible, believing in your ability to intervene, and accepting potential risks.
  • Personality traits and situational factors influence these stages.

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Unknown date of post

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?

The information is here, along with a book promotion.

Sunday, January 14, 2024

Google is Free: Moral Evaluations of Intergroup Curiosity

Mosley, A. J., & Solomon, L. H. (2023).
Personality and Social Psychology Bulletin. Advance online publication.

Abstract

Two experiments investigated how evaluations of intergroup curiosity differed depending on whether people placed responsibility for their learning on themselves or on outgroup members. In Study 1, participants (n = 340; 51% White-American, 49% Black-American) evaluated White actors who were curious about Black culture and placed responsibility on outgroup members to teach versus on themselves to learn. Both Black and White participants rated the latter actors as more moral, and perceptions of effort mediated this effect. A follow-up preregistered study (n = 513; 75% White-American) asked whether perceptions of greater effort cause greater perceptions of moral goodness. Replicating Study 1, participants rated actors as more moral when they placed responsibility on themselves versus others. Participants also rated actors as more moral when they exerted high versus low effort. These results clarify when and why participants view curiosity as morally good and help to strengthen bridges between work on curiosity, moral cognition, and intergroup relations.


Here is my summary:

The researchers found that people evaluate intergroup curiosity more favorably when they perceive that the curious individual is placing responsibility on themselves to learn rather than on the outgroup to teach. The researchers also found that perceptions of effort mediate this effect, such that people view curious individuals who exert greater effort as more moral. These findings suggest that people view intergroup curiosity as more morally good when they perceive that the curious individual is taking responsibility for their own learning and is putting in the effort to understand the outgroup.
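
To make the mediation logic in Study 1 concrete (responsibility framing → perceived effort → moral evaluation), here is a minimal regression-based sketch on simulated data. It is purely illustrative: the variable names, effect sizes, and data-generating process are my assumptions, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 340                                   # sample size borrowed from Study 1

# Hypothetical predictor: 1 = actor places responsibility on themselves to learn
self_responsibility = rng.integers(0, 2, n)

# Simulated data-generating process (effect sizes are arbitrary assumptions)
perceived_effort = 0.8 * self_responsibility + rng.normal(0, 1, n)      # path a
moral_rating = 0.6 * perceived_effort + 0.2 * self_responsibility \
               + rng.normal(0, 1, n)                                    # paths b and c'

def ols(y, *predictors):
    """Return OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: responsibility framing -> perceived effort
a = ols(perceived_effort, self_responsibility)[1]
# Paths b and c': effort -> moral rating, controlling for framing
_, b, c_prime = ols(moral_rating, perceived_effort, self_responsibility)
# Total effect c: framing -> moral rating
c = ols(moral_rating, self_responsibility)[1]

print(f"total effect c      = {c:.3f}")
print(f"direct effect c'    = {c_prime:.3f}")
print(f"indirect effect a*b = {a * b:.3f}   # the mediated portion")
```

The indirect effect a*b is the portion of the framing effect on moral ratings that flows through perceived effort, which is the quantity the mediation claim is about.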

Thursday, January 11, 2024

The paucity of morality in everyday talk

Atari, M., Mehl, M.R., Graham, J. et al. 
Sci Rep 13, 5967 (2023).

Abstract

Given its centrality in scholarly and popular discourse, morality should be expected to figure prominently in everyday talk. We test this expectation by examining the frequency of moral content in three contexts, using three methods: (a) Participants’ subjective frequency estimates (N = 581); (b) Human content analysis of unobtrusively recorded in-person interactions (N = 542 participants; n = 50,961 observations); and (c) Computational content analysis of Facebook posts (N = 3822 participants; n = 111,886 observations). In their self-reports, participants estimated that 21.5% of their interactions touched on morality (Study 1), but objectively, only 4.7% of recorded conversational samples (Study 2) and 2.2% of Facebook posts (Study 3) contained moral content. Collectively, these findings suggest that morality may be far less prominent in everyday life than scholarly and popular discourse, and laypeople, presume.

Summary

Overall, the findings of this research suggest that morality is far less prevalent in everyday talk than previously assumed. While participants overestimated the frequency of moral content in their self-reports, objective measures revealed that moral topics are relatively rare in everyday conversations and online interactions.

The study's authors propose several explanations for this discrepancy between subjective and objective findings. One possibility is that people tend to remember instances of moral talk more vividly than other types of conversation. Additionally, people may be more likely to report that they engage in moral talk when they are explicitly asked about it, as this may make them more aware of their own moral values.

Regardless of the underlying reasons, the findings of this research suggest that morality is not as prominent in everyday life as is often assumed. This may have implications for how we understand and promote moral behavior in society.
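
Study 3's computational content analysis comes down to checking each post for moral vocabulary and computing the share of posts that contain any. Below is a minimal dictionary-matching sketch; the word list and example posts are placeholders I made up, not the moral-language dictionary the authors used.

```python
import re

# Tiny placeholder lexicon; the published study used a much larger moral-language dictionary.
MORAL_TERMS = {"wrong", "unfair", "honest", "evil", "duty", "justice", "harm", "virtue"}

posts = [
    "Had the best coffee this morning.",
    "It's just wrong how they treated the new hire.",
    "Weekend plans: hiking and a movie.",
    "We have a duty to speak up when something is unfair.",
]

def has_moral_content(text: str) -> bool:
    """Flag a text if it contains at least one word from the moral lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(tok in MORAL_TERMS for tok in tokens)

flagged = sum(has_moral_content(p) for p in posts)
print(f"{flagged}/{len(posts)} posts contain moral content "
      f"({100 * flagged / len(posts):.1f}%)")   # compare to the 2.2% reported for Facebook posts
```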

Saturday, December 16, 2023

Older people are perceived as more moral than younger people: data from seven culturally diverse countries

Sorokowski, P., et al. (2023).
Ethics & Behavior.
DOI: 10.1080/10508422.2023.2248327

Abstract

Given the adage “older and wiser,” it seems justified to assume that older people may be stereotyped as more moral than younger people. We aimed to study whether assessments of a person’s morality differ depending on their age. We asked 661 individuals from seven societies (Australians, Britons, Burusho of Pakistan, Canadians, Dani of Papua, New Zealanders, and Poles) whether younger (~20-year-old), middle-aged (~40-year-old), or older (~60-year-old) people were more likely to behave morally and have a sense of right and wrong. We observed that older people were perceived as more moral than younger people. The effect was particularly salient when comparing 20-year-olds to either 40- or 60-year-olds and was culturally universal, as we found it in both WEIRD (i.e. Western, Educated, Industrialized, Rich, Democratic) and non-WEIRD societies.


Here is my summary:

The researchers found that older people were rated as more moral than younger people, and this effect was particularly strong when comparing 20-year-olds to either 40- or 60-year-olds. The effect was also consistent across cultures, suggesting that it is a universal phenomenon.

The researchers suggest that there are a few possible explanations for this finding. One possibility is that older people are simply seen as having more life experience and wisdom, which are both associated with morality. Another possibility is that older people are more likely to conform to social norms, which are often seen as being moral. Finally, it is also possible that people simply have a positive bias towards older people, which leads them to perceive them as being more moral.

Whatever the explanation, the finding that older people are perceived as more moral than younger people has a number of implications. For example, it suggests that older people may be more likely to be trusted and respected, and they may also be more likely to be seen as leaders. Additionally, it points to a form of age-based prejudice against younger people, since it involves assuming that people are less moral simply because of their age.

Wednesday, December 6, 2023

People are increasingly following their heart and not the Bible - poll

Ryan Foley
Christian Today
Originally published 2 Dec 23

A new study reveals that less than one-third of Americans believe the Bible should serve as the foundation for determining right and wrong, even as most people express support for traditional moral values.

The fourth installment of the America's Values Study, released by the Cultural Research Center at Arizona Christian University Tuesday, asked respondents for their thoughts on traditional moral values and what they would like to see as "America's foundation for determining right and wrong." The survey is based on responses from 2,275 U.S. adults collected in July 2022.

Overall, when asked to identify what they viewed as the primary determinant of right and wrong in the U.S., a plurality of participants (42%) said: "what you feel in your heart." An additional 29% cited majority rule as their desired method for determining right and wrong, while just 29% expressed a belief that the principles laid out in the Bible should determine the understanding of right and wrong in the U.S. That figure rose to 66% among Spiritually Active, Governance Engaged Conservative Christians.

The only other demographic subgroups where at least a plurality of respondents indicated a desire for the Bible to serve as the determinant of right and wrong in the U.S. were respondents who attend an evangelical church (62%), self-described Republicans (57%), theologically defined born-again Christians (54%), self-identified conservatives (49%), those who are at least 50 years of age (39%), members of all Protestant congregations (39%), self-identified Christians (38%) and those who attend mainline Protestant churches (36%).

By contrast, an outright majority of respondents who do not identify with a particular faith at all (53%), along with half of LGBT respondents (50%), self-described moderates (47%), political independents (47%), Democrats (46%), self-described liberals (46%) and Catholic Church attendees (46%) maintained that "what you feel in your heart" should form the foundation of what Americans view as right and wrong.

Sunday, November 26, 2023

How robots can learn to follow a moral code

Neil Savage
Nature.com
Originally posted 26 Oct 23

Here is an excerpt:

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple.

One potential approach for dealing with differing opinions on moral issues is what Levine calls a moral parliament. “This problem of who gets to decide is not just a problem for AI. It’s a problem for governance of a society,” she says. “We’re looking to ideas from governance to help us think through these AI problems.” Similar to a political assembly or parliament, she suggests representing multiple different views in an AI system. “We can have algorithmic representations of different moral positions,” she says. The system would then attempt to calculate what the likely consensus would be on a given issue, based on a concept from game theory called cooperative bargaining.


Here is my summary:

Autonomous robots will need to be able to make ethical decisions in order to safely and effectively interact with humans and the world around them.

The article proposes a number of ways that robots can be taught to follow a moral code. One approach is to use supervised learning, in which robots are trained on a dataset of moral dilemmas and their corresponding solutions. Another approach is to use reinforcement learning, in which robots are rewarded for making ethical decisions and punished for making unethical decisions.

The article also discusses the challenges of teaching robots to follow a moral code. One challenge is that moral codes are often complex and nuanced, and it can be difficult to define them in a way that can be understood by a robot. Another challenge is that moral codes can vary across cultures, and it is important to develop robots that can adapt to different moral frameworks.

The article concludes by arguing that teaching robots to follow a moral code is an important ethical challenge that we need to address as we develop more sophisticated artificial intelligence systems.
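
The "moral parliament" idea from the excerpt, in which algorithmic representations of different moral positions are combined via cooperative bargaining, can be sketched as a Nash bargaining choice over a discrete set of actions. Everything below (the positions, the candidate actions, the utility numbers) is invented for illustration; the article does not specify an implementation.

```python
from math import prod

# Hypothetical utilities each moral position assigns to each candidate action (0..1 scale).
utilities = {
    "utilitarian":   {"disclose": 0.9, "stay_silent": 0.3, "delay": 0.6},
    "deontologist":  {"disclose": 0.8, "stay_silent": 0.1, "delay": 0.4},
    "care_ethicist": {"disclose": 0.5, "stay_silent": 0.4, "delay": 0.7},
}

# Disagreement point: the utility each position gets if no consensus is reached.
disagreement = {"utilitarian": 0.2, "deontologist": 0.2, "care_ethicist": 0.2}

def nash_bargain(utilities, disagreement):
    """Pick the action maximizing the product of gains over each position's disagreement utility."""
    actions = next(iter(utilities.values())).keys()
    def nash_product(action):
        gains = [max(utilities[p][action] - disagreement[p], 0.0) for p in utilities]
        return prod(gains)
    return max(actions, key=nash_product)

print(nash_bargain(utilities, disagreement))   # -> 'disclose' with these made-up numbers
```

The Nash product rewards actions that leave every moral position better off than the no-agreement outcome, which is one simple way to formalize a "likely consensus."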

Thursday, November 9, 2023

Moral Future-Thinking: Does the Moral Circle Stand the Test of Time?

Law, K. F., Syropoulos, S., et al. (2023, August 10). 
PsyArXiv

Abstract

The long-term collective welfare of humanity may lie in the hands of those who are presently living. But do people normatively include future generations in their moral circles? Across four studies conducted on Prolific Academic (N Total=823), we find evidence for a progressive decline in the subjective moral standing of future generations, demonstrating decreasing perceived moral obligation, moral concern, and prosocial intentions towards other people with increasing temporal distance. While participants generally tend to display present-oriented moral preferences, we also reveal individual differences that mitigate this tendency and predict pro-future outcomes, including individual variation in longtermism beliefs and the vividness of one’s imagination. Our studies reconcile conflicting evidence in the extant literature on moral judgment and future-thinking, shed light on the role of temporal distance in moral circle expansion, and offer practical implications for better valuing and safeguarding the shared future of humanity.

Here's my summary:

This research investigates whether people normatively include future generations in their moral circles. The authors conducted four studies with a total of 823 participants, and found evidence for a progressive decline in the subjective moral standing of future generations with increasing temporal distance. This suggests that people generally tend to display present-oriented moral preferences.

However, the authors also found individual differences that mitigate this tendency and predict pro-future outcomes. These factors include individual variation in longtermism beliefs and the vividness of one's imagination. The authors also found that people are more likely to include future generations in their moral circles when they are primed to think about them or when they are asked to consider the long-term consequences of their actions.

The authors' findings reconcile conflicting evidence in the extant literature on moral judgment and future-thinking. They also shed light on the role of temporal distance in moral circle expansion and offer practical implications for better valuing and safeguarding the shared future of humanity.

Overall, the research paper provides evidence that people generally tend to prioritize the present over the future when making moral judgments. However, the authors also identify individual factors and contextual conditions that can promote moral future-thinking. These findings could be used to develop interventions that encourage people to consider the long-term consequences of their actions and to take steps to protect the well-being of future generations.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more, as yet unasked, questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Friday, October 20, 2023

Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs

Huber, C., Dreber, A., et al. (2023).
Proceedings of the National Academy of Sciences, 120(23).

Abstract

Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity—variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity—estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs—indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.

Significance

Using experiments involves leeway in choosing one out of many possible experimental designs. This choice constitutes a source of uncertainty in estimating the underlying effect size which is not incorporated into common research practices. This study presents the results of a crowd-sourced project in which 45 independent teams implemented research designs to address the same research question: Does competition affect moral behavior? We find a small adverse effect of competition on moral behavior in a meta-analysis involving 18,123 experimental participants. Importantly, however, the variation in effect size estimates across the 45 designs is substantially larger than the variation expected due to sampling errors. This “design heterogeneity” highlights that the generalizability and informativeness of individual experimental designs are limited.

Here are some of the key takeaways from the research:
  • Competition has a small but statistically reliable adverse effect on moral behavior across the pooled data.
  • One plausible explanation is that competition makes people more self-interested and less concerned about the well-being of others.
  • Because effect sizes varied substantially across the 45 designs, conclusions drawn from any single experimental design generalize poorly; the meta-analytic sketch below illustrates how such design heterogeneity is estimated.
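
In a random-effects meta-analysis, the between-design variance (tau squared) is estimated on top of each design's sampling variance, and its square root can then be compared with the average standard error, which is how a figure like "1.6 times the average standard error" arises. Here is a minimal DerSimonian-Laird sketch on made-up effect sizes; the numbers are not from the paper.

```python
import numpy as np

# Hypothetical effect-size estimates and standard errors for a handful of designs
# (the actual study pooled 45 designs and 18,123 participants).
effects = np.array([-0.10, -0.02, -0.20, 0.05, -0.08, -0.15])
se = np.array([0.05, 0.06, 0.04, 0.07, 0.05, 0.06])

# Fixed-effect weights and Cochran's Q
w = 1.0 / se**2
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)
k = len(effects)

# DerSimonian-Laird estimate of between-design variance tau^2
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max((Q - (k - 1)) / c, 0.0)

# Random-effects pooled estimate
w_re = 1.0 / (se**2 + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)

tau = np.sqrt(tau2)
print(f"pooled effect (random effects): {pooled:.3f}")
print(f"between-design SD tau:          {tau:.3f}")
print(f"tau / mean SE:                  {tau / se.mean():.2f}")  # analogous to the paper's ~1.6 ratio
```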

Thursday, October 5, 2023

Morality beyond the WEIRD: How the nomological network of morality varies across cultures

Atari, M., Haidt, J., et al. (2023).
Journal of Personality and Social Psychology.
Advance online publication.

Abstract

Moral foundations theory has been a generative framework in moral psychology in the last 2 decades. Here, we revisit the theory and develop a new measurement tool, the Moral Foundations Questionnaire–2 (MFQ-2), based on data from 25 populations. We demonstrate empirically that equality and proportionality are distinct moral foundations while retaining the other four existing foundations of care, loyalty, authority, and purity. Three studies were conducted to develop the MFQ-2 and to examine how the nomological network of moral foundations varies across 25 populations. Study 1 (N = 3,360, five populations) specified a refined top-down approach for measurement of moral foundations. Study 2 (N = 3,902, 19 populations) used a variety of methods (e.g., factor analysis, exploratory structural equations model, network psychometrics, alignment measurement equivalence) to provide evidence that the MFQ-2 fares well in terms of reliability and validity across cultural contexts. We also examined population-level, religious, ideological, and gender differences using the new measure. Study 3 (N = 1,410, three populations) provided evidence for convergent validity of the MFQ-2 scores, expanded the nomological network of the six moral foundations, and demonstrated the improved predictive power of the measure compared with the original MFQ. Importantly, our results showed how the nomological network of moral foundations varied across cultural contexts: consistent with a pluralistic view of morality, different foundations were influential in the network of moral foundations depending on cultural context. These studies sharpen the theoretical and methodological resolution of moral foundations theory and provide the field of moral psychology a more accurate instrument for investigating the many ways that moral conflicts and divisions are shaping the modern world.


Here's my summary:

The article examines how moral foundations theory (MFT) applies to cultures outside of the Western, Educated, Industrialized, Rich, and Democratic (WEIRD) world. The revised model measured by the MFQ-2 includes six foundations: care, equality, proportionality, loyalty, authority, and purity. However, previous research has shown that the relative importance of these foundations can vary across cultures.

The authors of the article conducted three studies to examine the nomological network of morality (i.e., the relationships between different moral foundations) in 25 populations. They found that the nomological network of morality varied significantly across cultures. For example, in some cultures, the foundation of care was more strongly related to the foundation of fairness, while in other cultures, the foundation of loyalty was more strongly related to the foundation of authority.

The authors argue that these findings suggest that MFT needs to be revised to take into account cultural variation. They propose that the nomological network of morality is shaped by a combination of universal moral principles and local cultural norms. This means that there is no single "correct" way to think about morality, and that what is considered moral in one culture may not be considered moral in another.

The article's findings have important implications for our understanding of morality and for cross-cultural research. They suggest that we need to be careful about making assumptions about the moral beliefs of people from other cultures. We also need to be aware of the ways in which culture can influence our own moral judgments.
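
The measurement work behind the MFQ-2 rests on fitting latent-factor models to questionnaire items across populations. Here is a toy factor-analysis sketch on simulated item responses with two latent foundations; it uses scikit-learn's FactorAnalysis for brevity and is not the rotation, network-psychometric, and invariance-testing pipeline the authors report.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500

# Simulate two latent foundations (say, "care" and "loyalty") and six items loading on them.
care = rng.normal(size=n)
loyalty = rng.normal(size=n)
loadings = np.array([
    [0.8, 0.0], [0.7, 0.1], [0.9, 0.0],   # items 1-3 load on care
    [0.0, 0.8], [0.1, 0.7], [0.0, 0.9],   # items 4-6 load on loyalty
])
latent = np.column_stack([care, loyalty])
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))

# Fit a two-factor model; the estimated loadings should recover the structure (up to sign/rotation).
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
print(np.round(fa.components_.T, 2))
```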

Saturday, September 23, 2023

Moral injury in post-9/11 combat-experienced military veterans: A qualitative thematic analysis.

Kalmbach, K. C., Basinger, E. D., et al. (2023).
Psychological Services. Advance online publication.

Abstract

War zone exposure is associated with enduring negative mental health effects and poorer responses to treatment, in part because this type of trauma can entail crises of conscience or moral injury. Although a great deal of attention has been paid to posttraumatic stress disorder and fear-based physiological aspects of trauma and suffering, comparatively less attention has been given to the morally injurious dimension of trauma. Robust themes of moral injury were identified in interviews with 26 post-9/11 military veterans. Thematic analysis identified 12 themes that were subsumed under four categories reflecting changes, shifts, or ruptures in worldview, meaning making, identity, and relationships. Moral injury is a unique and challenging clinical construct with impacts on the individual as well as at every level of the social ecological system. Recommendations are offered for addressing moral injury in a military population; implications for community public health are noted.

Impact Statement

Military veterans who experienced moral injury—events that violate deeply held moral convictions or beliefs—reported fundamental changes following the morally injurious event (MIE). The MIE ruptured their worldview, or sense of right and wrong, and they struggled to reconcile a prior belief system or identity with their existence post-MIE. Absent a specific evidence-based intervention, clinicians are encouraged to consider adaptations to existing treatment models but to be aware that moral injury often does not respond to treatment as usual for PTSD or adjacent comorbid conditions.

The article is paywalled, with the link noted above.

My addition:

The thematic analysis identified 12 themes related to moral injury, which were grouped into four categories:
  • Changes in worldview: Veterans who experienced moral injury often reported changes in their worldview, such as questioning their beliefs about the world, their place in it, and their own goodness.
  • Changes in meaning making: Veterans who experienced moral injury often struggled to make meaning of their experiences, which could lead to feelings of emptiness, despair, and hopelessness.
  • Changes in identity: Veterans who experienced moral injury often reported changes in their identity, such as feeling like they were no longer the same person they were before the war.
  • Changes in relationships: Veterans who experienced moral injury often reported changes in their relationships with family, friends, and others. They may have felt isolated, misunderstood, or ashamed of their experiences.

Friday, September 22, 2023

Police are Getting DNA Data from People who Think They Opted Out

Jordan Smith
The Intercept
Originally posted 18 Aug 23

Here is an excerpt:

The communications are a disturbing example of how genetic genealogists and their law enforcement partners, in their zeal to close criminal cases, skirt privacy rules put in place by DNA database companies to protect their customers. How common these practices are remains unknown, in part because police and prosecutors have fought to keep details of genetic investigations from being turned over to criminal defendants. As commercial DNA databases grow, and the use of forensic genetic genealogy as a crime-fighting tool expands, experts say the genetic privacy of millions of Americans is in jeopardy.

Moore did not respond to The Intercept’s requests for comment.

To Tiffany Roy, a DNA expert and lawyer, the fact that genetic genealogists have accessed private profiles — while simultaneously preaching about ethics — is troubling. “If we can’t trust these practitioners, we certainly cannot trust law enforcement,” she said. “These investigations have serious consequences; they involve people who have never been suspected of a crime.” At the very least, law enforcement actors should have a warrant to conduct a genetic genealogy search, she said. “Anything less is a serious violation of privacy.”

(cut)

Exploitation of the GEDmatch loophole isn’t the only example of genetic genealogists and their law enforcement partners playing fast and loose with the rules.

Law enforcement officers have used genetic genealogy to solve crimes that aren’t eligible for genetic investigation per company terms of service and Justice Department guidelines, which say the practice should be reserved for violent crimes like rape and murder only when all other “reasonable” avenues of investigation have failed. In May, CNN reported on a U.S. marshal who used genetic genealogy to solve a decades-old prison break in Nebraska. There is no prison break exception to the eligibility rules, Larkin noted in a post on her website. “This case should never have used forensic genetic genealogy in the first place.”

A month later, Larkin wrote about another violation, this time in a California case. The FBI and the Riverside County Regional Cold Case Homicide Team had identified the victim of a 1996 homicide using the MyHeritage database — an explicit violation of the company’s terms of service, which make clear that using the database for law enforcement purposes is “strictly prohibited” absent a court order.

“The case presents an example of ‘noble cause bias,’” Larkin wrote, “in which the investigators seem to feel that their objective is so worthy that they can break the rules in place to protect others.”


My take:

Forensic genetic genealogists have been skirting GEDmatch privacy rules by searching users who explicitly opted out of sharing DNA with law enforcement. This means that police can access the DNA of people who thought they were protecting their privacy by opting out of law enforcement searches.

The practice of forensic genetic genealogy has been used to solve a number of cold cases, but it has also raised concerns about privacy and civil liberties. Some people worry that the police could use DNA data to target innocent people or to build a genetic database of the entire population.

GEDmatch has since changed its privacy policy to make it more difficult for police to access DNA data from users who have opted out. However, the damage may already be done. Police have already used GEDmatch data to solve dozens of cases, and it is unclear how many people have had their DNA data accessed without their knowledge or consent.

Sunday, September 17, 2023

The Plunging Number of Primary Care Physicians Reaches a Tipping Point.

Elisabeth Rosenthal
KFF Health News
Originally posted 8 September 23

Here are two excerpts:

The percentage of U.S. doctors in adult primary care has been declining for years and is now about 25% — a tipping point beyond which many Americans won’t be able to find a family doctor at all.

Already, more than 100 million Americans don’t have usual access to primary care, a number that has nearly doubled since 2014. One reason our coronavirus vaccination rates were low compared with those in countries such as China, France, and Japan could be because so many of us no longer regularly see a familiar doctor we trust.

Another telling statistic: In 1980, 62% of doctor’s visits for adults 65 and older were for primary care and 38% were for specialists, according to Michael L. Barnett, a health systems researcher and primary care doctor in the Harvard Medical School system. By 2013, that ratio had exactly flipped and has likely “only gotten worse,” he said, noting sadly: “We have a specialty-driven system. Primary care is seen as a thankless, undesirable backwater.” That’s “tragic,” in his words — studies show that a strong foundation of primary care yields better health outcomes overall, greater equity in health care access, and lower per capita health costs.

One explanation for the disappearing primary care doctor is financial. The payment structure in the U.S. health system has long rewarded surgeries and procedures while shortchanging the diagnostic, prescriptive, and preventive work that is the province of primary care. Furthermore, the traditionally independent doctors in this field have little power to negotiate sustainable payments with the mammoth insurers in the U.S. market.

Faced with this situation, many independent primary care doctors have sold their practices to health systems or commercial management chains (some private equity-owned) so that, today, three-quarters of doctors are now employees of those outfits.

(cut)

Some relatively simple solutions are available, if we care enough about supporting this foundational part of a good medical system. Hospitals and commercial groups could invest some of the money they earn by replacing hips and knees to support primary care staffing; giving these doctors more face time with their patients would be good for their customers’ health and loyalty if not (always) the bottom line.

Reimbursement for primary care visits could be increased to reflect their value — perhaps by enacting a national primary care fee schedule, so these doctors won’t have to butt heads with insurers. And policymakers could consider forgiving the medical school debt of doctors who choose primary care as a profession.

They deserve support that allows them to do what they were trained to do: diagnosing, treating, and getting to know their patients.


Here is my warning:

The number of primary care physicians in the US is declining, and this trend is reaching a tipping point. More than 100 million Americans don't have usual access to primary care, and this number has nearly doubled since 2014. This shortage of primary care physicians could have a negative impact on public health, as people without access to primary care are more likely to delay or forgo needed care.

Sunday, September 10, 2023

Seeing and sanctioning structural unfairness

Flores-Robles, G., & Gantman, A. P. (2023, June 28).
PsyArXiv

Abstract

People tend to explain wrongdoing as the result of a bad actor or bad system. In five studies (four U.S. online convenience samples, one U.S. representative sample), we tested whether the way people understand unfairness affects how they sanction it. In Pilot 1A (N = 40), people interpreted unfair offers in an economic game as the result of a bad actor (vs. unfair rules), unless incentivized (Pilot 1B, N = 40), which, in Study 1 (N = 370), predicted costly punishment of individuals (vs. changing unfair rules). In Studies 2 (N = 500) and 3 (N = 470, representative of age, gender, and ethnicity in the U.S.), we found that people paid to change the rules for the final round of the game (vs. punished individuals) when they were randomly assigned a bad system (vs. bad actor) explanation for prior identical unfair offers. Explanations for unfairness affect how people sanction it.

Statement of Relevance

Humans are facing massive problems including economic and social inequality. These problems are often framed in the media, and by friends and experts, as a problem either of individual action (e.g., racist beliefs) or of structures (e.g., discriminatory housing laws). The current research uses a context-free economic game to ask whether these explanations have any effect on what people think should happen next. We find that people tend to explain unfair offers in the game in terms of bad actors (unless incentivized), which is related to punishing individuals over changing the game itself. When people are told that the unfairness they witnessed was the result of a bad actor, they prefer to punish that actor; when they are told that the same unfair behavior is the result of unfair rules, they prefer to change the rules. Our understanding of the mechanisms of inequality affects how we want to sanction it.

My summary:

The article discusses how people tend to explain wrongdoing as the result of either a bad actor or a bad system; in essence, this is a basic human decision-making process. The authors conducted five studies to test whether the way people understand unfairness affects how they sanction it. They found that people are more likely to punish individuals for unfair behavior when they believe that the behavior is the result of a bad actor. However, they are more likely to try to change the system (or the rules) when they believe that the behavior is the result of a bad system.

The authors argue that these findings have important implications for ethics, morality, and values. They suggest that we need to be more aware of how we explain unfairness, because our explanations can influence how we respond to it: the way an individual frames the issue shapes both the solutions they consider and the biases they bring to them. They also suggest that we need to be more critical of the systems we live in, because these systems can create unfairness.

The article raises a number of ethical, moral, and value-related questions. For example, what is the responsibility of individuals to challenge unfair systems? What is the role of government in addressing structural unfairness? And what are the limits of individual and collective action in addressing unfairness?

The article does not provide easy answers to these questions. However, it does provide a valuable framework for thinking about unfairness and how we can respond to it.

Thursday, August 31, 2023

It’s not only political conservatives who worry about moral purity

K. Gray, W. Blakey, & N. DiMaggio
psyche.co
Originally posted 13 July 23

Here are two excerpts:

What does this have to do with differences in moral psychology? Well, moral psychologists have suggested that politically charged arguments about sexuality, spirituality and other subjects reflect deep differences in the moral values of liberals and conservatives. Research involving scenarios like this one has seemed to indicate that conservatives, unlike liberals, think that maintaining ‘purity’ is a moral good in itself – which for them might mean supporting what they construe as the ‘sanctity of marriage’, for example.

It may seem strange to think about ‘purity’ as a core driver of political differences. But purity, in the moral sense, is an old concept. It pops up in the Hebrew Bible a lot, in taboos around food, menstruation, and divine encounters. When Moses meets God at the Burning Bush, God says to Moses: ‘Do not come any closer, take off your sandals, for the place where you are standing is holy ground.’ Why does God tell Moses to take off his shoes? Not because his shoes magically hurt God, but because shoes are dirty, and it’s disrespectful to wear your shoes in the presence of the creator of the universe. Similarly, in ancient Greece, worshippers were often required to endure long purification rituals before looking at sacred religious idols or engaging in different spiritual rites. These ancient moral practices seem to reflect an intuition that ‘cleanliness is next to Godliness’.

In the modern era, purity has repeatedly appeared at the centre of political battlegrounds, as in clashes between US conservatives and liberals over sexual education and mores in the 1990s. It was around this time that the psychologist Jonathan Haidt began formulating a theory to help explain the moral divide. Moral foundations theory argues that liberals and conservatives are divided because they rely on distinct moral values, including purity, to different degrees.

(cut)

A harm-focused perspective on moral judgments related to ‘purity’ could help us better understand and communicate with moral opponents. We all grasp the importance of protecting ourselves and our loved ones from harm. Learning that people on the ‘other side’ of a political divide care about questions of purity because they connect these to their understanding of harm can help us empathise with different moral opinions. It is easy for a liberal to dismiss a conservative’s condemnation of dead-chicken sex when it is merely said to be ‘impure’; it is harder to be dismissive if it’s suggested that someone who makes a habit of that behaviour might end up harming people.

Explicitly grounding discussions of morality in perceptions of harm could help us all to be better citizens of a ‘small-L liberal’ society – one in which the right to swing our fists ends where others’ noses begin. If something seems disgusting, impure and immoral to you, take some time to try to articulate the harms you intuitively perceive. Talking about these potential harms may help other people understand where you are coming from. Of course, someone might not share your judgment that harm is being done. But identifying perceived harms at least puts the conversation in terms that everyone understands.


Here is my summary:

The authors define purity as "the state of being free from contamination or pollution." They argue that people on both the left and the right care about purity because they associate it with safety and well-being. They provide examples of how liberals and conservatives alike use purity-related language, such as "desecrate" and "toxic," and they propose a new account of moral judgment on which people care about purity when they perceive that "impure" acts can lead to harm.

Tuesday, August 22, 2023

The (moral) language of hate

Brendan Kennedy et al.
PNAS Nexus, Volume 2,
Issue 7, July 2023, 210

Abstract

Humans use language toward hateful ends, inciting violence and genocide, intimidating and denigrating others based on their identity. Despite efforts to better address the language of hate in the public sphere, the psychological processes involved in hateful language remain unclear. In this work, we hypothesize that morality and hate are concomitant in language. In a series of studies, we find evidence in support of this hypothesis using language from a diverse array of contexts, including the use of hateful language in propaganda to inspire genocide (Study 1), hateful slurs as they occur in large text corpora across a multitude of languages (Study 2), and hate speech on social-media platforms (Study 3). In post hoc analyses focusing on particular moral concerns, we found that the type of moral content invoked through hate speech varied by context, with Purity language prominent in hateful propaganda and online hate speech and Loyalty language invoked in hateful slurs across languages. Our findings provide a new psychological lens for understanding hateful language and points to further research into the intersection of morality and hate, with practical implications for mitigating hateful rhetoric online.

Significance Statement

Only recently have researchers begun to propose that violence and prejudice may have roots in moral intuitions. Can it be the case, we ask, that the act of verbalizing hatred involves a moral component, and that hateful and moral language are inseparable constructs? Across three studies focusing on the language of morality and hate, including historical text analysis of Nazi propaganda, implicit associations across 25 languages, and extremist right-wing communications on social media, we demonstrate that moral language, and specifically, Purity-related language (i.e. language about physical purity, avoidance of disgusting things, and resisting our carnal desires in favor of a higher, divine nature) and Loyalty related language are concomitant with hateful and exclusionary language.

-----------------

Here are some of the key findings of the study:
  • Hateful language is often associated with moral foundations such as purity, loyalty, and authority.
  • The type of moral content invoked through hate speech varies by context.
  • Purity language is prominent in hateful propaganda and online hate speech.
  • Loyalty language is invoked in hateful slurs across languages.
  • Authority language is invoked in hateful rhetoric that targets political figures or institutions.
The study's findings have important implications for understanding and mitigating hate speech.  By understanding the moral foundations that underlie hateful language, we can develop more effective strategies for countering it. For example, we can challenge the moral claims made by hate speech and offer alternative moral frameworks that promote tolerance and understanding.
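
One way to see what "Purity language is prominent in hate speech" means operationally is to compare per-foundation vocabulary rates between text samples. The sketch below uses tiny placeholder lexicons and toy sentences of my own; the study itself relied on established moral-foundations dictionaries and far larger corpora.

```python
import re
from collections import Counter

# Placeholder mini-lexicons; the study used established moral-foundations dictionaries.
LEXICON = {
    "purity":  {"filth", "dirty", "disgusting", "contaminate", "pure", "vile"},
    "loyalty": {"traitor", "betray", "loyal", "outsider", "us", "them"},
}

def foundation_rates(texts):
    """Rate of each foundation's vocabulary per 1,000 words across a set of texts."""
    counts, total = Counter(), 0
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        total += len(tokens)
        for foundation, words in LEXICON.items():
            counts[foundation] += sum(tok in words for tok in tokens)
    return {f: 1000 * counts[f] / max(total, 1) for f in LEXICON}

hateful_sample = ["they are filth and will contaminate us", "the traitor will betray us to them"]
neutral_sample = ["the meeting moved to thursday afternoon", "rain is expected later this week"]

print("hateful:", foundation_rates(hateful_sample))
print("neutral:", foundation_rates(neutral_sample))
```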

Sunday, August 13, 2023

Beyond killing one to save five: Sensitivity to ratio and probability in moral judgment

Ryazanov, A. A., Wang, S. T., et al. (2023).
Journal of Experimental Social Psychology
Volume 108, September 2023, 104499

Abstract

A great deal of current research on moral judgments centers on moral dilemmas concerning tradeoffs between one and five lives. Whether one considers killing one innocent person to save five others to be morally required or impermissible has been taken to determine whether one is appealing to consequentialist or non-consequentialist reasoning. But this focus on tradeoffs between one and five may obscure more nuanced commitments involved in moral decision-making that are revealed when the numbers and ratio of lives to be traded off are varied, and when the probabilities of each outcome occurring are less than certain. Four studies examine participants' reactions to scenarios that diverge in these ways from the standard ones. Study 1 examines the extent to which people are sensitive to the ratio of lives saved to lives ended by a particular action. Study 2 verifies that the ratio rather than the difference between the two values is operative. Study 3 examines whether participants treat probabilistic harm to some as equivalent to certainly harming fewer, holding expected ratio constant. Study 4 explores an analogous issue regarding the sensitivity of probabilistic saving. Participants are remarkably sensitive to expected ratio for probabilistic harms while deviating from expected value for probabilistic saving. Collectively, the studies provide evidence that people's moral judgments are consistent with the principle of threshold deontology.

General discussion

Collectively, our studies show that people are sensitive to expected ratio in moral dilemmas, and that they show this sensitivity across a range of probabilities. The particular kind of sensitivity to expected value participants display is consistent with the view that people's moral judgments are based on one single principle of threshold deontology. If one examines only participants' reactions to a single dilemma with a given ratio, one might naturally tend to sort participants' judgments into consequentialists (the ones who condone killing to save others) or non-consequentialists (the ones who do not). But this can be misleading, as is shown by the result that the number of participants who make judgments consistent with consequentialism in a scenario with ratio of 5:1 decreases when the ratio decreases (as if a larger number of people endorse deontological principles under this lower ratio). The fact that participants make some judgments that are consistent with consequentialism does not entail that these judgments are expressive of a generally consequentialist moral theory. When the larger set of judgments is taken into account, the only theory with which they are consistent is threshold deontology. On this theory, there is a general deontological constraint against killing, but this constraint is overridden when the consequences of inaction are bad enough. The variability across participants suggests that participants have different thresholds of the ratio at which the consequences count as “bad enough” for switching from supporting inaction to supporting action. This is consistent with the wide literature showing that participants' judgments can shift within the same ratio, depending on, for example, how the death of the one is caused.


My summary:

This research provides new insights into how people make moral judgments. It suggests that people are not simply weighing the number of lives saved against the number of lives lost, but that they are also taking into account the ratio of lives saved to lives lost and the probability of each outcome occurring. This research has important implications for our understanding of moral decision-making and for the development of moral education programs.
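
The threshold-deontology interpretation can be written as a simple decision rule: refuse to endorse harm unless the expected ratio of lives saved to lives ended clears a personal threshold. The sketch below is an illustrative formalization with made-up parameter values, not the authors' model.

```python
def endorse_action(lives_saved, lives_ended, p_save=1.0, p_harm=1.0, threshold=5.0):
    """Threshold-deontology rule: act only if the expected saved/ended ratio clears the threshold.

    p_save and p_harm are the probabilities that the saving and the harm actually occur;
    the threshold varies across individuals (here a made-up default of 5:1).
    """
    expected_saved = p_save * lives_saved
    expected_ended = p_harm * lives_ended
    if expected_ended == 0:
        return True                     # no deontological constraint is triggered
    return expected_saved / expected_ended >= threshold

print(endorse_action(5, 1))               # classic 5:1 trolley case -> True at a 5:1 threshold
print(endorse_action(3, 1))               # lower ratio -> False for the same person
print(endorse_action(10, 2, p_harm=0.5))  # probabilistic harm raises the expected ratio -> True
```

Individual differences in moral judgment then correspond to different values of the threshold parameter, and sensitivity to probability enters through the expected values.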