Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Empathy.

Thursday, February 29, 2024

Empathy Trends in American Youth Between 1979 and 2018: An Update

Konrath, S., et al. (2023).
Social Psychological and Personality Science, 0(0).

Abstract

Previous research has found declining dispositional empathy among American youth from 1979 to 2009. We update these trends until 2018, using three datasets. Study 1 presents a cross-temporal meta-analysis of undergraduates’ empathy (Interpersonal Reactivity Index), finding significant cubic trends over time: perspective taking (PT) and empathic concern (EC) both increased since 2009. Study 2 conceptually replicated these findings using nationally representative datasets, also showing increasing PT (Study 2a: American Freshman Survey) and EC (Study 2b: Monitoring the Future Survey) since 2009. We include economic, interpersonal, and worldview covariates to test for potential explanations, finding evidence that empathy trends may be related to recent changes in interpersonal dynamics.


Summary:

Shifting trend: Contrary to earlier studies, researchers found that empathy among college students has increased since 2009 in two key dimensions: perspective taking (understanding another's viewpoint) and empathic concern (sharing another's feelings).

Data sources: The study used three datasets: a meta-analysis of college students' self-reported empathy, a nationally representative survey of freshmen (American Freshman Survey), and another national survey of high school students (Monitoring the Future Survey).

Possible explanations: The reasons for the shift are explored, with potential factors including changes in interpersonal dynamics, increased exposure to diverse perspectives through technology, and growing involvement in social movements emphasizing empathy and social justice.

Overall, the research suggests that the story of empathy in American youth may be more nuanced than previously thought. While earlier studies documented a decline, recent data points towards a possible reversal. Understanding the factors influencing empathy trends is crucial for fostering a more compassionate and connected society.

The study highlights the importance of using multiple data sources and different measurement methods to gain a comprehensive understanding of complex social phenomena.  Further research is needed to confirm the trend and explore its causes in more detail.

Wednesday, February 7, 2024

Listening to bridge societal divides

Santoro, E., & Markus, H. R. (2023).
Current Opinion in Psychology, 54, 101696.

Abstract

The U.S. is plagued by a variety of societal divides across political orientation, race, and gender, among others. Listening has the potential to be a key element in spanning these divides. Moreover, the benefits of listening for mitigating social division have become a culturally popular idea and practice. Recent evidence suggests that listening can bridge divides in at least two ways: by improving outgroup sentiment and by granting outgroup members greater status and respect. When reviewing this literature, we pay particular attention to mechanisms and to boundary conditions, as well as to the possibility that listening can backfire. We also review a variety of current interventions designed to encourage and improve listening at all levels of the culture cycle. The combination of recent evidence and the growing popular belief in the significance of listening heralds a bright future for research on the many ways that listening can defuse stereotypes and improve attitudes underlying intergroup division.

The article is paywalled, which does little to help spread the word.  This information can be very useful in couples and family therapy.  Here are my thoughts:

The idea that listening can help bridge societal divides is a powerful one. When we truly listen to someone from a different background, we open ourselves up to understanding their perspective and experiences. This can help to break down stereotypes and foster empathy.

Benefits of Listening:
  • Reduces prejudice: Studies have shown that listening to people from different groups can help to reduce prejudice. When we hear the stories of others, we are more likely to see them as individuals, rather than as members of a stereotyped group.
  • Builds trust: Listening can help to build trust between people from different groups. When we show that we are willing to listen to each other, we demonstrate that we are open to understanding and respecting each other's views.
  • Finds common ground: Even when people disagree, listening can help them to find common ground. By focusing on areas of agreement, rather than on differences, we can build a foundation for cooperation and collaboration.
Challenges of Listening:

It is important to acknowledge that listening is not always easy. There are a number of challenges that can make it difficult to truly hear and understand someone from a different background. These challenges include:
  • Bias: We all have biases, and these biases can influence the way we listen to others. It is important to be aware of our own biases and to try to set them aside when we are listening to someone else.
  • Distraction: In today's world, there are many distractions that can make it difficult to focus on what someone else is saying. It is important to create a quiet and distraction-free environment when we are trying to have a meaningful conversation with someone.
  • Discomfort: Talking about difficult topics can be uncomfortable. However, it is important to be willing to listen to these conversations, even if they make us feel uncomfortable.
Tips for Effective Listening:
  • Pay attention: Make eye contact and avoid interrupting the speaker.
  • Be open-minded: Try to see things from the speaker's perspective, even if you disagree with them.
  • Ask questions: Ask clarifying questions to make sure you understand what the speaker is saying.
  • Summarize: Briefly summarize what you have heard to show that you were paying attention.
By practicing these tips, we can become more effective listeners and, in turn, help to bridge the divides that separate us.

Wednesday, December 20, 2023

Dehumanization: Beyond the Intergroup to the Interpersonal

Karantzas, G. C., Simpson, J. A., & Haslam, N. (2023).
Current Directions in Psychological Science, 0(0).

Abstract

Over the past two decades, there has been a significant shift in how dehumanization is conceptualized and studied. This shift has broadened the construct from the blatant denial of humanness to groups to include more subtle dehumanization within people’s interpersonal relationships. In this article, we focus on conceptual and empirical advances in the study of dehumanization in interpersonal relationships, with a particular focus on dehumanizing behaviors. In the first section, we describe the concept of interpersonal dehumanization. In the second section, we review social cognitive and behavioral research into interpersonal dehumanization. Within this section, we place special emphasis on the conceptualization and measurement of dehumanizing behaviors. We then propose a conceptual model of interpersonal dehumanization to guide future research. While doing so, we provide a novel review and integration of cutting-edge research on interpersonal dehumanization.

Conclusion

This review shines a spotlight on interpersonal dehumanization, with a specific emphasis on dehumanizing behaviors. Our review highlights that interpersonal dehumanization is a rapidly expanding and innovative field of research. It provides a clearer understanding of the current and emerging directions of research investigating how even subtle forms of negative behavior may, at times, thwart social connection and human bonding. It also provides a theoretical platform for scholars to launch new streams of research on interpersonal dehumanization processes and outcomes.

My summary

Traditionally, dehumanization has been studied in the context of intergroup conflict and prejudice, where individuals or groups are perceived as less human than others. However, recent research has demonstrated that dehumanization can also manifest in interpersonal interactions, affecting how individuals perceive, treat, and interact with each other.

The article argues that interpersonal dehumanization is a prevalent and impactful phenomenon that can have significant consequences for both individuals and relationships. It can lead to reduced empathy, increased hostility, and justification for aggression and violence.

The authors propose a conceptual model of interpersonal dehumanization that identifies three key components:

Dehumanizing Cognitions & Perceptions: The tendency to view others as less human-like, lacking essential human qualities like emotions, thoughts, and feelings.

Dehumanizing Behaviors: Actions or expressions that convey a disregard for another's humanity, such as insults, mockery, or exclusion.

Dehumanizing Consequences: The negative effects of dehumanization on individuals and relationships, including reduced empathy, increased hostility, and justification for aggression.

By understanding the mechanisms and consequences of interpersonal dehumanization, we can better address its prevalence and mitigate its harmful effects. The article concludes by emphasizing the importance of fostering empathy, promoting inclusive environments, and encouraging respectful interactions to combat dehumanization and promote healthy interpersonal relationships.

Thursday, December 14, 2023

Ethical Reasoning vs. Empathic Bias: A False Dichotomy?

Law, K. F., Amormino, P. et al.
(2023, September 5).

Abstract

Does empathy necessarily impede equity in altruism? Emerging findings from cognitive and affective science suggest that rationality and empathy are mutually compatible, contradicting some earlier, prominent arguments that empathy impedes equitable giving. We propose alternative conceptualizations of relationships among empathy, rationality, and equity, drawing on interdisciplinary advances in altruism research.

Here is my summary: 

This article discusses the relationship between ethical reasoning and empathic bias. Ethical reasoning is the process of using logic and reason to make moral decisions. Empathic bias is the tendency to make moral decisions that are influenced by our emotions and our personal relationships with the people involved.

The article argues that these two concepts are often seen as being in opposition to each other, but that this is a false dichotomy. Both ethical reasoning and empathic bias are important for making moral decisions. Ethical reasoning allows us to think about the broader implications of our decisions, while empathic bias allows us to connect with the individuals who are affected by our decisions.

The article concludes by suggesting that we should strive to use both ethical reasoning and empathic bias in our moral decision-making. By doing so, we can make more informed and compassionate decisions.

This article suggests that the Ethical Acculturation Model (EAM) is not widely known among researchers.  The EAM stresses the professional's ability to integrate professional obligations and norms with personal beliefs, values, and morality. Only by blending these components can individuals resolve complex ethical and moral dilemmas.

Monday, June 26, 2023

Characterizing empathy and compassion using computational linguistic analysis

Yaden, D. B., Giorgi, S., et al. (2023). 
Emotion. Advance online publication.

Abstract

Many scholars have proposed that feeling what we believe others are feeling—often known as “empathy”—is essential for other-regarding sentiments and plays an important role in our moral lives. Caring for and about others (without necessarily sharing their feelings)—often known as “compassion”—is also frequently discussed as a relevant force for prosocial motivation and action. Here, we explore the relationship between empathy and compassion using the methods of computational linguistics. Analyses of 2,356,916 Facebook posts suggest that individuals (N = 2,781) high in empathy use different language than those high in compassion, after accounting for shared variance between these constructs. Empathic people, controlling for compassion, often use self-focused language and write about negative feelings, social isolation, and feeling overwhelmed. Compassionate people, controlling for empathy, often use other-focused language and write about positive feelings and social connections. In addition, high empathy without compassion is related to negative health outcomes, while high compassion without empathy is related to positive health outcomes, positive lifestyle choices, and charitable giving. Such findings favor an approach to moral motivation that is grounded in compassion rather than empathy.
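
The phrase "after accounting for shared variance" describes a residualization (partial-correlation) step: each trait is related to language features after removing what it shares with the other trait. Here is a minimal sketch of that idea with made-up variable names and simulated data (the paper's actual pipeline over Facebook posts is far more involved):

    # Illustration only: relate a language feature to empathy while controlling for compassion.
    # Variable names and data are hypothetical, not the authors' pipeline.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    compassion = rng.normal(size=n)
    empathy = 0.6 * compassion + rng.normal(size=n)            # the two traits share variance
    negative_emotion_words = 0.4 * empathy + rng.normal(size=n)  # a stand-in language feature

    # Residualize empathy on compassion (simple least squares), then correlate the residual
    # with the language feature: an "empathy controlling for compassion" association.
    slope, intercept = np.polyfit(compassion, empathy, 1)
    empathy_resid = empathy - (slope * compassion + intercept)
    r = np.corrcoef(empathy_resid, negative_emotion_words)[0, 1]
    print(round(r, 2))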

From the General Discussion

Linguistic topics related to compassion (without empathy) and empathy (without compassion) show clear relationships with four of the five personality factors. Topics related to compassion without empathy are marked by higher conscientiousness, extraversion, agreeableness, and emotional stability. Empathy without compassion topics are more associated with introversion and are also moderately associated with neuroticism and lower conscientiousness.  The association of low emotional stability and conscientiousness is also in line with prior research that found “distress,” a construct with important parallels to empathy, being associated with fleeing from a helping situation (Batson et al., 1987) and with lower helping (Jordan et al., 2016; Schroeder et al., 1988; Twenge et al., 2007; and others).

In sum, it appears that compassion without empathy and empathy without compassion are at least somewhat distinct and have unique predictive validity in personality, health, and prosocial behavior.  While the mechanisms through which these different relationships occur remain unknown, some previous work bears on this issue.  As mentioned, other work has found that merely focusing on others resulted in more intentions to help others (Bloom, 2017; Davis, 1983; Jordan et al., 2016), which helps to explain the relationship between the more other-focused compassion and donation behavior that we observed.


In sum, high empathy without compassion is related to negative health outcomes, while high compassion without empathy is related to positive health outcomes. These findings suggest that compassion may be a more important factor for moral motivation than empathy.  Too much empathy can be overwhelming and undermine high-quality care: care about others' feelings, but don't absorb them.

Saturday, May 20, 2023

ChatGPT Answers Beat Physicians' on Info, Patient Empathy, Study Finds

Michael DePeau-Wilson
MedPage Today
Originally published 28 April 23

The artificial intelligence (AI) chatbot ChatGPT outperformed physicians when answering patient questions, based on quality of response and empathy, according to a cross-sectional study.

Of 195 exchanges, evaluators preferred ChatGPT responses to physician responses in 78.6% (95% CI 75.0-81.8) of the 585 evaluations, reported John Ayers, PhD, MA, of the Qualcomm Institute at the University of California San Diego in La Jolla, and co-authors.

The AI chatbot responses were given a significantly higher quality rating than physician responses (t=13.3, P<0.001), with the proportion of responses rated as good or very good quality (≥4) higher for ChatGPT (78.5%) than physicians (22.1%), amounting to a 3.6 times higher prevalence of good or very good quality responses for the chatbot, they noted in JAMA Internal Medicine.

Furthermore, ChatGPT's responses were rated as being significantly more empathetic than physician responses (t=18.9, P<0.001), with the proportion of responses rated as empathetic or very empathetic (≥4) higher for ChatGPT (45.1%) than for physicians (4.6%), amounting to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.
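
As a quick back-of-the-envelope check (my own arithmetic, not the study's), the reported prevalence ratios follow directly from the percentages above:

    # Ratio of "good or very good" quality ratings: ChatGPT vs. physicians
    print(round(0.785 / 0.221, 1))  # -> 3.6
    # Ratio of "empathetic or very empathetic" ratings: ChatGPT vs. physicians
    print(round(0.451 / 0.046, 1))  # -> 9.8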

"ChatGPT provides a better answer," Ayers told MedPage Today. "I think of our study as a phase zero study, and it clearly shows that ChatGPT wins in a landslide compared to physicians, and I wouldn't say we expected that at all."

He said they were trying to figure out how ChatGPT, developed by OpenAI, could potentially help resolve the burden of answering patient messages for physicians, which he noted is a well-documented contributor to burnout.

Ayers said that he approached this study with his focus on another population as well, pointing out that the burnout crisis might be affecting roughly 1.1 million providers across the U.S., but it is also affecting about 329 million patients who are engaging with overburdened healthcare professionals.

(cut)

"Physicians will need to learn how to integrate these tools into clinical practice, defining clear boundaries between full, supervised, and proscribed autonomy," he added. "And yet, I am cautiously optimistic about a future of improved healthcare system efficiency, better patient outcomes, and reduced burnout."

After seeing the results of this study, Ayers thinks that the research community should be working on randomized controlled trials to study the effects of AI messaging, so that the future development of AI models will be able to account for patient outcomes.

Thursday, March 23, 2023

Are there really so many moral emotions? Carving morality at its functional joints

Fitouchi L., André J., & Baumard N.
To appear in L. Al-Shawaf & T. K. Shackelford (Eds.)
The Oxford Handbook of Evolution and the Emotions.
New York: Oxford University Press.

Abstract

In recent decades, a large body of work has highlighted the importance of emotional processes in moral cognition. Since then, a heterogeneous bundle of emotions as varied as anger, guilt, shame, contempt, empathy, gratitude, and disgust have been proposed to play an essential role in moral psychology.  However, the inclusion of these emotions in the moral domain often lacks a clear functional rationale, generating conflations between merely social and properly moral emotions. Here, we build on (i) evolutionary theories of morality as an adaptation for attracting others’ cooperative investments, and on (ii) specifications of the distinctive form and content of moral cognitive representations. On this basis, we argue that only indignation (“moral anger”) and guilt can be rigorously characterized as moral emotions, operating on distinctively moral representations. Indignation functions to reclaim benefits to which one is morally entitled, without exceeding the limits of justice. Guilt functions to motivate individuals to compensate their violations of moral contracts. By contrast, other proposed moral emotions (e.g. empathy, shame, disgust) appear only superficially associated with moral cognitive contents and adaptive challenges. Shame doesn’t track, by design, the respect of moral obligations, but rather social valuation, the two being not necessarily aligned. Empathy functions to motivate prosocial behavior between interdependent individuals, independently of, and sometimes even in contradiction with the prescriptions of moral intuitions. While disgust is often hypothesized to have acquired a moral role beyond its pathogen-avoidance function, we argue that both evolutionary rationales and psychological evidence for this claim remain inconclusive for now.

Conclusion

In this chapter, we have suggested that a specification of the form and function of moral representations leads to a clearer picture of moral emotions. In particular, it enables a principled distinction between moral and non-moral emotions, based on the particular types of cognitive representations they process. Moral representations have a specific content: they represent a precise quantity of benefits that cooperative partners owe each other, a legitimate allocation of costs and benefits that ought to be, irrespective of whether it is achieved by people’s actual behaviors. Humans intuit that they have a duty not to betray their coalition, that innocent people do not deserve to be harmed, that their partner has a right not to be cheated on. Moral emotions can thus be defined as superordinate programs orchestrating cognition, physiology and behavior in accordance with the specific information encoded in these moral representations.

On this basis, indignation and guilt appear as prototypical moral emotions. Indignation (“moral anger”) is activated when one receives fewer benefits than one deserves, and recruits bargaining mechanisms to enforce the violated moral contract. Guilt, symmetrically, is sensitive to one’s failure to honor one’s obligations toward others, and motivates compensation to provide them the missing benefits they deserve. By contrast, often-proposed “moral” emotions – shame, empathy, disgust – seem not to function to compute distinctively moral representations of cooperative obligations, but serve other, non-moral functions – social status management, interdependence, and pathogen avoidance (Figure 2).

Friday, January 20, 2023

Teaching Empathy to Mental Health Practitioners and Trainees

Ngo, H., Sokolovic, et al. (2022).
Journal of Consulting and Clinical Psychology,
90(11), 851–860.
https://doi.org/10.1037/ccp0000773

Objective:
Empathy is a foundational therapeutic skill and a key contributor to client outcome, yet the best combination of instructional components for its training is unclear. We sought to address this by investigating the most effective instructional components (didactic, rehearsal, reflection, observation, feedback, mindfulness) and their combinations for teaching empathy to practitioners.

Method: 
Studies included were randomized controlled trials targeted to mental health practitioners and trainees, included a quantitative measure of empathic skill, and were available in English. A total of 36 studies (37 samples) were included (N = 1,616). Two reviewers independently extracted data. Data were pooled by using random-effects pairwise meta-analysis and network meta-analysis (NMA).

Results:
Overall, empathy interventions demonstrated a medium-to-large effect (d = .78, 95% CI [.58, .99]). Pairwise meta-analysis showed that one of the six instructional components was effective: didactic (d = .91 vs. d = .39, p = .02). None of the program characteristics significantly impacted intervention effectiveness (group vs. individual format, facilitator type, number of sessions). No publication bias, risk of bias, or outliers were detected. NMA, which allows for an examination of instructional component combinations, revealed didactic, observation, and rehearsal were included among the most effective components to operate in combination.

Conclusions:
We have identified instructional components, singly (didactic) and in combination (didactic, rehearsal, observation), that provide an efficient way to train empathy in mental health practitioners.

What is the public health significance of this article?

Empathy in mental health practitioners is a core skill associated with positive client outcomes, with evidence that it can be trained. This article provides an aggregation of evidence showing that didactic teaching, as well as trainees observing and practicing the skill, are the elements of training that are most important.

From the Discussion

Despite clear evidence on why empathy should be taught to mental health practitioners and how well empathy interventions work in other professionals, there has been no systematic integration on how best empathy should be taught to those working in mental health. Thus, the present study sought to address this important gap by applying pairwise and network meta-analytic analyses. In effect, we were able to elucidate the efficacious “ingredients” for teaching empathy to mental health practitioners as well as the relative superiority of particular combinations of instructional components. Overall, the effect sizes of empathy interventions were in the moderate to large range (d = .78; 95% CI [.55, .99]), which is comparable to previous meta-analyses of randomized controlled trials (RCTs) of empathy interventions within medical students (d = .68, Fragkos & Crampton, 2020), health care practitioners (d = .80, Kiosses et al., 2016; d = .52, Winter et al., 2020), and mixed trainees (adjusted g = .51; Teding van Berkhout & Malouff, 2016). This effect size means that over 78% of those who underwent empathy training will score above the mean of the control group, a result that clearly supports empathy as a trainable skill. 
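
The "over 78%" figure is Cohen's U3, which follows from d under the standard assumption of normally distributed outcomes with equal variances: U3 = Φ(d). A quick check of that claim (my own calculation, not the authors'):

    # Cohen's U3: proportion of the trained group scoring above the control-group mean,
    # assuming normal distributions with equal variance.
    from scipy.stats import norm

    d = 0.78
    print(round(norm.cdf(d) * 100, 1))  # -> 78.2, i.e., roughly 78%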

Sunday, December 18, 2022

Beliefs about humanity, not higher power, predict extraordinary altruism

Amormino, P., O'Connell, et al.
Journal of Research in Personality
Volume 101, December 2022, 104313

Abstract

Using a rare sample of altruistic kidney donors (n = 56, each of whom had donated a kidney to a stranger) and demographically similar controls (n = 75), we investigated how beliefs about human nature correspond to extraordinary altruism. Extraordinary altruists were less likely than controls to believe that humans can be truly evil. Results persisted after controlling for trait empathy and religiosity. Belief in pure good was not associated with extraordinary altruism. We found no differences in the religiosity and spirituality of extraordinary altruists compared to controls. Findings suggest that highly altruistic individuals believe that others deserve help regardless of their potential moral shortcomings. Results provide preliminary evidence that lower levels of cynicism motivate costly, non-normative altruism toward strangers.

Discussion

We found for the first time a significant negative relationship between real-world acts of altruism toward strangers and the belief that humans can be purely evil. Specifically, our results showed that adults who have engaged in costly altruism toward strangers are distinguished from typical adults by their reduced tendency to believe that humans can be purely evil. By contrast, altruists were no more likely than controls to believe that humans can be purely good. These patterns could not be accounted for by demographic differences, differences in self reported empathy, or differences in religious or spiritual beliefs.

This finding could be viewed as paradoxical, in that extraordinary altruists are themselves often viewed as the epitome of pure good—even described as “saints” in the scholarly literature (Henderson et al., 2003).
But our findings suggest that the willingness to provide costly aid for anonymous strangers may not require believing that others are purely good (i.e., that morally infallible people exist), but rather believing that there is at least a little bit of good in everyone. Thus, extraordinary altruists are not overly optimistic about the moral goodness of other people but are willing to act altruistically towards morally imperfect people anyway. Although the concept of “pure evil” is conceptually linked to spiritual phenomena, we did not find any evidence directly linking altruists’ beliefs in evil to spirituality or religion.

 (cut)

Conclusions

Because altruistic kidney donations to anonymous strangers satisfy the most stringent definitions of costly altruism (Clavien & Chapuisat, 2013), the study of these altruists can provide valuable insight into the nature of altruism, much as studying other rare, ecologically valid populations has yielded insights into psychological phenomena such as memory (LePort et al., 2012) and face processing (Russell, Duchaine, & Nakayama, 2009). Results show that altruists report lower belief in pure evil, which extends previous literature showing that higher levels of generalized trust and lower levels of cynicism are associated with everyday prosocial behavior (Turner & Valentine, 2001). Our findings provide preliminary evidence that beliefs about the morality of people in general, and the goodness (or rather, lack of badness) of other humans, may help motivate real-world costly altruistic acts toward strangers.

Monday, November 21, 2022

AI Isn’t Ready to Make Unsupervised Decisions

Joe McKendrick and Andy Thurai
Harvard Business Review
Originally published September 15, 2022

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

Consider the “trolley problem” — a hypothetical social scenario, formulated long before AI came into being, in which a decision has to be made whether to alter the route of an out-of-control streetcar heading towards a disaster zone. The decision that needs to be made — in a split second — is whether to switch from the original track where the streetcar may kill several people tied to the track, to an alternative track where, presumably, a single person would die.

While there are many other analogies that can be made about difficult decisions, the trolley problem is regarded to be the pinnacle exhibition of ethical and moral decision making. Can this be applied to AI systems to measure whether AI is ready for the real world, in which machines can think independently, and make the same ethical and moral decisions, that are justifiable, that humans would make?

Trolley problems in AI come in all shapes and sizes, and decisions don’t necessarily need to be so deadly — though the decisions AI renders could mean trouble for a business, individual, or even society at large. One of the co-authors of this article recently encountered his own AI “trolley moment,” during a stay in an Airbnb-rented house in upstate New Hampshire. Despite amazing preview pictures and positive reviews, the place was poorly maintained and a dump with condemned adjacent houses. The author was going to give the place a low one-star rating and a negative review, to warn others considering a stay.

However, on the second morning of the stay, the host of the house, a sweet and caring elderly woman, knocked on the door, inquiring if the author and his family were comfortable and if they had everything they needed. During the conversation, the host offered to pick up some fresh fruits from a nearby farmers market. She also said she doesn’t have a car, she would walk a mile to a friend’s place, who would then drive her to the market. She also described her hardships over the past two years, as rentals slumped due to Covid and that she is caring for someone sick full time.

Upon learning this, the author elected not to post the negative review. While the initial decision — to write a negative review — was based on facts, the decision not to post the review was purely a subjective human decision. In this case, the trolley problem was concern for the welfare of the elderly homeowner superseding consideration for the comfort of other potential guests.

How would an AI program have handled this situation? Likely not as sympathetically for the homeowner. It would have delivered a fact-based decision without empathy for the human lives involved.

Friday, August 19, 2022

Too cynical to reconnect: Cynicism moderates the effect of social exclusion on prosociality through empathy

B. K. C. Choy, K. Eom, & N. P. Li
Personality and Individual Differences
Volume 178, August 2021, 110871

Abstract

Extant findings are mixed on whether social exclusion impacts prosociality. We propose one factor that may underlie the mixed results: Cynicism. Specifically, cynicism may moderate the exclusion-prosociality link by influencing interpersonal empathy. Compared to less cynical individuals, we expected highly cynical individuals who were excluded to experience less empathy and, consequently, less prosocial behavior. Using an online ball-tossing game, participants were randomly assigned to an exclusion or inclusion condition. Consistent with our predictions, the effect of social exclusion on prosociality through empathy was contingent on cynicism, such that only less-cynical individuals responded to exclusion with greater empathy, which, in turn, was associated with higher levels of prosocial behavior. We further showed this effect to hold for cynicism, but not other similar traits typically characterized by high disagreeableness. Findings contribute to the social exclusion literature by suggesting a key variable that may moderate social exclusion's impact on resultant empathy and prosocial behavior and are consistent with the perspective that people who are excluded try to not only become included again but to establish alliances characterized by reciprocity.

From the Discussion

While others have proposed that empathy may be reflexively inhibited upon exclusion (DeWall & Baumeister, 2006; Twenge et al., 2007), our findings indicate that this process of inhibition—at least for empathy—may be more flexible than previously thought. If reflexive, individuals would have shown a similar level of empathy regardless of cynicism. That highly- and less-cynical individuals displayed different levels of empathy indicates that some other processes are in play. Our interpretation is that the process through which empathy is exhibited or inhibited may depend on one’s appraisals of the physical and social situation. 

Importantly, unlike cynicism, other similarly disagreeable dispositional traits such as Machiavellianism, psychopathy, and SDO (Social Dominance Orientation) did not modulate the empathy-mediated link between social exclusion and prosociality. This suggests that cynicism is conceptually different from other traits of a seemingly negative nature. Indeed, whereas cynics may hold a negative view of the intentions of others around them, Machiavellians are characterized by a negative view of others’ competence and a pragmatic and strategic approach to social interactions (Jones, 2016). Similarly, whereas cynics view others’ emotions as ingenuine, psychopathic individuals are further distinguished by their high levels of callousness and impulsivity (Paulhus, 2014). Likewise, whereas cynics may view the world as inherently competitive, they may not display the same preference for hierarchy that high-SDO individuals do (Ho et al., 2015). Thus, despite the similarities between these traits, our findings affirm their substantive differences from cynicism.

Monday, May 16, 2022

Exploring the Association between Character Strengths and Moral Functioning

Han, H., Dawson, K. J., et al. 
(2022, April 6). PsyArXiv
https://doi.org/10.1080/10508422.2022.2063867

Abstract

We explored the relationship between 24 character strengths measured by the Global Assessment of Character Strengths (GACS), which was revised from the original VIA instrument, and moral functioning comprising postconventional moral reasoning, empathic traits and moral identity. Bayesian Model Averaging (BMA) was employed to explore the best models, which were more parsimonious than full regression models estimated through frequentist regression, predicting moral functioning indicators with the 24 candidate character strength predictors. Our exploration was conducted with a dataset collected from 666 college students at a public university in the Southern United States. Results showed that character strengths as measured by GACS partially predicted relevant moral functioning indicators. Performance evaluation results demonstrated that the best models identified by BMA performed significantly better than the full models estimated by frequentist regression in terms of AIC, BIC, and cross-validation accuracy. We discuss theoretical and methodological implications of the findings for future studies addressing character strengths and moral functioning.
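
For readers unfamiliar with the method, Bayesian Model Averaging over candidate predictor subsets is often approximated with BIC weights: each candidate model's posterior probability is proportional to exp(-BIC/2) under uniform model priors, and a predictor's posterior inclusion probability is the summed weight of the models that contain it. Below is a minimal sketch of that idea with simulated data and a small, hypothetical predictor set (the study used the full 24-strength GACS and dedicated BMA software with model-search algorithms rather than exhaustive enumeration):

    # Toy BIC-based approximation to Bayesian Model Averaging (illustration only).
    # With many predictors, exhaustive enumeration is infeasible; real BMA tools
    # use search strategies such as MC3 or Occam's window instead.
    from itertools import combinations
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    strengths = {name: rng.normal(size=n) for name in ["wisdom", "curiosity", "kindness", "gratitude"]}
    moral_outcome = 0.5 * strengths["wisdom"] + 0.3 * strengths["kindness"] + rng.normal(size=n)

    names = list(strengths)
    X_full = np.column_stack([strengths[name] for name in names])

    # Fit every non-empty subset of predictors and record its BIC.
    models = []
    for k in range(1, len(names) + 1):
        for subset in combinations(range(len(names)), k):
            X = sm.add_constant(X_full[:, subset])
            fit = sm.OLS(moral_outcome, X).fit()
            models.append((subset, fit.bic))

    # Posterior model probabilities approximated by BIC weights (uniform model priors).
    bics = np.array([bic for _, bic in models])
    weights = np.exp(-0.5 * (bics - bics.min()))
    weights /= weights.sum()

    # Posterior inclusion probability for each predictor.
    for i, name in enumerate(names):
        pip = weights[[i in subset for subset, _ in models]].sum()
        print(name, round(pip, 2))

With simulated data like this, the predictors that actually generate the outcome (here, wisdom and kindness) should end up with inclusion probabilities near 1, while the irrelevant ones stay low.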

From the Discussion

Although the postconventional reasoning was relatively weakly associated with character strengths, several character strengths were still significantly associated with it. We were able to discover its association with several character strengths, particularly those within the domain of intellectual ability. One possible explanation is that intellectual strengths enable people to evaluate moral issues from diverse perspectives and appreciate moral values and principles beyond existing conventions and norms (Kohlberg, 1968). Having such intellectual strengths can thus allow them to engage in sophisticated moral reasoning. For instance, wisdom, judgment, and curiosity demonstrated positive correlation with postconventional reasoning as Han (2019) proposed.  Another possible explanation is that the DIT focuses on hypothetical, abstract moral reasoning, instead of decision making in concrete situations (Rest et al., 1999b). Therefore, the emergence of positive association between intellectual strengths and postconventional moral reasoning in the current study is plausible.

The trend of positive relationships between character strengths and moral functioning indicators was also reported from best model exploration through BMA.  First, postconventional reasoning was best predicted by intellectual strengths, curiosity, and wisdom, plus kindness. Second, EC was positively predicted by love, kindness, and gratitude. Third, PT was positively associated with wisdom and gratitude in the best model. Fourth, moral internalization was positively predicted by kindness and gratitude.

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N.,  et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems to parameterize human values like harm and flourishing, and avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy, teaching AI’s to decode human feelings and behavior, and evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others), but crucially lacking harm aversion and empathic concern arising from vulnerability, embodiment, and affective empathy (which permits for shared experience). An expanding, ubiquitous intelligence that does not have a means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.

If successful, work of this kind could allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scaleable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.

Thursday, February 3, 2022

Neural computations in children’s third-party interventions are modulated by their parents’ moral values

Kim, M., Decety, J., Wu, L. et al.
npj Sci. Learn. 6, 38 (2021). 
https://doi.org/10.1038/s41539-021-00116-5

Abstract

One means by which humans maintain social cooperation is through intervention in third-party transgressions, a behaviour observable from the early years of development. While it has been argued that pre-school age children’s intervention behaviour is driven by normative understandings, there is scepticism regarding this claim. There is also little consensus regarding the underlying mechanisms and motives that initially drive intervention behaviours in pre-school children. To elucidate the neural computations of moral norm violation associated with young children’s intervention into third-party transgression, forty-seven preschoolers (average age 53.92 months) participated in a study comprising of electroencephalographic (EEG) measurements, a live interaction experiment, and a parent survey about moral values. This study provides data indicating that early implicit evaluations, rather than late deliberative processes, are implicated in a child’s spontaneous intervention into third-party harm. Moreover, our findings suggest that parents’ values about justice influence their children’s early neural responses to third-party harm and their overt costly intervention behaviour.

From the Discussion

Our study further provides evidence that children, as young as 3 years of age, can enact costly third-party intervention by protesting and reporting. Previous research has shown that young children from age 3 enact third-party punishment to transgressors shown in video or puppets [9, 10]. In the present study, in the context of a real-life transgression experiment, even the youngest participant (41 months old) engaged in costly intervention, by hinting disapproval to the adult transgressor (why are you doing that?) and subsequently reporting the damage when being prompted. During the experiment, confounding factors such as a sense of ‘responsibility’ were avoided by keeping the person playing the ‘research assistant’ role out of the room when the transgression occurred. Furthermore, when leaving the room, the ‘research assistant’ did not assign the children any special role to police or monitor the actions of the ‘visitor’ (who would transgress). Moreover, the transgressor was not an acquaintance of the child, and the book was said to belong to a university (not a child’s school nor researchers), hence giving little sense of in-group/out-group membership [11, 60]. Also, the participating children would likely attribute ‘power’ and ‘authority’ to the visitor/transgressor, as an adult [26]. Nevertheless, in the real-life experimental context, 34.8% of children explicitly protested to the adult wrong-doer.

(cut)

It should be emphasized that parents’ cognitive empathy was not implicated in the child’s neural computations of moral norms or their spontaneous intervention behaviour. However, parents’ cognitive empathy had a positive correlation with a child’s effortful control and their subsequent report behaviour. This distinct contribution made by two different dispositions (cognitive empathy and justice sensitivity) suggests that parenting strategies necessary to enhance a child’s moral development require both aspects: perspective-taking and understanding of moral values.

Tuesday, October 19, 2021

Why Empathy Is Not a Reliable Source of Information in Moral Decision Making

Decety, J. (2021).
Current Directions in Psychological Science. 
https://doi.org/10.1177/09637214211031943

Abstract

Although empathy drives prosocial behaviors, it is not always a reliable source of information in moral decision making. In this essay, I integrate evolutionary theory, behavioral economics, psychology, and social neuroscience to demonstrate why and how empathy is unconsciously and rapidly modulated by various social signals and situational factors. This theoretical framework explains why decision making that relies solely on empathy is not ideal and can, at times, erode ethical values. This perspective has social and societal implications and can be used to reduce cognitive biases and guide moral decisions.

From the Conclusion

Empathy can encourage overvaluing some people and ignoring others, and privileging one over many. Reasoning is therefore essential to filter and evaluate emotional responses that guide moral decisions. Understanding the ultimate causes and proximate mechanisms of empathy allows characterization of the kinds of signals that are prioritized and identification of situational factors that exacerbate empathic failure. Together, this knowledge is useful at a theoretical level, and additionally provides practical information about how to reframe situations to activate alternative evolved systems in ways that promote normative moral conduct compatible with current societal aspirations. This conceptual framework advances current understanding of the role of empathy in moral decision making and may inform efforts to correct personal biases. Becoming aware of one’s biases is not the most effective way to manage and mitigate them, but empathy is not something that can be ignored. It has an adaptive biological function, after all.

Monday, October 18, 2021

Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots

M. Asada
Philosophies 2019, 4(3), 38
https://doi.org/10.3390/philosophies4030038

Abstract

In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots (artificial systems). In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system (MNS) that promotes the emergence of the concept of self (and others) scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in a broad sense is shown, followed by the limitation of the current progress of artificial intelligence (AI), focusing on deep learning. Next, artificial pain is introduced, along with its architectures in the early stage of self-inflicted experiences of pain, and later, in the sharing stage of the pain between self and others. Then, cognitive developmental robotics (CDR) is revisited for two important concepts—physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing studies of CDR are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral agents is addressed.

Discussion

To tackle the issue of consciousness, this study attempted to represent it as a phenomenon of the developmental process of artificial empathy for pain and moral behavior generation. The conceptual model for the former is given by, while the latter is now a story of fantasy. If a robot is regarded as a moral being that is capable of exhibiting moral behavior with others, is it deserving of receiving moral behavior from them? If so, can we agree that such robots have conscious minds? This is an issue of ethics towards robots, and is also related to the legal system. Can we ask such robots to accept a sort of responsibility for any accident they commit? If so, how? These issues arise when we introduce robots who are qualified as a moral being with conscious minds into our society.

Before these issues can be considered, there are so many technical issues to address. Among them, the following should be addressed intensively.
  1. Associate the sensory discrimination of pain with the affective and motivational responses to pain (the construction of the pain matrix and memory dynamics).
  2. Recall the experience when a painful situation of others is observed.
  3. Generate appropriate behavior to reduce the pain.

Friday, August 27, 2021

It’s hard to be a moral person. Technology is making it harder.

Sigal Samuel
vox.com
Originally posted 3 Aug 21

Here is an excerpt:

People who point out the dangers of digital tech are often met with a couple of common critiques. The first one goes like this: It’s not the tech companies’ fault. It’s users’ responsibility to manage their own intake. We need to stop being so paternalistic!

This would be a fair critique if there were symmetrical power between users and tech companies. But as the documentary The Social Dilemma illustrates, the companies understand us better than we understand them — or ourselves. They’ve got supercomputers testing precisely which colors, sounds, and other design elements are best at exploiting our psychological weaknesses (many of which we’re not even conscious of) in the name of holding our attention. Compared to their artificial intelligence, we’re all children, Harris says in the documentary. And children need protection.

Another critique suggests: Technology may have caused some problems — but it can also fix them. Why don’t we build tech that enhances moral attention?

“Thus far, much of the intervention in the digital sphere to enhance that has not worked out so well,” says Tenzin Priyadarshi, the director of the Dalai Lama Center for Ethics and Transformative Values at MIT.

It’s not for lack of trying. Priyadarshi and designers affiliated with the center have tried creating an app, 20 Day Stranger, that gives continuous updates on what another person is doing and feeling. You get to know where they are, but never find out who they are. The idea is that this anonymous yet intimate connection might make you more curious or empathetic toward the strangers you pass every day.

They also designed an app called Mitra. Inspired by Buddhist notions of a “virtuous friend” (kalyāṇa-mitra), it prompts you to identify your core values and track how much you acted in line with them each day. The goal is to heighten your self-awareness, transforming your mind into “a better friend and ally.”

I tried out this app, choosing family, kindness, and creativity as the three values I wanted to track. For a few days, it worked great. Being primed with a reminder that I value family gave me the extra nudge I needed to call my grandmother more often. But despite my initial excitement, I soon forgot all about the app.

Wednesday, August 11, 2021

The sympathetic plot, its psychological origins, and implications for the evolution of fiction

Singh, M. (2021). 
Emotion Review, 13(3), 183–198.
https://doi.org/10.1177/17540739211022824

Abstract

For over a century, scholars have compared stories and proposed universal narrative patterns. Despite their diversity, nearly all of these projects converged on a common structure: the sympathetic plot. The sympathetic plot describes how a goal-directed protagonist confronts obstacles, overcomes them, and wins rewards. Stories with these features frequently exhibit other common elements, including an adventure and an orphaned main character. Here, I identify and aim to explain the sympathetic plot. I argue that the sympathetic plot is a technology for entertainment that works by engaging two sets of psychological mechanisms. First, it triggers mechanisms for learning about obstacles and how to overcome them. It builds interest by confronting a protagonist with a problem and induces satisfaction when the problem is solved. Second, it evokes sympathetic joy. It establishes the protagonist as an ideal cooperative partner pursuing a justifiable goal, convincing audiences that they should assist the character. When the protagonist succeeds, they receive rewards, and audiences feel sympathetic joy, an emotion normally triggered when cooperative partners triumph. The psychological capacities underlying the sympathetic plot are not story-specific adaptations. Instead, they evolved for purposes like learning and cooperation before being co-opted for entertainment by storytellers and cultural evolution.

Summary

Why do people everywhere tell stories about abused stepdaughters who marry royalty and revel in awarded riches? Whence all the virtuous orphans? The answer, I have argued, is entertainment. Tales in which a likable main character overcomes difficulty and reaps rewards create a compelling cognitive dreamscape. They twiddle psychological mechanisms involved in learning and cooperation, narrowing attention and inducing sympathetic joy. Story imitates life, or at least the elements of life to which we’ve evolved pleasurable responses.


Note: Our patients often narrate the stories of their lives.  Narrative patterns may help psychologists understand patients' internal motivations, how they view their life trajectories, and how we can help them alter their storylines to improve mental health.

Tuesday, June 22, 2021

Against Empathy Bias: The Moral Value of Equitable Empathy

Fowler, Z., Law, K. F., & Gaesser, B.
Psychological Science, 32(5), 766–779.

Abstract

Empathy has long been considered central in living a moral life. However, mounting evidence has shown that empathy is often biased towards (i.e., felt more strongly for) close and similar others, igniting a debate over whether empathy is inherently morally flawed and should be abandoned in efforts to strive towards greater equity. This debate has focused on whether empathy limits the scope of our morality, with little consideration of whether it may be our moral beliefs limiting our empathy. Across two studies conducted on Amazon’s Mechanical Turk (N= 604), we investigate moral judgments of biased and equitable feelings of empathy. We observed a moral preference for empathy towards socially close over distant others. However, feeling equal empathy for all is seen as the most morally and socially valuable. These findings provide new theoretical insight into the relationship between empathy and morality with implications for navigating towards a more egalitarian future.

General Discussion

The present studies investigated moral judgments of socially biased and equitable feelings of empathy in hypothetical vignettes. The results showed that moral judgments of empathy are biased towards preferring more empathy for a socially close over socially distant individual. Despite this bias in moral judgments, however, people consistently judged feeling equal empathy as the most morally right. These findings generalized from judgments of others’ empathy for targets matched on objective social distance to judgments of one’s own empathy for targets that were personally-tailored and matched on subjective social distance across subjects.  Further, participants most desired to affiliate with someone who felt equal empathy. We also found that participants’ desire to affiliate with the actor in the vignette mirrored their moral judgments of empathy.

Saturday, May 8, 2021

When does empathy feel good?

Ferguson, A. M., Cameron, D., & Inzlicht, M. 
(2021, March 12). 
https://doi.org/10.31234/osf.io/nfuz2

Abstract

Empathy has many benefits. When we are willing to empathize, we are more likely to act prosocially (and receive help from others in the future), to have satisfying relationships, and to be viewed as moral actors. Moreover, empathizing in certain contexts can actually feel good, regardless of the content of the emotion itself—for example, we might feel a sense of connection after empathizing with and supporting a grieving friend. Does this feeling come from empathy itself, or from its real and implied consequences? We suggest that the rewards that flow from empathy confound our experience of it, and that the pleasant feelings associated with engaging empathy are extrinsically tied to the results of some action, not to the experience of empathy itself. When we observe people’s decisions related to empathy in the absence of these acquired rewards, as we can in experimental settings, empathy appears decidedly less pleasant.