Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Sunday, September 7, 2025

Meaningful Psychedelic Experiences Predict Increased Moral Expansiveness

Olteanu, W., & Moreton, S. G. (2025).
Journal of Psychoactive Drugs, 1–9.

Abstract

There has been growing interest in understanding the psychological effects of psychedelic experiences, including their potential to catalyze significant shifts in moral cognition. This retrospective study examines how meaningful psychedelic experiences are related to changes in moral expansiveness and investigates the role of acute subjective effects as predictors of these changes. We found that meaningful psychedelic experiences were associated with self-reported increases in moral expansiveness. Changes in moral expansiveness were positively correlated with reported mystical experiences, ego dissolution, as well as feeling moved and admiration during the experience. Additionally, heightened moral expansiveness was associated with longer term shifts in the propensity to experience the self-transcendent positive emotions of admiration and awe. Future research should further investigate the mechanisms underlying these changes and explore how different types of psychedelic experiences might influence moral decision-making and behavior over time.

Here are some thoughts:

This article explores the relationship between psychedelic experiences and shifts in moral cognition, specifically moral expansiveness—the extent to which individuals extend moral concern to a broader range of entities, including humans, animals, and nature. The study found that meaningful psychedelic experiences were associated with self-reported increases in moral expansiveness, with these changes linked to acute subjective effects such as mystical experiences, ego dissolution, and self-transcendent emotions like admiration and awe. The research suggests that psychedelics may facilitate profound shifts in moral attitudes by fostering feelings of interconnectedness and unity, which endure beyond the experience itself.

This study is important for practicing psychologists as it highlights the potential therapeutic and transformative effects of psychedelics on moral and ethical perspectives. Understanding these mechanisms can inform therapeutic approaches, particularly for clients struggling with rigid moral boundaries, lack of empathy, or disconnection from others and the environment. The findings also underscore the role of self-transcendent emotions in promoting prosocial behaviors and well-being, offering insights into interventions that could cultivate such emotions. However, psychologists must approach this area cautiously, considering the legal and ethical implications of psychedelic use, and remain informed about emerging research to guide clients responsibly. The study opens avenues for further exploration into how psychedelic-assisted therapy might address moral and relational challenges in clinical practice.

Thursday, July 24, 2025

The uselessness of AI ethics

Munn, L. (2022).
AI and Ethics, 3(3), 869–877.

Abstract

As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.

Here are some thoughts:

This paper is important for multiple reasons. First, it critically examines how artificial intelligence—increasingly embedded in areas like healthcare, education, law enforcement, and social services—can perpetuate racial, gendered, and socioeconomic biases, often under the guise of neutrality and objectivity. These systems can influence or even determine outcomes in mental health diagnostics, hiring practices, criminal justice risk assessments, and educational tracking, all of which have profound psychological implications for individuals and communities. Psychologists, particularly those working in clinical, organizational, or forensic fields, must understand how these technologies shape behavior, identity, and access to resources.

Second, the article highlights how ethical principles guiding AI development are often vague, inconsistently applied, and disconnected from real-world impacts. This raises concerns about the psychological effects of deploying systems that claim to promote fairness or well-being but may actually deepen inequalities or erode trust in institutions. For psychologists involved in policy-making or advocacy, this underscores the need to push for more robust, evidence-based frameworks that consider human behavior, cultural context, and systemic oppression.

Finally, the piece calls attention to the broader sociopolitical systems in which AI operates, urging a shift from abstract ethical statements to concrete actions that address structural inequities. This aligns with growing interest in community psychology and critical approaches that emphasize social justice and the importance of centering marginalized voices. Ultimately, understanding the limitations and risks of current AI ethics frameworks allows psychologists to better advocate for humane, equitable, and psychologically informed technological practices.

Wednesday, July 16, 2025

The moral blueprint is not necessary for STEM wisdom

Kachhiyapatel, N., & Grossmann, I. (2025, June 11).
PsyArXiv

Abstract

How can one bring wisdom into STEM education? One popular position holds that wise judgment follows from teaching morals and ethics in STEM. However, wisdom scholars debate the causal role of morality and whether cultivating a moral blueprint is a necessary condition for wisdom. Some philosophers and education scientists champion this view, whereas social psychologists and cognitive scientists argue that moral features like prosocial behavior are reinforcing factors or outcomes of wise judgment rather than pre-requisites. This debate matters particularly for science and technology, where wisdom-demanding decisions typically involve incommensurable values and radical uncertainty. Here, we evaluate these competing positions through four lines of evidence. First, empirical research shows that heightened moralization aligns with foolish rejection of scientific claims, political polarization, and value extremism. Second, economic scholarship on folk theorems demonstrates that wisdom-related metacognition—perspective-integration, context-sensitivity, and balancing long- and short-term goals—can give rise to prosocial behavior without an a priori moral blueprint. Third, in real life, moral values often compete, making metacognition indispensable to balance competing interests for the common good. Fourth, numerous scientific domains require wisdom yet operate beyond moral considerations. We address potential objections about immoral and Machiavellian applications of blueprint-free wisdom accounts. Finally, we explore implications for giftedness: what exceptional wisdom looks like in STEM contexts, and how to train it. Our analysis suggests that STEM wisdom emerges not from prescribed moral codes but from metacognitive skills that enable navigation of complexity and uncertainty.

Here are some thoughts:

This article challenges the idea that wisdom in STEM and other complex domains requires a fixed moral blueprint. Instead, it highlights perspectival metacognition—skills like perspective-taking, intellectual humility, and balancing short- and long-term outcomes—as the core of wise judgment.

For psychologists, this suggests that strong moral convictions alone can sometimes impair wisdom by fostering rigidity or polarization. The findings support a shift in ethics training, supervision, and professional development toward cultivating reflective, context-sensitive thinking. Rather than relying on standardized assessments or fixed values, fostering metacognitive skills may better prepare psychologists and their clients to navigate complex, high-stakes decisions with wisdom and flexibility.

Friday, July 4, 2025

The Psychology of Moral Conviction

Skitka, L. J., et al. (2020).
Annual Review of Psychology, 72(1), 347–366.

Abstract

This review covers theory and research on the psychological characteristics and consequences of attitudes that are experienced as moral convictions, that is, attitudes that people perceive as grounded in a fundamental distinction between right and wrong. Morally convicted attitudes represent something psychologically distinct from other constructs (e.g., strong but nonmoral attitudes or religious beliefs), are perceived as universally and objectively true, and are comparatively immune to authority or peer influence. Variance in moral conviction also predicts important social and political consequences. Stronger moral conviction about a given attitude object, for example, is associated with greater intolerance of attitude dissimilarity, resistance to procedural solutions for conflict about that issue, and increased political engagement and volunteerism in that attitude domain. Finally, we review recent research that explores the processes that lead to attitude moralization; we integrate these efforts and conclude with a new domain theory of attitude moralization.

Here are some thoughts:

The article provides valuable insights into how individuals perceive and process attitudes grounded in fundamental beliefs about right and wrong. It distinguishes morally convicted attitudes from other constructs, such as strong but nonmoral attitudes or religious beliefs, by highlighting that moral convictions are viewed as universally and objectively true and are relatively resistant to authority or peer influence. These convictions often lead to significant social and political consequences, including intolerance of differing views, resistance to compromise, increased political engagement, and heightened emotional responses. The article also explores the processes of attitude moralization—how an issue becomes infused with moral significance—and demoralization, offering a domain theory of attitude moralization that suggests different pathways depending on whether the initial attitude is perceived as a preference, convention, or existing moral imperative.

This knowledge is critically important to practicing psychologists because it enhances their understanding of how moral convictions shape behavior, decision-making, and interpersonal dynamics. For instance, therapists working with clients on issues involving conflict resolution, values clarification, or behavioral change must consider the role of moral conviction in shaping resistance to persuasion or difficulty in compromising. Understanding moral conviction can also aid psychologists in navigating cultural differences, addressing polarization in group settings, and promoting tolerance by recognizing how individuals intuitively perceive certain issues as moral. Furthermore, as society grapples with increasingly divisive sociopolitical challenges—such as climate change, immigration, and public health crises—psychologists can use these insights to foster dialogue, reduce moral entrenchment, and encourage constructive engagement. Ultimately, integrating the psychology of moral conviction into practice allows for more nuanced, empathetic, and effective interventions across clinical, organizational, and community contexts.

Monday, June 30, 2025

Neural Processes Linking Interoception to Moral Preferences Aligned with Group Consensus

Kim, J., & Kim, H. (2025).
Journal of Neuroscience, e1114242025.

Abstract

Aligning one’s decisions with the prevailing norms and expectations of those around us constitutes a fundamental facet of moral decision-making. When faced with conflicting moral values, one adaptive approach is to rely on intuitive moral preference. While there has been theoretical speculation about the connection between moral preference and an individual’s awareness of introspective interoceptive signals, it has not been empirically examined. This study examines the relationships between individuals’ preferences in moral dilemmas and interoception, measured with self-report, heartbeat detection task, and resting-state fMRI. Two independent experiments demonstrate that both male and female participants’ interoceptive awareness and accuracy are associated with their moral preferences aligned with group consensus. In addition, the fractional occupancies of the brain states involving the ventromedial prefrontal cortex and the precuneus during rest mediate the link between interoceptive awareness and the degree of moral preferences aligned to group consensus. These findings provide empirical evidence of the neural mechanism underlying the link between interoception and moral preferences aligned with group consensus.

Significance statement

We investigate the intricate link between interoceptive ability to perceive internal bodily signals and decision-making when faced with moral dilemmas. Our findings reveal a significant correlation between the accuracy and awareness of interoceptive signals and the degree of moral preferences aligned with group consensus. Additionally, brain states involving the ventromedial prefrontal cortex and precuneus during rest mediate the link between interoceptive awareness and moral preferences aligned with group consensus. These findings provide empirical evidence that internal bodily signals play a critical role in shaping our moral intuitions according to others’ expectations across various social contexts.

Here are some thoughts:

A recent study highlighted that our moral decisions may be influenced by our body's internal signals, particularly our heartbeat. Researchers found that individuals who could accurately perceive their own heartbeats tended to make moral choices aligning with the majority, regardless of whether those choices were utilitarian or deontological. This implies that bodily awareness might unconsciously guide us toward socially accepted norms. Brain scans supported this, showing increased activity in areas associated with evaluation and judgment, like the medial prefrontal cortex, in those more attuned to their internal signals. While the study's participants were exclusively Korean college students, limiting generalizability, the findings open up intriguing possibilities about the interplay between bodily awareness and moral decision-making.
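
A quick technical aside: the "heartbeat detection task" mentioned in the abstract is commonly scored as counting accuracy. Below is a minimal sketch in Python, assuming a Schandry-style heartbeat counting task with made-up numbers; the paper's exact task and scoring may differ:

import statistics

def interoceptive_accuracy(recorded: int, counted: int) -> float:
    # Schandry-style score: 1.0 = perfect counting, lower = less accurate
    return 1 - abs(recorded - counted) / recorded

# Hypothetical trials: (heartbeats actually recorded, heartbeats the participant counted)
trials = [(35, 30), (45, 44), (60, 48)]
scores = [interoceptive_accuracy(r, c) for r, c in trials]
print(round(statistics.mean(scores), 2))  # ≈ 0.88 for these illustrative numbers

Higher scores on measures of this kind are what the authors report as tracking moral preferences aligned with group consensus.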

Tuesday, May 6, 2025

Patriotic morality: links between conventional patriotism, glorification, constructive patriotism, and moral values and decisions

Kołeczek, M., Sekerdej, M., et al. (2025).
Self and Identity, 1–22.

Abstract

To test the moral critique of patriotism, we explored patriots’ moral values and choices. Study 1 (N = 1,062) examined the links between three types of patriotism – conventional patriotism, glorification of the nation, and constructive patriotism – and moral values. Glorification was positively linked with binding values, but negatively with fairness. Conventional patriotism was positively linked with harm, loyalty, and authority and constructive patriotism with harm, fairness, and loyalty. Study 2 (N = 1,041) examined the links between patriotism and moral decisions. We presented participants with political dilemmas that required choosing one moral value over another. Glorification was linked with choosing binding over individualizing values. Conventional patriotism was linked with choosing authority over individualizing values and individualizing values over loyalty.

Here are some thoughts:

A study examined the moral dimensions of patriotism, finding that different types carry varying moral implications. Glorification, prioritizing loyalty and authority, correlates with decreased concern for fairness and harm prevention. Conventional patriotism relates to both loyalty and harm prevention without clear preference. Constructive patriotism uniquely associates with fairness. The study suggests uncritical, nationalistic patriotism can overshadow individual welfare and fair treatment.

Saturday, April 26, 2025

Culture points the moral compass: Shared basis of culture and morality

Matsuo, A., & Brown, C. M. (2022).
Culture and Brain, 10(2), 113–139.

Abstract

The present work reviews moral judgment from the perspective of culture. Culture is a dynamic system of human beings interacting with their environment, and morality is both a product of this system and a means of maintaining it. When members of a culture engage in moral judgment, they communicate their “social morality” and gain a reputation as a productive member who contributes to the culture’s prosperity. People in different cultures emphasize different moral domains, which is often understood through the individualism-collectivism distinction that is widely utilized in cultural psychology. However, traditional morality research lacks the interactive perspective of culture, where people communicate with shared beliefs about what is good or bad. As a consequence, past work has had numerous limitations and even potential confounds created by methodologies that are grounded in the perspective of WEIRD (i.e., Western, Educated, Industrialized, Rich and Democratic) cultures. Great attention should be paid to the possibly misleading assumption that researchers and participants share the same understanding of the stimuli. We must address this bias in sampling and in the minds of researchers and better clarify the concept of culture in intercultural morality research. The theoretical and practical findings from research on culture can then contribute to a better understanding of the mechanisms of moral judgment.

The article is paywalled. So, I tried to give more of a summary. Here it is:

This article discusses moral judgment from a cultural perspective. The authors argue that morality is a product of culture and helps to maintain it. They claim that people from different cultures emphasize different moral domains, which is often understood using the individualism-collectivism distinction. The authors also suggest that traditional morality research lacks an interactive perspective of culture, where people communicate shared beliefs about what is good or bad, and that this past research has had limitations and potential confounds due to methodologies that are grounded in WEIRD cultures.    

The authors discuss theories of moral judgment, including Lawrence Kohlberg’s theory of stages of moral development, the social intuitionist model, and moral pluralism. They claim that moral judgment is a complex process involving self-recognition, social cognition, and decision-making and that the brain is designed to process multiple moralities in different ways. They also explore the social function of morality, stating that behaving morally according to the standards of one’s group helps people be included in the group, and moral norms are used to identify desirable and undesirable group membership.    

In a significant part of the article, the authors discuss the concept of culture, defining it as a structured system of making sense of the environment, which shapes individuals in order to fit into their environment. They explain that the need to belong is a basic human motivation, and people form groups as a means of survival and reproduction. Norms applied to a particular group regulate group members’ behaviors, and culture emerges from these norms. The authors use the individualism-collectivism dimension, a common concept in cultural psychology, to explain how people from different cultures perceive and interpret the world in different ways. They claim that culture is a dynamic interaction between humans and their environment and that moral judgment achieves its social function because people assume that ingroup members share common representations of what is right or wrong. 

Tuesday, April 22, 2025

Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI

Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022).
Journal of Business Ethics, 178(4), 1027–1041.

Abstract

Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment.

Here are some thoughts:

If you watched the TV series Westworld on HBO, then this research makes a great deal more sense.

This study investigates how individuals morally behave toward AI agents and self-service machines, specifically examining individuals' moral concerns and behaviors when interacting with technology versus humans in a retail setting. The research demonstrates that moral intention, such as the intention to report an error, is less likely to arise for AI checkout and self-checkout machines compared with human checkout scenarios. Furthermore, the study reveals that moral intention decreases as people perceive the machine to be less humanlike. This decline in morality is attributed to reduced guilt displayed toward these new technologies. Essentially, the non-human nature of the interaction evokes a decreased feeling of guilt, which ultimately leads to diminished moral behavior. These findings provide valuable insights into how technological advancements influence consumer behaviors and offer guidance for businesses and retailers in understanding moral intentions within various shopping environments.

These findings carry several important implications for psychologists. They underscore the nuanced ways in which technology shapes human morality and ethical decision-making. The research suggests that the perceived "humanness" of an entity, whether it's a human or an AI, significantly influences the elicitation of moral behavior. This has implications for understanding social cognition, anthropomorphism, and how individuals form relationships with non-human entities. Additionally, the role of guilt in moral behavior is further emphasized, providing insights into the emotional and cognitive processes that underlie ethical conduct. Finally, these findings can inform the development of interventions or strategies aimed at promoting ethical behavior in technology-mediated interactions, a consideration that is increasingly relevant in a world characterized by the growing prevalence of AI and automation.

Monday, April 21, 2025

Human Morality Is Based on an Early-Emerging Moral Core

Woo, B. M., Tan, E., & Hamlin, J. K. (2022).
Annual Review of Developmental Psychology, 4(1), 41–61.

Abstract

Scholars from across the social sciences, biological sciences, and humanities have long emphasized the role of human morality in supporting cooperation. How does morality arise in human development? One possibility is that morality is acquired through years of socialization and active learning. Alternatively, morality may instead be based on a “moral core”: primitive abilities that emerge in infancy to make sense of morally relevant behaviors. Here, we review evidence that infants and toddlers understand a variety of morally relevant behaviors and readily evaluate agents who engage in them. These abilities appear to be rooted in the goals and intentions driving agents’ morally relevant behaviors and are sensitive to group membership. This evidence is consistent with a moral core, which may support later social and moral development and ultimately be leveraged for human cooperation.

Here are some thoughts:

This article explores the origins of human morality, suggesting it's rooted in an early-emerging moral core rather than solely acquired through socialization and learning. The research reviewed indicates that even infants and toddlers demonstrate an understanding of morally relevant behaviors, evaluating agents based on their actions. This understanding is linked to the goals and intentions behind these behaviors and is influenced by group membership.

This study of morality is important for psychologists because morality is a fundamental aspect of human behavior and social interactions. Understanding how morality develops can provide insights into various psychological processes, such as social cognition, decision-making, and interpersonal relationships. The evidence supporting a moral core in infancy suggests that some aspects of morality may be innate, challenging traditional views that morality is solely a product of learning and socialization. This perspective can inform interventions aimed at promoting prosocial behavior and preventing antisocial behavior. Furthermore, understanding the early foundations of morality can help psychologists better understand the development of moral reasoning and judgment across the lifespan.

Saturday, April 19, 2025

Morality in social media: A scoping review

Neumann, D., & Rhodes, N. (2023).
New Media & Society, 26(2), 1096–1126.

Abstract

Social media platforms have been adopted rapidly into our current culture and affect nearly all areas of our everyday lives. Their prevalence has raised questions about the influence of new communication technologies on moral reasoning, judgments, and behaviors. The present scoping review identified 80 articles providing an overview of scholarly work conducted on morality in social media. Screening for research that explicitly addressed moral questions, the authors found that research in this area tends to be atheoretical, US-based, quantitative, cross-sectional survey research in business, psychology, and communication journals. Findings suggested a need for increased theoretical contributions. The authors identified new developments in research analysis, including text scraping and machine coding, which may contribute to theory development. In addition, diversity across disciplines allows for a broad picture in this research domain, but more interdisciplinarity might be needed to foster creative approaches to this study area.

Here are some thoughts:

This article is a scoping review that analyzes 80 articles focusing on morality in social media. The review aims to give researchers in different fields an overview of current research. The authors found that research in this area is generally atheoretical, conducted in the US, uses quantitative methods, and is published in business, psychology, and communication journals. The review also pointed out new methods of research analysis, like text scraping and machine coding, which could help in developing theories.

Social media has rapidly become a major part of our culture, impacting almost every aspect of daily life. It provides digital spaces where people can learn socially by watching and judging the moral behaviors of others. The easy access to information about moral and immoral actions through social media can significantly influence users' moral behaviors, judgments, reasoning, emotions, and self-views. It's vital for psychologists to understand how social media affects moral reasoning, judgments, and behaviors. This understanding is key to addressing any negative impacts of social media, especially on young people, and to creating strategies that encourage positive online behavior.

Tuesday, March 11, 2025

Moral Challenges for Psychologists Working in Psychology and Law

Allan, A. (2018).
Psychiatry, Psychology and Law, 25(3), 485–499.

Abstract

States have an obligation to protect themselves and their citizens from harm, and they use the coercive powers of law to investigate threats, enforce rules and arbitrate disputes, thereby impacting on people's well-being and legal rights and privileges. Psychologists as a collective have a responsibility to use their abilities, knowledge, skill and experience to enhance law's effectiveness, efficiency, and reliability in preventing harm, but their professional behaviour in this collaboration must be moral. They could, however, find their personal values to be inappropriate or there to be insufficient moral guides and could find it difficult to obtain definitive moral guidance from law. The profession's ethical principles do, however, provide well-articulated, generally accepted and profession-appropriate guidance, but practitioners might encounter moral issues that can only be solved by the profession as a whole or society.

Here are some thoughts:

While psychologists play a crucial role in assisting the law to protect society through assessments, risk evaluations, and expert opinions, their work often intersects with coercive practices that can impact individual rights and well-being.  Psychologists must navigate the tension between societal protection and respect for human dignity, especially when involved in involuntary detention, forensic interviews, and risk assessments.  They are guided by core ethical principles such as non-maleficence, justice, fidelity, and respect, but these principles can conflict, requiring careful ethical decision-making.  Challenges are particularly pronounced in areas like risk assessment, where tools may be flawed or culturally biased, and where psychologists might face pressure to align with legal expectations, potentially compromising their objectivity and professional integrity.

The article emphasizes the need for psychologists in legal settings to maintain public trust, uphold human rights principles, and utilize structured, evidence-based, and culturally sensitive methods in their practice.  Beyond individual ethical conduct, psychologists have a responsibility to advocate for systemic improvements, including better assessment tools for diverse populations and robust ethical guidelines. Ultimately, the article underscores that psychologists in law must continually engage in moral reflection, striving for a just and effective legal system while minimizing harm and ensuring their practice remains ethically sound and socially responsible, guided by both professional ethics and universal human rights frameworks.

Tuesday, March 4, 2025

The Multidimensionality of moral identity – toward a broad characterization of the moral self

Tissot, T. T., et al. (2025).
Ethics & Behavior, 1–23.

Abstract

The present study explored the multidimensionality of moral identity. In four studies (N = 1,159), we compiled a comprehensive list of moral traits, analyzed their factorial structure, and established relationships between the factorial dimensions and outcome variables. The resulting dimensions are Connectedness, Truthfulness, Care, and Righteousness. To examine relations to personality traits and pro- and antisocial inclinations we developed a new instrument, the Moral Identity Profile (MIP). Our results show distinctive relationships for the four dimensions, which challenge previous unidimensional conceptualizations of moral identity. We discuss implications, limitations, and how our conceptualization reaffirms the social aspect of morality.

The article is paywalled and there is no pdf available online. :(

Please contact the author for a copy.

Here are some thoughts:

This study challenges traditional views of moral identity, emphasizing its deeply social nature rather than framing it solely through moral dilemmas or more cognitive moral reasoning skills. Analyzing data from 1,159 participants, researchers identified four key dimensions of moral identity—Connectedness, Truthfulness, Care, and Righteousness—each reflecting how individuals integrate morality into their relationships and communities. This multidimensional perspective shifts away from abstract reasoning and instead highlights the ways in which moral identity is shaped through social interactions, emotional bonds, and shared values. To advance research in this area, the team developed the Moral Identity Profile (MIP), a tool designed to assess how these dimensions manifest in social contexts. By acknowledging the inherently relational aspects of morality, this work offers fresh insights into how moral identity influences interpersonal behavior, fosters social cohesion, and shapes ethical engagement within communities.

Wednesday, February 19, 2025

The Moral Psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A. (2023).
Annual Review of Psychology, 75(1), 653–675.

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Here are some thoughts:

This article delves into the evolving moral landscape shaped by artificial intelligence (AI). As AI technology progresses rapidly, it introduces a new category for moral consideration: intelligent machines.

Machines as moral agents are capable of making decisions that have significant moral implications. This includes scenarios where AI systems can inadvertently cause harm through errors, such as misdiagnosing a medical condition or misclassifying individuals in security contexts. The authors highlight that societal expectations for these machines are often unrealistically high; people tend to require AI systems to outperform human capabilities significantly while simultaneously overestimating human error rates. This disparity raises critical questions about how many mistakes are acceptable from machines in life-and-death situations and how these errors are distributed among different demographic groups.

In their role as moral patients, machines become subjects of human moral behavior. This perspective invites exploration into how humans interact with AI—whether cooperatively or competitively—and the potential biases that may arise in these interactions. For instance, there is a growing concern about algorithmic bias, where certain demographic groups may be unfairly treated by AI systems due to flawed programming or data sets.

Lastly, machines serve as moral proxies, acting as intermediaries in human interactions. This role allows individuals to delegate moral decision-making to machines or use them to mask unethical behavior. The implications of this are profound, as it raises ethical questions about accountability and the extent to which humans can offload their moral responsibilities onto AI.

Overall, the article underscores the urgent need for a deeper understanding of the psychological dimensions associated with AI's integration into society. As encounters between humans and intelligent machines become commonplace, addressing issues of trust, bias, and ethical alignment will be crucial in shaping a future where AI can be safely and effectively integrated into daily life.

Saturday, February 15, 2025

Does One Emotion Rule All Our Ethical Judgments?

Elizabeth Kolbert
The New Yorker
Originally published 13 Jan 25

Here is an excerpt:

Gray describes himself as a moral psychologist. In contrast to moral philosophers, who search for abstract principles of right and wrong, moral psychologists are interested in the empirical matter of people’s perceptions. Gray writes, “We put aside questions of how we should make moral judgments to examine how people do make moral judgments.”

For the past couple of decades, moral psychology has been dominated by what’s known as moral-foundations theory, or M.F.T. According to M.F.T., people reach ethical decisions on the basis of mental structures, or “modules,” that evolution has wired into our brains. These modules—there are at least five of them—involve feelings like empathy for the vulnerable, resentment of cheaters, respect for authority, regard for sanctity, and anger at betrayal. The reason people often arrive at different judgments is that their modules have developed differently, either for individual or for cultural reasons. Liberals have come to rely almost exclusively on their fairness and empathy modules, allowing the others to atrophy. Conservatives, by contrast, tend to keep all their modules up and running.

If you find this theory implausible, you’re not alone. It has been criticized on a wide range of grounds, including that it is unsupported by neuroscience. Gray, for his part, wants to sweep aside moral-foundations theory, plural, and replace it with moral-foundation theory, singular. Our ethical judgments, he suggests, are governed not by a complex of modules but by one overriding emotion. Untold generations of cowering have written fear into our genes, rendering us hypersensitive to threats of harm.

“If you want to know what someone sees as wrong, your best bet is to figure out what they see as harmful,” Gray writes at one point. At another point: “All people share a harm-based moral mind.” At still another: “Harm is the master key of morality.”

If people all have the same ethical equipment, why are ethical questions so divisive? Gray’s answer is that different people fear differently. “Moral disagreements can still arise even if we all share a harm-based moral mind, because liberals and conservatives disagree about who is especially vulnerable to victimization,” he writes.


Here are some thoughts:

Notably, I am a big fan of Kurt Gray and his research. Search this site for multiple articles.

Our moral psychology is deeply rooted in our evolutionary past, particularly in our sensitivity to harm, which was crucial for survival. This legacy continues to influence modern moral and political debates, often leading to polarized views based on differing perceptions of harm. Kurt Gray’s argument that harm is the "master key" of morality simplifies the complex nature of moral judgments, offering a unifying framework while potentially overlooking the nuanced ways in which cultural and individual differences shape moral reasoning. His critique of moral-foundations theory (M.F.T.) challenges the idea that moral judgments are based on multiple innate modules, suggesting instead that a singular focus on harm underpins our moral (and sometimes ethical) decisions. This perspective highlights how moral disagreements, such as those over abortion or immigration, arise from differing assumptions about who is vulnerable to harm.

The idea that moral judgments are often intuitive rather than rational further complicates our understanding of moral decision-making. Gray’s examples, such as incestuous siblings or a vegetarian eating human flesh, illustrate how people instinctively perceive harm even when none is evident. This challenges the notion that moral reasoning is based on logical deliberation, emphasizing instead the role of emotion and intuition. Gray’s emphasis on harm-based storytelling as a tool for bridging moral divides underscores the power of narrative in shaping perceptions. However, it also raises concerns about the potential for manipulation, as seen in the use of exaggerated or false narratives in political rhetoric, such as Donald Trump’s fabricated tales of harm.

Ultimately, the article raises important questions about whether our evolved moral psychology is adequate for addressing the complex challenges of the modern world, such as climate change, nuclear weapons, and artificial intelligence. The mismatch between our ancient instincts and contemporary problems may be a significant source of societal tension. Gray’s work invites reflection on how we can better understand and address the roots of moral conflict, while cautioning against the potential pitfalls of relying too heavily on intuitive judgments and emotional narratives. It suggests that while storytelling can foster empathy and bridge divides, it must be used responsibly to avoid exacerbating polarization and misinformation.

Sunday, February 9, 2025

Does Morality Do Us Any Good?

Nikhil Krishnan
The New Yorker
Originally published 23 Dec 24

Here is an excerpt:

As things became more unequal, we developed a paradoxical aversion to inequality. In time, patterns began to appear that are still with us. Kinship and hierarchy were replaced or augmented by coöperative relationships that individuals entered into voluntarily—covenants, promises, and the economically essential contracts. The people of Europe, at any rate, became what Joseph Henrich, the Harvard evolutionary biologist and anthropologist, influentially termed “WEIRD”: Western, educated, industrialized, rich, and democratic. WEIRD people tend to believe in moral rules that apply to every human being, and tend to downplay the moral significance of their social communities or personal relations. They are, moreover, much less inclined to conform to social norms that lack a moral valence, or to defer to such social judgments as shame and honor, but much more inclined to be bothered by their own guilty consciences.

That brings us to the past fifty years, decades that inherited the familiar structures of modernity: capitalism, liberal democracy, and the critics of these institutions, who often fault them for failing to deliver on the ideal of human equality. The civil-rights struggles of these decades have had an urgency and an excitement that, Sauer writes, make their supporters think victory will be both quick and lasting. When it is neither, disappointment produces the “identity politics” that is supposed to be the essence of the present cultural moment.

His final chapter, billed as an account of the past five years, connects disparate contemporary phenomena—vigilance about microaggressions and cultural appropriation, policies of no-platforming—as instances of the “punitive psychology” of our early hominin ancestors. Our new sensitivities, along with the twenty-first-century terms they’ve inspired (“mansplaining,” “gaslighting”), guide us as we begin to “scrutinize the symbolic markers of our group membership more and more closely and to penalize any non-compliance.” We may have new targets, Sauer says, but the psychology is an old one.


Here are some thoughts:

Understanding the origins of human morality is relevant for practicing psychologists, as it provides important insights into the psychological foundations of our moral behaviors and professional social interactions. These insights bear both on our work with patients and on our own ethical code. The article explores how our moral intuitions have evolved over millions of years, revealing that our current moral frameworks are not fixed absolutes but dynamic systems shaped by biological and social processes. Other scholars, such as Haidt, de Waal, and Tomasello, have conceptualized morality in similar ways.

Hanno Sauer's work illuminates a similar journey of moral development, tracing how early human survival strategies of cooperation and altruism gradually transformed into complex ethical systems. Psychologists can gain insights from this evolutionary perspective, understanding that our moral convictions are deeply rooted in our species' adaptive mechanisms rather than being purely rational constructs.

The article highlights several key insights:
  • Moral beliefs are significantly influenced by social context and evolutionary history
  • Our moral intuitions often precede rational justification
  • Cooperation and punishment played crucial roles in shaping human moral psychology
  • Universal moral values exist across different cultures, despite apparent differences

Particularly compelling is the exploration of how our "punitive psychology" emerged as a mechanism for social regulation, demonstrating how psychological processes have been instrumental in creating societal norms. For practicing psychologists, this understanding can provide a more nuanced approach to understanding patient behaviors, moral reasoning, and the complex interplay between individual experiences and broader evolutionary patterns. Notably, morality is always contextual, as I have pointed out in other summaries.

Finally, the article offers an optimistic perspective on moral progress, suggesting that our fundamental values are more aligned than we might initially perceive. This insight can be helpful for psychologists working with individuals from diverse backgrounds, emphasizing our shared psychological and evolutionary heritage.

Monday, February 3, 2025

Biology is not ethics: A response to Jerry Coyne's anti-trans essay

Aaron Rabinowitz
Friendly Atheist
Originally posted 2 Jan 25

The Freedom From Religion Foundation recently faced criticism for posting and then removing an editorial by Jerry Coyne entitled “Biology is Not Bigotry,” which he wrote in response to an FFRF article by Kat Grant entitled “What is a Woman?” In his piece, Coyne used specious reasoning and flawed research to argue that transgender individuals are more likely to be sexual predators than cisgender individuals and that they should therefore be barred from some jobs and female-only spaces.

As an ethicist I’m not here to argue biology. I don’t know what the right approach is to balancing phenotypic and genotypic accounts of sex. Luckily, despite Coyne’s framing of the controversy, Coyne is also not here to argue biology. He’s here to argue ethics, and his ethics regarding trans issues consist of bigoted claims leading to discriminatory conclusions.

By making ethics claims like “transgender women… should not serve as rape counselors and workers in battered women’s shelters,” while pretending to only be arguing about biological definitions, Coyne effectively conflates biology with ethics. By conflating biology and ethics, Coyne seeks to transfer perceptions of his expertise from one to the other, so that his claims in both domains are treated with deference, rather than challenged as ill-formed and harmful. Biology is not bigotry, but conflating biology with ethics is one of the most common ways to end up doing a bigotry. Historically, that’s how you slide from genetics to genocide.


Here are some thoughts:

In this essay, Rabinowitz critiques Coyne's conflation of biological arguments with ethical judgments concerning transgender individuals. Rabinowitz contends that Coyne's assertions—such as barring transgender women from roles like rape counselors or access to female-only spaces—are ethically unsound and stem from misinterpreted data. He emphasizes that ethical decisions should not be solely based on biological considerations and warns against using flawed research to justify discriminatory practices.

Rabinowitz highlights that Coyne's approach exemplifies how misapplying biological concepts to ethical discussions can lead to bigotry and discrimination. He argues that such reasoning has historically been used to marginalize groups by labeling them as morally deficient based on misinterpreted or selective data. Rabinowitz calls for a clear distinction between biological facts and ethical values, advocating for inclusive and non-discriminatory practices that respect human rights.

This critique underscores the importance of separating scientific observations from ethical prescriptions, cautioning against the misuse of biology to justify exclusionary or harmful policies toward marginalized communities.

Monday, January 6, 2025

Moral agency under oppression

Hirji, S. (2024).
Philosophy and Phenomenological Research.

Abstract

In Huckleberry Finn, a thirteen-year-old white boy in antebellum Missouri escapes from his abusive father and befriends a runaway slave named Jim. On a familiar reading of the novel, both Huck and Jim are, in their own ways, morally impressive, transcending the unjust circumstances in which they find themselves to treat each other as equals. Huck saves Jim's life from two men looking for runaway slaves, and later Jim risks his chance at freedom to save Huck's friend Tom. I want to complicate the idea that Huck and Jim are morally commendable for what they do. More generally, I want to explore how oppression undermines the moral agency of the oppressed, and to some degree, the oppressor. In §1 I take a careful look at Jim's choice, arguing that his enslavement compromises his moral agency. In §2 I show how Jim's oppression also shapes the extent to which Huck can be praiseworthy for his action. In §3, I consider the consequences for thinking about the moral agency of the oppressed, and in §4 I explore the limitations of the concept of moral worth for theorizing in cases of oppression.

Here are some thoughts: 

This article examines moral agency within the context of oppression, using Mark Twain's Huckleberry Finn as a case study. The author challenges the conventional interpretation of Huck and Jim's actions as morally commendable, arguing that Jim's enslavement fundamentally restricts his agency, regardless of his choices. This limitation, the author contends, also impacts the assessment of Huck's actions, suggesting his seemingly virtuous choices are inadvertently shaped by the system of oppression. The article further explores how established moral philosophical concepts inadequately address the complexities of moral agency under oppression, proposing a nuanced understanding that considers both capacity and the ability to fully express that capacity in action. Finally, the article broadens its scope to consider contemporary instances of oppression, demonstrating the persistent challenges to moral agency in various social contexts.

Tuesday, December 24, 2024

Education is Effective in Improving Students’ Ethical and Moral Outcomes: A Systematic Review and Meta-Analysis

Basarkod, G., Cahill, L., et al. (2024, November 20).

Abstract

Addressing society's greatest challenges, such as climate change, requires us to act as moral agents. If effective, interventions within schools and universities could cultivate ethical and moral attributes in millions of people. In this pre-registered systematic review and meta-analysis, we synthesized evidence from 66 randomized controlled trials of interventions at primary, secondary, and tertiary education levels (k=246; 9,978 students). Educational interventions effectively improved students’ moral outcomes of sensitivity, judgment, motivation, and character compared to control groups (g = 0.54; n = 45; k = 133). Interventions involving student discussions were more effective than those relying solely on unidirectional or passive transfer of information. This finding was confirmed through studies comparing two alternate ethics interventions (n = 38; k = 113). Overall, our review shows that educational interventions can improve students’ ethical and moral attributes and provides insights for prioritizing and planning future interventions to increase these attributes at scale.

Here are some thoughts:

This pre-print manuscript details a meta-analysis of 66 randomized controlled trials investigating the effectiveness of ethics interventions in educational settings. The study, conducted across various educational levels and disciplines, found that interventions incorporating student discussions significantly improved students' moral outcomes compared to control groups or interventions solely using didactic methods. The analysis also explored moderators such as education level, intervention style, and risk of bias, revealing nuanced insights into the effectiveness of different approaches to ethics education. Importantly, the researchers emphasized the need for further research to improve study design and broaden geographical representation.
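
For readers less familiar with the effect-size metric in the abstract (g = 0.54), here is a minimal sketch of how Hedges' g is computed for a single two-group comparison, using made-up numbers; the meta-analysis itself pools many such effects with appropriate weighting:

import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled            # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction factor
    return j * d

# Hypothetical study: intervention M=3.8, SD=0.9, n=50; control M=3.3, SD=1.0, n=50
print(round(hedges_g(3.8, 0.9, 50, 3.3, 1.0, 50), 2))  # ≈ 0.52

By this yardstick, the reported g = 0.54 is a moderate effect: roughly half a standard deviation of improvement in moral outcomes relative to controls.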

Wednesday, December 18, 2024

Artificial Intelligence, Existential Risk and Equity: The Need for Multigenerational Bioethics

Law, K. F., Syropoulos, S., & Earp, B. D. (2024).
Journal of Medical Ethics, in press.

“Future people count. There could be a lot of them. We can make their lives better.”
––William MacAskill, What We Owe The Future

“[Longtermism is] quite possibly the most dangerous secular belief system in the world today.”
––Émile P. Torres, Against Longtermism

Philosophers, psychologists, politicians, and even some tech billionaires have sounded the alarm about artificial intelligence (AI) and the dangers it may pose to the long-term future of humanity. Some believe it poses an existential risk (X-Risk) to our species, potentially causing our extinction or bringing about the collapse of human civilization as we know it.

The above quote from philosopher Will MacAskill captures the key tenets of “longtermism,” an ethical standpoint that places the onus on current generations to prevent AI-related—and other—X-Risks for the sake of people living in the future. Developing from an adjacent social movement commonly associated with utilitarian philosophy, “effective altruism,” longtermism has amassed a following of its own. Its supporters argue that preventing X-Risks is at least as morally significant as addressing current challenges like global poverty.

However, critics are concerned that such a distant-future focus will sideline efforts to tackle the many pressing moral issues facing humanity now. Indeed, according to “strong” longtermism, future needs arguably should take precedence over present ones. In essence, the claim is that there is greater expected utility to allocating available resources to prevent human extinction in the future than there is to focusing on present lives, since doing so stands to benefit the incalculably large number of people in later generations who will far outweigh existing populations. Taken to the extreme, this view suggests it would be morally permissible, or even required, to actively neglect, harm, or destroy large swathes of humanity as it exists today if this would benefit or enable the existence of a sufficiently large number of future—that is, hypothetical or potential—people, a conclusion that strikes many critics as dangerous and absurd.


Here are some thoughts: 

This article explores the ethical implications of artificial intelligence (AI), particularly focusing on the concept of longtermism. Longtermism argues for prioritizing the well-being of future generations, potentially even at the expense of present-day needs, to prevent existential risks (X-Risks) such as the collapse of human civilization. The paper examines the arguments for and against longtermism, discussing the potential harms of prioritizing future populations over current ones and highlighting the importance of addressing present-day social justice issues. The authors propose a multigenerational bioethics approach, advocating for a balanced perspective that considers both future risks and present needs while incorporating diverse ethical frameworks. Ultimately, the article argues that the future of AI development should be guided by an inclusive and equitable framework that prioritizes the welfare of both present and future generations.
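
The expected-value logic of "strong" longtermism described in the excerpt can be made concrete with a toy calculation; every number below is hypothetical, chosen only to show why the argument is so sensitive to assumptions about the size of future populations:

# Toy comparison in the spirit of strong longtermism (all numbers hypothetical)
future_people = 1e15        # assumed count of potential future lives
risk_reduction = 1e-9       # assumed cut in extinction probability bought by a donation
expected_future_lives = future_people * risk_reduction   # 1,000,000

present_lives_helped = 1e4  # assumed present lives the same donation could improve

print(expected_future_lives, present_lives_helped)
# Because future_people can be posited almost arbitrarily high, the future-facing
# term dominates -- which is precisely the feature critics find dangerous.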

Wednesday, December 4, 2024

AI-Powered 'Death Clock' Promises Better Prediction Of When You May Die

Decisions such as how much to save and how fast to withdraw assets are often based on broad-brush and unreliable averages for life expectancy.

Alex Tanzi
Bloomberg.com
Originally posted 1 DEC 24

For centuries, humans have used actuarial tables to figure out how long they're likely to live. Now artificial intelligence is taking up the task - and its answers may well be of interest to economists and money managers.

The recently released Death Clock, an AI-powered longevity app, has proved a hit with paying customers - downloaded some 125,000 times since its launch in July, according to market intelligence firm Sensor Tower.

The AI was trained on a dataset of more than 1,200 life expectancy studies with some 53 million participants. It uses information about diet, exercise, stress levels and sleep to predict a likely date of death. The results are a "pretty significant" improvement on the standard life-table expectations, says its developer, Brent Franson.

Despite its somewhat morbid tone - it displays a "fond farewell" death-day card featuring the Grim Reaper - Death Clock is catching on among people trying to live more healthily. It ranks high in the Health and Fitness category of apps. But the technology potentially has a wider range of uses.


Here are some thoughts:

The "Death Clock" app raises numerous moral, ethical, and psychological considerations that warrant careful evaluation. From a psychological perspective, the app has the potential to promote health awareness by encouraging users to adopt healthier lifestyles and providing personalized insights into their life expectancy. This tailored approach can motivate individuals to make meaningful life changes, such as prioritizing relationships or pursuing long-delayed goals. However, the constant awareness of a "countdown to death" could heighten anxiety, depression, or obsessive tendencies, particularly among users predisposed to these conditions. Additionally, over-reliance on the app's predictions might lead to misguided life decisions if users view the estimates as absolute truths, potentially undermining their overall well-being. Privacy concerns also emerge, as sensitive health data shared with the app could be misused or exploited.

From an ethical standpoint, the app empowers individuals by providing access to advanced predictive tools that were previously available only through professional services. It could aid in financial and medical planning, helping users better allocate resources for retirement or healthcare. Nonetheless, there are ethical concerns about the app's marketing, which may exploit individuals' fear of death for profit. The annual subscription fee of $40 could further exacerbate health and longevity inequities by excluding lower-income users. Moreover, the handling and storage of health-related data pose significant risks, as misuse could lead to discrimination, such as insurance companies denying coverage based on longevity predictions.

Morally, the app offers opportunities for reflection and informed decision-making, allowing users to better appreciate the finite nature of life and prioritize meaningful actions. However, it also risks dehumanizing the deeply personal and subjective experience of mortality by reducing it to a numerical estimate. This reductionist view may encourage fatalism, discouraging users from striving for improvement or maintaining hope. Inaccurate predictions could lead to unnecessary financial or emotional strain, further complicating the moral implications of such a tool.

P.S. The Death Clock indicates my date of death is October 3, 2050 (if we still have a viable planet and AI has not deemed me obsolete).