Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, April 29, 2025

Why the Mystery of Consciousness Is Deeper Than We Thought

Philip Goff
Scientific American
Originally published 3 July 2024

Here is an excerpt:

The hard problem comes after we’ve explained all of these functions of the brain, where we are still left with a puzzle: Why is the carrying out of these functions accompanied by experience? Why doesn’t all this mechanistic functioning go on “in the dark”? In my own work, I have argued that the hard problem is rooted in the way that the “father of modern science,” Galileo, designed physical science to exclude consciousness.

Chalmers made the quandary vivid by promoting the idea of a “philosophical zombie,” a complicated mechanism set up to behave exactly like a human being and with the same information processing in its brain, but with no consciousness. You stick a knife in such a zombie, and it screams and runs away. But it doesn’t actually feel pain. When a philosophical zombie crosses the street, it carefully checks that there is no traffic, but it doesn’t actually have any visual or auditory experience of the street.

Nobody thinks zombies are real, but they offer a vivid way of working out where you stand on the hard problem. Those on Team Chalmers believe that if all there was to a human being were the mechanistic processes of physical science, we’d all be zombies. Given that we’re not zombies, there must be something more going on in us to explain our consciousness. Solving the hard problem is then a matter of working out the extra ingredient, with one increasingly popular option being to posit very rudimentary forms of consciousness at the level of fundamental particles or fields.

For the opposing team, such as the late, great philosopher Daniel Dennett, this division between feeling and behavior makes no sense. The only task for a science of consciousness is explaining behavior, not just the external behavior of the organism but also that of its inner parts. This debate has rattled on for decades.


Here are some thoughts:

The author discusses the "hard problem of consciousness," a concept introduced by philosopher David Chalmers in the 1990s.  The hard problem refers to the difficulty of explaining why the brain's functions are accompanied by subjective experience, rather than occurring without any experience at all.    

The author uses the idea of "philosophical zombies" (beings that behave like humans but lack consciousness) and "pain-pleasure inverts" (beings that feel pleasure when we feel pain, and vice versa) to illustrate the complexity of this problem.    

This is important for psychologists because it highlights the deep mystery surrounding consciousness and suggests that explaining behavior is not enough; we also need to understand subjective experience. It also challenges some basic assumptions about why we behave the way we do and points to the perplexing "mystery of psychophysical harmony": why our behavior and consciousness align in a coherent way.

Monday, April 28, 2025

Eugenics is on the rise again: human geneticists must take a stand

Wojcik, G. L. (2025).
Nature, 641(8061), 37–38.

In 1924, motivated by the rising eugenics movement, the United States passed the Johnson–Reed Act, which limited immigration to stem “a stream of alien blood, with all its inherited misconceptions”. A century later, at a campaign event last October, now US President Donald Trump used similar eugenic language to justify his proposed immigration policies, stating that “we got a lot of bad genes in our country right now”.

If left unchallenged, a rising wave of white nationalism in many parts of the globe could threaten the progress that has been made in science — and broader society — towards a more equitable world.

As scientists and members of the public, we must push back against this threat — by modifying approaches to genetics education, advocating for science, establishing and leading diverse research teams and ensuring that studies embrace and build on the insights obtained about human variation.


Here are some thoughts:

The article raises significant moral and ethical concerns regarding the renewed emergence of eugenic ideologies. It highlights how certain political figures and movements are reviving rhetoric that promotes the idea of genetic superiority, posing a profound moral threat by devaluing human diversity and encouraging discrimination.

A major ethical concern discussed is the misuse of genetic research to support racist or nationalist agendas, which not only distorts the true intentions of scientific inquiry but also risks eroding public trust in science itself. The article emphasizes that scientists have an ethical duty to ensure their work is not co-opted for harmful purposes and calls on them to take a public stand against these misrepresentations. 

Furthermore, it underscores the importance of promoting diversity and inclusion within research, noting that race is a social construct rather than a strict biological reality. Ethically, inclusive research practices are necessary to ensure that scientific advances benefit all people, rather than reinforcing existing social inequalities. Overall, the article serves as a powerful call for the scientific community to uphold its ethical responsibilities by actively opposing the misuse of genetics, advocating for accurate public understanding, and fostering diversity in both research and society.

Sunday, April 27, 2025

Intelligent Choices Reshape Decision-Making and Productivity

Schrage, M., & Kiron, D. 
(2024, October 29).
MIT Sloan Management Review

Better choices enable better decisions.

Profitably thriving through market disruptions demands that executives recognize that better decisions aren’t enough — they need better choices. Choices are the raw material of decision-making; without diverse, detailed, and high-quality options, even the best decision-making processes underperform. Traditional dashboards and scorecards defined by legacy accounting and compliance imperatives reliably measure progress but can’t generate the insights or foresight needed to create superior choices. They weren’t designed for that.

Generative AI and predictive systems are. They can surface hidden options, highlight overlooked interdependencies, and suggest novel pathways to success. These intelligent systems and agents don’t just support better decisions — they inspire them. As greater speed to market and adaptability rule, AI-enhanced measurement systems increasingly enable executives to better anticipate, adapt to, and outmaneuver the competition. Our research offers compelling evidence that predictive and generative AI systems can be trained to provide better choices, not just better decisions.

Machine-designed choices can — and should — empower their human counterparts. As Anjali Bhagra, physician lead and chair of the Automation Hub at Mayo Clinic, explains, “Fundamentally, what we are doing at the core, whether it’s AI, automation, or other innovative technologies, is enabling our teams to solve problems and minimize the friction within health care delivery. Our initiatives are designed by people, for the people.”

Leaders, managers, and associates at all levels can use intelligent systems — rooted in sophisticated data analysis, synthesis, and pattern recognition — to cocreate intelligent choice architectures that prompt better options that in turn lead to better decisions that deliver better outcomes. Coined by Nobel Prize-winning economist Richard Thaler and legal scholar Cass Sunstein in their book, Nudge: Improving Decisions About Health, Wealth, and Happiness, the term choice architectures refers to the practice of influencing a choice by intentionally “organizing the context in which people make decisions.”


Here are some thoughts summarizing the article:

Artificial intelligence is fundamentally reshaping organizational decision-making and productivity by moving beyond simple automation to create "intelligent choice architectures." These AI-driven systems are capable of revealing previously unseen options, highlighting complex interdependencies, and suggesting novel pathways to achieve organizational goals. This results in improved decision-making through personalized environments, accurate outcome predictions, and effective complexity management, impacting both strategic and operational decisions. However, the ethical implications of AI are paramount, necessitating systems that are explainable, interpretable, and transparent. Ultimately, AI is redefining productivity by shifting the focus from mere outputs to meaningful outcomes, leading to significant changes in organizational design and the distribution of decision-making authority.

Saturday, April 26, 2025

Culture points the moral compass: Shared basis of culture and morality

Matsuo, A., & Brown, C. M. (2022).
Culture and Brain, 10(2), 113–139.

Abstract

The present work reviews moral judgment from the perspective of culture. Culture is a dynamic system of human beings interacting with their environment, and morality is both a product of this system and a means of maintaining it. When members of a culture engage in moral judgment, they communicate their “social morality” and gain a reputation as a productive member who contributes to the culture’s prosperity. People in different cultures emphasize different moral domains, which is often understood through the individualism-collectivism distinction that is widely utilized in cultural psychology. However, traditional morality research lacks the interactive perspective of culture, where people communicate with shared beliefs about what is good or bad. As a consequence, past work has had numerous limitations and even potential confounds created by methodologies that are grounded in the perspective of WEIRD (i.e., Western, Educated, Industrialized, Rich and Democratic) cultures. Great attention should be paid to the possibly misleading assumption that researchers and participants share the same understanding of the stimuli. We must address this bias in sampling and in the minds of researchers and better clarify the concept of culture in intercultural morality research. The theoretical and practical findings from research on culture can then contribute to a better understanding of the mechanisms of moral judgment.

The article is paywalled, so here is a more detailed summary:

This article discusses moral judgment from a cultural perspective. The authors argue that morality is a product of culture and helps to maintain it. They claim that people from different cultures emphasize different moral domains, which is often understood using the individualism-collectivism distinction. The authors also suggest that traditional morality research lacks an interactive perspective of culture, where people communicate shared beliefs about what is good or bad, and that this past research has had limitations and potential confounds due to methodologies that are grounded in WEIRD cultures.    

The authors discuss theories of moral judgment, including Lawrence Kohlberg’s theory of stages of moral development, the social intuitionist model, and moral pluralism. They claim that moral judgment is a complex process involving self-recognition, social cognition, and decision-making and that the brain is designed to process multiple moralities in different ways. They also explore the social function of morality, stating that behaving morally according to the standards of one’s group helps people be included in the group, and moral norms are used to identify desirable and undesirable group membership.    

In a significant part of the article, the authors discuss the concept of culture, defining it as a structured system of making sense of the environment, which shapes individuals in order to fit into their environment. They explain that the need to belong is a basic human motivation, and people form groups as a means of survival and reproduction. Norms applied to a particular group regulate group members’ behaviors, and culture emerges from these norms. The authors use the individualism-collectivism dimension, a common concept in cultural psychology, to explain how people from different cultures perceive and interpret the world in different ways. They claim that culture is a dynamic interaction between humans and their environment and that moral judgment achieves its social function because people assume that ingroup members share common representations of what is right or wrong. 

Friday, April 25, 2025

Digital mental health: challenges and next steps

Smith, K. A., et al. (2023).
BMJ Mental Health, 26(1), e300670.

Abstract

Digital innovations in mental health offer great potential, but present unique challenges. Using a consensus development panel approach, an expert, international, cross-disciplinary panel met to provide a framework to conceptualise digital mental health innovations, research into mechanisms and effectiveness and approaches for clinical implementation. Key questions and outputs from the group were agreed by consensus, and are presented and discussed in the text and supported by case examples in an accompanying appendix. A number of key themes emerged. (1) Digital approaches may work best across traditional diagnostic systems: we do not have effective ontologies of mental illness and transdiagnostic/symptom-based approaches may be more fruitful. (2) Approaches in clinical implementation of digital tools/interventions need to be creative and require organisational change: not only do clinicians and patients need training and education to be more confident and skilled in using digital technologies to support shared care decision-making, but traditional roles need to be extended, with clinicians working alongside digital navigators and non-clinicians who are delivering protocolised treatments. (3) Designing appropriate studies to measure the effectiveness of implementation is also key: including digital data raises unique ethical issues, and measurement of potential harms is only just beginning. (4) Accessibility and codesign are needed to ensure innovations are long lasting. (5) Standardised guidelines for reporting would ensure effective synthesis of the evidence to inform clinical implementation. COVID-19 and the transition to virtual consultations have shown us the potential for digital innovations to improve access and quality of care in mental health: now is the ideal time to act.

Here are some thoughts:

This article discusses the challenges and potential advancements in the field of digital mental health. It emphasizes the significant potential of digital innovations to transform mental healthcare while also acknowledging the unique challenges that come with their implementation. The authors used a consensus development panel approach to establish a framework that addresses the conceptualization, research, and clinical application of digital mental health innovations. This framework highlights several key themes, including the need for transdiagnostic approaches, creative clinical implementation strategies, appropriate effectiveness measurement, accessibility and codesign considerations, and standardized reporting guidelines. The article concludes by acknowledging the transformative potential of digital innovations in improving access and quality of mental healthcare, particularly in light of the lessons learned during the COVID-19 pandemic.

Thursday, April 24, 2025

Laws, Risk Management, and Ethical Principles When Working With Suicidal Patients

Knapp, S. (2024).
Professional Psychology:
Research and Practice, 55(1), 1–10.

Abstract

Working with a suicidal patient is a high-risk enterprise for the patient who might die from suicide, the patient’s family who might lose a loved one, and the psychologist who is likely to feel extreme grief or fear of legal liability after the suicide of a patient. To minimize the likelihood of such patient deaths, psychologists must ensure that they know and follow the relevant laws dealing with suicidal patients, rely on risk management strategies that anticipate and address problems in treatment early, and use overarching ethical principles to guide their clinical decisions. This article looks at the roles of laws, risk management strategies, and ethical principles; how they interact; and how a proper understanding of them can improve the quality of patient care while protecting psychologists from legal liability.

Impact Statement

This article describes how understanding the roles and interactions of laws, risk management principles, and ethics can help psychotherapists improve the quality of their services to suicidal patients.

Here are some thoughts:

This article discusses the importance of understanding the roles and interactions of laws, risk management principles, and ethics when working with suicidal patients.  It emphasizes how a proper understanding of these factors can improve the quality of patient care and protect psychologists from legal liability.    

The article is important for psychologists because it provides guidance on navigating the complexities of treating suicidal patients.  It offers insights into:   
  • Legal Considerations: Psychologists must be aware of and adhere to the laws governing psychological practice, including licensing laws, regulations of state and territorial boards of psychology, and other federal and state laws. 
  • Risk Management Strategies: The article highlights the importance of risk management strategies in anticipating problems, preventing misunderstandings, addressing issues early in treatment, and mitigating harm.  It also warns against false risk management strategies that prioritize self-protection over patient well-being, such as refusing to treat suicidal patients or relying on no-suicide contracts.
  • Ethical Principles: The article underscores the importance of ethical principles in guiding clinical decisions, justifying laws and risk management strategies, and resolving conflicts between ethical principles.  It discusses the need to balance beneficence and respect for patient autonomy in various situations, such as involuntary hospitalization, red flag laws, welfare checks, and involving third parties in psychotherapy.    
In summary, this article offers valuable guidance for psychologists working with suicidal patients, helping them to navigate the legal, ethical, and risk management challenges of this high-risk area of practice.  

Wednesday, April 23, 2025

Values in the wild: Discovering and analyzing values in real-world language model interactions

Huang, S., Durmus, E., et al. (n.d.).

Abstract

AI assistants can impart value judgments that shape people's decisions and worldviews, yet little is known empirically about what values these systems rely on in practice. To address this, we develop a bottom-up, privacy-preserving method to extract the values (normative considerations stated or demonstrated in model responses) that Claude 3 and 3.5 models exhibit in hundreds of thousands of real-world interactions. We empirically discover and taxonomize 3,307 AI values and study how they vary by context. We find that Claude expresses many practical and epistemic values, and typically supports prosocial human values while resisting values like "moral nihilism". While some values appear consistently across contexts (e.g. "transparency"), many are more specialized and context-dependent, reflecting the diversity of human interlocutors and their varied contexts. For example, "harm prevention" emerges when Claude resists users, "historical accuracy" when responding to queries about controversial events, "healthy boundaries" when asked for relationship advice, and "human agency" in technology ethics discussions. By providing the first large-scale empirical mapping of AI values in deployment, our work creates a foundation for more grounded evaluation and design of values in AI systems.


Here are some thoughts:

For psychologists, this research is highly relevant. First, it sheds light on how AI can shape human cognition, particularly in terms of how people interpret advice, support, or information framed through value-laden language. As individuals increasingly interact with AI systems in therapeutic, educational, or everyday contexts, psychologists must understand how these systems can influence moral reasoning, decision-making, and emotional well-being. Second, the study emphasizes the context-dependent nature of value expression in AI, which opens up opportunities for research into how humans respond to AI cues and how trust or rapport might be developed (or undermined) through these interactions. Third, this work highlights ethical concerns: ensuring that AI systems do not inadvertently promote harmful values is an area where psychologists—especially those involved in ethics, social behavior, or therapeutic practice—can offer critical guidance. Finally, the study’s methodological approach to extracting and classifying values may offer psychologists a model for analyzing human communication patterns, enriching both theoretical and applied psychological research.

In short, Anthropic’s research provides psychologists with an important lens on the emerging dynamics between human values and machine behavior. It highlights both the promise and responsibility of ensuring AI systems promote human dignity, safety, and psychological well-being.

Tuesday, April 22, 2025

Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI

Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022).
Journal of Business Ethics, 178(4), 1027–1041.

Abstract

Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment.

Here are some thoughts:

If you watched the TV series Westworld on HBO, then this research makes a great deal more sense.

This study investigates how individuals morally behave toward AI agents and self-service machines, specifically examining individuals' moral concerns and behaviors when interacting with technology versus humans in a retail setting. The research demonstrates that moral intention, such as the intention to report an error, is less likely to arise for AI checkout and self-checkout machines compared with human checkout scenarios. Furthermore, the study reveals that moral intention decreases as people perceive the machine to be less humanlike. This decline in morality is attributed to reduced guilt displayed toward these new technologies. Essentially, the non-human nature of the interaction evokes a decreased feeling of guilt, which ultimately leads to diminished moral behavior. These findings provide valuable insights into how technological advancements influence consumer behaviors and offer guidance for businesses and retailers in understanding moral intentions within various shopping environments.

These findings carry several important implications for psychologists. They underscore the nuanced ways in which technology shapes human morality and ethical decision-making. The research suggests that the perceived "humanness" of an entity, whether it's a human or an AI, significantly influences the elicitation of moral behavior. This has implications for understanding social cognition, anthropomorphism, and how individuals form relationships with non-human entities. Additionally, the role of guilt in moral behavior is further emphasized, providing insights into the emotional and cognitive processes that underlie ethical conduct. Finally, these findings can inform the development of interventions or strategies aimed at promoting ethical behavior in technology-mediated interactions, a consideration that is increasingly relevant in a world characterized by the growing prevalence of AI and automation.

Monday, April 21, 2025

Human Morality Is Based on an Early-Emerging Moral Core

Woo, B. M., Tan, E., & Hamlin, J. K. (2022).
Annual Review of Developmental Psychology, 
4(1), 41–61.

Abstract

Scholars from across the social sciences, biological sciences, and humanities have long emphasized the role of human morality in supporting cooperation. How does morality arise in human development? One possibility is that morality is acquired through years of socialization and active learning. Alternatively, morality may instead be based on a “moral core”: primitive abilities that emerge in infancy to make sense of morally relevant behaviors. Here, we review evidence that infants and toddlers understand a variety of morally relevant behaviors and readily evaluate agents who engage in them. These abilities appear to be rooted in the goals and intentions driving agents’ morally relevant behaviors and are sensitive to group membership. This evidence is consistent with a moral core, which may support later social and moral development and ultimately be leveraged for human cooperation.

Here are some thoughts:

This article explores the origins of human morality, suggesting it's rooted in an early-emerging moral core rather than solely acquired through socialization and learning. The research reviewed indicates that even infants and toddlers demonstrate an understanding of morally relevant behaviors, evaluating agents based on their actions. This understanding is linked to the goals and intentions behind these behaviors and is influenced by group membership.

This study of morality is important for psychologists because morality is a fundamental aspect of human behavior and social interactions. Understanding how morality develops can provide insights into various psychological processes, such as social cognition, decision-making, and interpersonal relationships. The evidence supporting a moral core in infancy suggests that some aspects of morality may be innate, challenging traditional views that morality is solely a product of learning and socialization. This perspective can inform interventions aimed at promoting prosocial behavior and preventing antisocial behavior. Furthermore, understanding the early foundations of morality can help psychologists better understand the development of moral reasoning and judgment across the lifespan.