Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Decision-making. Show all posts

Monday, September 8, 2025

Cognitive computational model reveals repetition bias in a sequential decision-making task

Legler, E., Rivera, D. C.,  et al. (2025).
Communications Psychology, 3(1).


Abstract

Humans tend to repeat action sequences that have led to reward. Recent computational models, based on a long-standing psychological theory, suggest that action selection can also be biased by how often an action or sequence of actions was repeated before, independent of rewards. However, empirical support for such a repetition bias effect in value-based decision-making remains limited. In this study, we provide evidence of a repetition bias for action sequences using a sequential decision-making task (N = 70). Through computational modeling of choices, we demonstrate both the learning and influence of a repetition bias on human value-based decisions. Using model comparison, we find that decisions are best explained by the combined influence of goal-directed reward seeking and a tendency to repeat action sequences. Additionally, we observe significant individual differences in the strength of this repetition bias. These findings lay the groundwork for further research on the interaction between goal-directed reward seeking and the repetition of action sequences in human decision making.

Here are some thoughts:

This research on "repetition bias in a sequential decision-making task" offers valuable insights for psychologists, impacting both their own professional conduct and their understanding of patient behaviors. The study highlights that human decision-making is not solely driven by the pursuit of rewards, but also by an unconscious tendency to repeat previous action sequences. This finding suggests that psychologists, like all individuals, may be influenced by these ingrained patterns in their own practices, potentially leading to a reliance on familiar methods even when alternative, more effective approaches might exist. An awareness of this bias can foster greater self-reflection, encouraging psychologists to critically evaluate their established routines and adapt their strategies to better serve patient needs.

Furthermore, this research provides a crucial framework for understanding repetitive behaviors in patients. By demonstrating the coexistence of repetition bias with goal-directed reward seeking, the study helps explain why individuals might persist in actions that are not directly rewarding or may even be detrimental, a phenomenon often observed in conditions like obsessive-compulsive disorder or addiction. This distinction between the drivers of behavior can aid psychologists in more accurate patient assessment, allowing them to discern whether a patient's repetitive actions stem from a strong, non-reward-driven bias or from deliberate, goal-oriented choices. The research also notes significant individual differences in the strength of this bias, implying the need for personalized treatment approaches. Moreover, the study's suggestion that frequent repetition contributes to habit formation by diminishing goal-directed control offers insights into how maladaptive habits develop and how interventions can be designed to disrupt these cycles or bolster conscious control.
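To make the kind of model described in the abstract more concrete, here is a minimal sketch of how a choice rule combining goal-directed values with a repetition (choice-kernel) bias is often written in this literature. The parameter names and numbers are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def choice_probabilities(q_values, repetition_counts, beta=3.0, phi=0.5):
    """Softmax over a utility that adds a goal-directed term (learned reward
    values, weighted by beta) and a repetition-bias term (how often each
    action sequence was chosen before, weighted by phi)."""
    utility = (beta * np.asarray(q_values, dtype=float)
               + phi * np.asarray(repetition_counts, dtype=float))
    utility -= utility.max()              # subtract max for numerical stability
    exp_utility = np.exp(utility)
    return exp_utility / exp_utility.sum()

# Two action sequences with identical learned value: the one repeated more
# often in the past is now chosen with higher probability.
print(choice_probabilities(q_values=[0.5, 0.5], repetition_counts=[4, 1]))
```

Fitting parameters like beta and phi separately for each participant is what makes the kind of individual-difference analysis the authors describe possible: a larger repetition weight corresponds to a stronger repetition bias.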

Wednesday, August 20, 2025

Doubling-Back Aversion: A Reluctance to Make Progress by Undoing It

Cho, K. Y., & Critcher, C. R. (2025).
Psychological Science, 36(5), 332-349.

Abstract

Four studies (N = 2,524 U.S.-based adults recruited from the University of California, Berkeley, or Amazon Mechanical Turk) provide support for doubling-back aversion, a reluctance to pursue more efficient means to a goal when they entail undoing progress already made. These effects emerged in diverse contexts, both as participants physically navigated a virtual-reality world and as they completed different performance tasks. Doubling back was decomposed into two components: the deletion of progress already made and the addition to the proportion of a task that was left to complete. Each contributed independently to doubling-back aversion. These effects were robustly explained by shifts in subjective construals of both one’s past and future efforts that would result from doubling back, not by changes in perceptions of the relative length of different routes to an end state. Participants’ aversion to feeling their past efforts were a waste encouraged them to pursue less efficient means. We end by discussing how doubling-back aversion is distinct from established phenomena (e.g., the sunk-cost fallacy).

Here are some thoughts:

This research is important to psychologists because it identifies a new bias—doubling-back aversion, the tendency to avoid more efficient strategies if they require undoing prior progress. Unlike the sunk cost fallacy, which involves continuing with a failing course of action to justify prior investments, doubling-back aversion leads people to reject better options simply because they involve retracing steps—even when the original path is not failing. It expands understanding of goal pursuit by showing that subjective interpretations of effort, progress, and perceived waste, not just past investment, drive decisions. These findings have important implications for behavior change, therapy, and education, and they challenge rational-choice models by revealing emotional barriers to optimal decisions.

Here is a clinical example:

A client has spent months working on developing assertiveness skills and boundary-setting to improve their interpersonal relationships. While these skills have helped somewhat, the client still experiences frequent emotional outbursts, difficulty calming down, and lingering shame after conflicts. The therapist recognizes that the core issue may be the client’s inability to regulate intense emotions in the moment and suggests shifting the focus to foundational emotion-regulation strategies.

The client hesitates and says:

“We already moved past that—I thought I was done with that kind of work. Going back feels like I'm not making progress.”

Doubling-Back Aversion in Action:
  • The client resists returning to earlier-stage work (emotion regulation) even though it’s crucial for addressing persistent symptoms.
  • They perceive it as undoing progress, not as a step forward.
  • This aversion delays therapeutic gains, even though the new focus is likely more effective.

Wednesday, August 6, 2025

Executives Who Used Gen AI Made Worse Predictions

Parra-Moyano, J.,  et al. (2025, July 1).
Harvard Business Review. 

Summary. 

In a recent experiment, nearly 300 executives and managers were shown recent stock prices for the chip-maker Nvidia and then asked to predict the stock’s price in a month’s time. Then, half the group was given the opportunity to ask questions of ChatGPT while the other half were allowed to consult with their peers about Nvidia’s stock. The executives who used ChatGPT became significantly more optimistic and confident, and produced worse forecasts than the group who discussed with their peers. This is likely because the authoritative voice of the AI—and the level of detail it gave in its answers—produced a strong sense of assurance, unchecked by the social regulation, emotional responsiveness, and useful skepticism that caused the peer-discussion group to become more conservative in their predictions. In order to harness the benefits of AI, executives need to understand the ways it can bias their own critical thinking.

Here are some thoughts:

The key finding was counterintuitive: while AI tools have shown benefits for routine tasks and communication, they actually hindered performance when executives relied on them for complex predictions and forecasting. The study suggests this occurred because the AI's authoritative tone and detailed responses created false confidence, leading to overoptimistic assessments that were less accurate than traditional peer consultation.

For psychologists, the study highlights how AI can amplify existing cognitive biases, particularly overconfidence bias. The authoritative presentation of AI responses appears to bypass critical thinking processes, making users more certain of predictions that are actually less accurate. This demonstrates the psychology of human-AI interaction and how perceived authority can override analytical judgment.

For psychologists working in organizational settings, this research provides important insights about how AI adoption affects executive decision-making and team dynamics. It suggests that the perceived benefits of AI assistance may sometimes mask decreased decision quality.

Tuesday, July 29, 2025

Moral Learning and Decision-Making Across the Lifespan

Lockwood, P. L., Van Den Bos, W., & Dreher, J. (2024).
Annual Review of Psychology.

Abstract

Moral learning and decision-making are crucial throughout our lives, from infancy to old age. Emerging evidence suggests that there are important differences in learning and decision-making in moral situations across the lifespan, and these are underpinned by co-occurring changes in the use of model-based values and theory of mind. Here, we review the decision neuroscience literature on moral choices and moral learning considering four key concepts. We show how in the earliest years, a sense of self/other distinction is foundational. Sensitivity to intention versus outcome is crucial for several moral concepts and is most similar in our earliest and oldest years. Across all ages, basic shifts in the influence of theory of mind and model-free and model-based learning support moral decision-making. Moving forward, a computational approach to key concepts of morality can help provide a mechanistic account and generate new hypotheses to test across the whole lifespan.
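As a side note for readers less familiar with the model-free versus model-based distinction the abstract invokes, the sketch below contrasts the two in deliberately simplified form; the action names and numbers are illustrative only and are not drawn from the review.

```python
# Model-free learning: cache a value per action and nudge it toward the
# reward actually received (habit-like, experience-driven).
def model_free_update(q, action, reward, alpha=0.1):
    q[action] += alpha * (reward - q[action])
    return q

# Model-based evaluation: compute an action's value prospectively from a
# learned model of its likely outcomes and how much each outcome is worth.
def model_based_value(action, transition_probs, outcome_values):
    return sum(p * outcome_values[outcome]
               for outcome, p in transition_probs[action].items())

q = model_free_update({"share": 0.0, "keep": 0.0}, "share", reward=1.0)
model = {"share": {"partner_happy": 0.9, "partner_upset": 0.1}}
values = {"partner_happy": 1.0, "partner_upset": -1.0}
print(q["share"], model_based_value("share", model, values))
```

The review's claim is that the relative weight placed on these two kinds of valuation, along with theory of mind, shifts across the lifespan and shapes moral decision-making.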

Here are some thoughts:

The article highlights that moral learning and decision-making evolve dynamically throughout the lifespan, with distinct patterns emerging at different developmental stages. From early childhood to old age, individuals shift from rule-based moral reasoning toward more complex evaluations that integrate intentions, outcomes, and social context.

Understanding these developmental trajectories is essential for psychologists, as it informs age-appropriate interventions and expectations regarding moral behavior. Neuroscientific findings reveal that key brain regions such as the ventromedial prefrontal cortex (vmPFC), temporoparietal junction (TPJ), and striatum play critical roles in processing empathy, fairness, guilt, and social norms. These insights help explain how neurological impairments or developmental changes can affect moral judgment, particularly useful in clinical and neuropsychological settings.

Social influence also plays a significant role, especially during adolescence, where peer pressure and reputational concerns strongly shape moral decisions. This has practical implications for therapists working with youth, including strategies to build resilience against antisocial influences and promote prosocial behaviors.

The research further explores how deficits in moral learning are linked to antisocial behaviors, psychopathy, and conduct disorders, offering valuable perspectives for forensic psychology and clinical intervention planning.

Lastly, the article emphasizes the importance of cultural sensitivity, noting that moral norms vary across societies and change over time. For practicing psychologists, this underscores the need to adopt culturally informed approaches when assessing and treating clients from diverse backgrounds.

Thursday, July 10, 2025

Neural correlates of the sense of agency in free and coerced moral decision-making among civilians and military personnel

Caspar, E. A., et al. (2025).
Cerebral Cortex, 35(3).

Abstract

The sense of agency, the feeling of being the author of one’s actions and outcomes, is critical for decision-making. While prior research has explored its neural correlates, most studies have focused on neutral tasks, overlooking moral decision-making. In addition, previous studies mainly used convenience samples, ignoring that some social environments may influence how authorship in moral decision-making is processed. This study investigated the neural correlates of sense of agency in civilians and military officer cadets, examining free and coerced choices in both agent and commander roles. Using a functional magnetic resonance imaging paradigm where participants could either freely choose or follow orders to inflict a mild shock on a victim, we assessed sense of agency through temporal binding—a temporal distortion between voluntary and less voluntary decisions. Our findings suggested that sense of agency is reduced when following orders compared to acting freely in both roles. Several brain regions correlated with temporal binding, notably the occipital lobe, superior/middle/inferior frontal gyrus, precuneus, and lateral occipital cortex. Importantly, no differences emerged between military and civilians at corrected thresholds, suggesting that daily environments have minimal influence on the neural basis of moral decision-making, enhancing the generalizability of the findings.


Here are some thoughts:

The study found that when individuals obeyed direct orders to perform a morally questionable act—such as delivering an electric shock—they experienced a significantly diminished sense of agency, or personal responsibility, for that action. This diminished agency was measured using the temporal binding effect, which was weaker under coercion compared to when participants freely chose their actions. Neuroimaging revealed that obedience was associated with reduced activation in brain regions involved in self-referential processing and moral reasoning, such as the frontal gyrus, occipital lobe, and precuneus. Interestingly, this effect was observed equally among civilian participants and military officer cadets, suggesting that professional training in hierarchical settings does not necessarily protect against the psychological distancing that comes with obeying authority.
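For readers unfamiliar with the measure, temporal binding is typically quantified from participants' judgments of the delay between their action and its outcome, with shorter judged intervals read as a stronger sense of agency. The sketch below shows one way such a comparison could be scored; the numbers are invented and this is not the authors' analysis pipeline.

```python
from statistics import mean

# Judged action-outcome intervals in milliseconds (hypothetical data).
free_choice_estimates = [380, 402, 395, 410]   # acting freely
coerced_estimates     = [455, 470, 462, 448]   # following orders

# A positive difference means intervals felt longer under coercion,
# i.e., weaker binding and a reduced sense of agency when obeying orders.
difference_ms = mean(coerced_estimates) - mean(free_choice_estimates)
print(f"Coerced minus free interval estimate: {difference_ms:.1f} ms")
```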

These findings are significant because they offer neuroscientific support for classic social psychology theories—like those stemming from Milgram’s obedience experiments—that suggest authority can reduce individual accountability. By identifying the neural mechanisms underlying diminished moral responsibility under orders, the study raises important ethical questions about how institutional hierarchies might inadvertently suppress personal agency. This has real-world implications for contexts such as the military, law enforcement, and corporate structures, where individuals may feel less morally accountable when acting under command. Understanding these dynamics can inform training, policy, and ethical guidelines to preserve a sense of responsibility even in structured power systems.

Monday, June 30, 2025

Neural Processes Linking Interoception to Moral Preferences Aligned with Group Consensus

Kim, J., & Kim, H. (2025).
Journal of Neuroscience, e1114242025.

Abstract

Aligning one’s decisions with the prevailing norms and expectations of those around us constitutes a fundamental facet of moral decision-making. When faced with conflicting moral values, one adaptive approach is to rely on intuitive moral preference. While there has been theoretical speculation about the connection between moral preference and an individual’s awareness of introspective interoceptive signals, it has not been empirically examined. This study examines the relationships between individuals’ preferences in moral dilemmas and interoception, measured with self-report, heartbeat detection task, and resting-state fMRI. Two independent experiments demonstrate that both male and female participants’ interoceptive awareness and accuracy are associated with their moral preferences aligned with group consensus. In addition, the fractional occupancies of the brain states involving the ventromedial prefrontal cortex and the precuneus during rest mediate the link between interoceptive awareness and the degree of moral preferences aligned to group consensus. These findings provide empirical evidence of the neural mechanism underlying the link between interoception and moral preferences aligned with group consensus.

Significance statement

We investigate the intricate link between interoceptive ability to perceive internal bodily signals and decision-making when faced with moral dilemmas. Our findings reveal a significant correlation between the accuracy and awareness of interoceptive signals and the degree of moral preferences aligned with group consensus. Additionally, brain states involving the ventromedial prefrontal cortex and precuneus during rest mediate the link between interoceptive awareness and moral preferences aligned with group consensus. These findings provide empirical evidence that internal bodily signals play a critical role in shaping our moral intuitions according to others’ expectations across various social contexts.

Here are some thoughts:

A recent study highlighted that our moral decisions may be influenced by our body's internal signals, particularly our heartbeat. Researchers found that individuals who could accurately perceive their own heartbeats tended to make moral choices aligning with the majority, regardless of whether those choices were utilitarian or deontological. This implies that bodily awareness might unconsciously guide us toward socially accepted norms. Brain scans supported this, showing increased activity in areas associated with evaluation and judgment, like the medial prefrontal cortex, in those more attuned to their internal signals. While the study's participants were exclusively Korean college students, limiting generalizability, the findings open up intriguing possibilities about the interplay between bodily awareness and moral decision-making.

Monday, June 16, 2025

The impact of AI errors in a human-in-the-loop process

Agudo, U., Liberal, K. G., et al. (2024).
Cognitive Research: Principles and Implications, 9(1).

Abstract

Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time in which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/ Experiment 2 was preregistered.

Here are some thoughts:


This study explores the impact of AI errors in human-in-the-loop processes, where humans and AI systems collaborate in decision-making.  The research specifically investigates how the timing of AI support influences human judgment and decision accuracy.  The findings indicate that human judgment is negatively affected by incorrect algorithmic support, particularly when provided before the human's own judgment, leading to decreased accuracy.  This research highlights the complexities of human-computer interaction in automated decision-making contexts and emphasizes the need for a deeper understanding of how AI support systems can be effectively integrated to minimize errors and biases.    

This is important for psychologists because it sheds light on the cognitive biases and decision-making processes involved when humans interact with AI systems, which is an increasingly relevant area of study in the field.  Understanding these interactions can help psychologists develop interventions and strategies to mitigate negative impacts, such as automation bias, and improve the design of human-computer interfaces to optimize decision-making accuracy and reduce errors in various sectors, including public service, healthcare, and justice. 

Sunday, May 18, 2025

Moral judgement and decision-making: theoretical predictions and null results

Hertz, U., Jia, F., & Francis, K. B. (2023).
Scientific Reports, 13(1).

Abstract

The study of moral judgement and decision making examines the way predictions made by moral and ethical theories fare in real world settings. Such investigations are carried out using a variety of approaches and methods, such as experiments, modeling, and observational and field studies, in a variety of populations. The current Collection on moral judgments and decision making includes works that represent this variety, while focusing on some common themes, including group morality and the role of affect in moral judgment. The Collection also includes a significant number of studies that made theoretically driven predictions and failed to find support for them. We highlight the importance of such null-results papers, especially in fields that are traditionally governed by theoretical frameworks.

Here are some thoughts:

The article explores how predictions from moral theories—particularly deontological and utilitarian ethics—hold up in empirical studies. Drawing from a range of experiments involving moral dilemmas, economic games, and cross-cultural analyses, the authors highlight the increasing importance of null results—findings where expected theoretical effects were not observed.

These outcomes challenge assumptions such as the idea that deontologists are inherently more trusted than utilitarians or that moral responsibility diffuses more in group settings. The studies also show how individual traits (e.g., depression, emotional awareness) and cultural or ideological contexts influence moral decisions.

For practicing psychologists, this research underscores the importance of moving beyond theoretical assumptions toward a more evidence-based, context-sensitive understanding of moral reasoning. It emphasizes the relevance of emotional processes in moral evaluation, the impact of group dynamics, and the necessity of accounting for cultural and psychological diversity in decision-making. Additionally, the article advocates for valuing null results as critical to theory refinement and scientific integrity in the study of moral behavior.

Friday, May 2, 2025

Emotional and Cognitive “Route” in Decision-Making Process: The Relationship between Executive Functions, Psychophysiological Correlates, Decisional Styles, and Personality

Crivelli, D., Acconito, C., & Balconi, M. (2024).
Brain Sciences, 14(7), 734.

Abstract

Studies on decision-making have classically focused exclusively on its cognitive component. Recent research has shown that a further essential component of decisional processes is the emotional one. Indeed, the emotional route in decision-making plays a crucial role, especially in situations characterized by ambiguity, uncertainty, and risk. Despite that, individual differences concerning such components and their associations with individual traits, decisional styles, and psychophysiological profiles are still understudied. This pilot study aimed at investigating the relationship between individual propensity toward using an emotional or cognitive information-processing route in decision-making, EEG and autonomic correlates of the decisional performance as collected via wearable non-invasive devices, and individual personality and decisional traits. Participants completed a novel task based on realistic decisional scenarios while their physiological activity (EEG and autonomic indices) was monitored. Self-report questionnaires were used to collect data on personality traits, individual differences, and decisional styles. Data analyses highlighted two main findings. Firstly, different personality traits and decisional styles showed significant and specific correlations, with an individual propensity toward either emotional or cognitive information processing for decision-making. Secondly, task-related EEG and autonomic measures presented a specific and distinct correlation pattern with different decisional styles, maximization traits, and personality traits, suggesting different latent profiles.

Here are some thoughts:

This research provides valuable insights for psychologists by offering a more comprehensive understanding of decision-making, moving beyond a purely cognitive perspective to incorporate the crucial role of emotions and individual differences. It also highlights the importance of individual differences, emphasizing how personality traits and decisional styles influence how people process information and make choices. Furthermore, the research integrates psychological and physiological perspectives by combining self-report data with EEG and autonomic measures, providing a more holistic view of the decision-making process. Ultimately, the findings can inform interventions and applications in various fields, such as clinical psychology, organizational psychology, and consumer behavior, to better understand and support decision-making in different contexts.

Monday, April 14, 2025

Moral Judgment and Decision Making

Bartels, D., et al. (n.d.).
In The Wiley Blackwell Handbook of
Judgment and Decision Making.

Abstract

This chapter focuses on moral flexibility, a term the authors use to capture the idea that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices (they really want to get it right, they really want to do the right thing), but context strongly influences which moral beliefs are brought to bear in a given situation. It reviews contemporary research on moral judgment and decision making, and suggests ways that the major themes in the literature relate to the notion of moral flexibility. The chapter explains what makes moral judgment and decision making unique. It also reviews three major research themes and their explananda: morally prohibited value trade-offs in decision making; rules, reason, and emotion in trade-offs; and judgments of moral blame and punishment. The chapter also comments on methodological desiderata and presents understudied areas of inquiry.

Here are some thoughts:

This chapter explores the psychology of moral judgment and decision-making. The authors argue that people are motivated to adhere to moral beliefs, but context strongly influences which beliefs are applied in a given situation, resulting in moral flexibility.  The chapter reviews three major research themes: moral value tradeoffs, the role of rules, reason, and emotion in moral tradeoffs, and judgments of moral blame and punishment.  The authors discuss normative ethical theories, including consequentialism (utilitarianism), deontology, and virtue ethics.  They also examine the influence of protected values and sacred values on moral decision-making, highlighting the conflict between rule-based and consequentialist decision strategies.  Furthermore, the chapter investigates the interplay of emotion, reason, automaticity, and cognitive control in moral judgment, discussing dual-process models, moral grammar, and the reconciliation of rules and emotions.  The authors explore factors influencing moral blame and punishment, including the role of intentions, outcomes, and character evaluations.  The chapter concludes by emphasizing the complexity of moral decision-making and the importance of considering contextual influences. 

Saturday, February 15, 2025

Does One Emotion Rule All Our Ethical Judgments?

Elizabeth Kolbert
The New Yorker
Originally published 13 Jan 25

Here is an excerpt:

Gray describes himself as a moral psychologist. In contrast to moral philosophers, who search for abstract principles of right and wrong, moral psychologists are interested in the empirical matter of people’s perceptions. Gray writes, “We put aside questions of how we should make moral judgments to examine how people do make moral judgments.”

For the past couple of decades, moral psychology has been dominated by what’s known as moral-foundations theory, or M.F.T. According to M.F.T., people reach ethical decisions on the basis of mental structures, or “modules,” that evolution has wired into our brains. These modules—there are at least five of them—involve feelings like empathy for the vulnerable, resentment of cheaters, respect for authority, regard for sanctity, and anger at betrayal. The reason people often arrive at different judgments is that their modules have developed differently, either for individual or for cultural reasons. Liberals have come to rely almost exclusively on their fairness and empathy modules, allowing the others to atrophy. Conservatives, by contrast, tend to keep all their modules up and running.

If you find this theory implausible, you’re not alone. It has been criticized on a wide range of grounds, including that it is unsupported by neuroscience. Gray, for his part, wants to sweep aside moral-foundations theory, plural, and replace it with moral-foundation theory, singular. Our ethical judgments, he suggests, are governed not by a complex of modules but by one overriding emotion. Untold generations of cowering have written fear into our genes, rendering us hypersensitive to threats of harm.

“If you want to know what someone sees as wrong, your best bet is to figure out what they see as harmful,” Gray writes at one point. At another point: “All people share a harm-based moral mind.” At still another: “Harm is the master key of morality.”

If people all have the same ethical equipment, why are ethical questions so divisive? Gray’s answer is that different people fear differently. “Moral disagreements can still arise even if we all share a harm-based moral mind, because liberals and conservatives disagree about who is especially vulnerable to victimization,” he writes.


Here are some thoughts:

Notably, I am a big fan of Kurt Gray and his research. Search this site for multiple articles.

Our moral psychology is deeply rooted in our evolutionary past, particularly in our sensitivity to harm, which was crucial for survival. This legacy continues to influence modern moral and political debates, often leading to polarized views based on differing perceptions of harm. Kurt Gray’s argument that harm is the "master key" of morality simplifies the complex nature of moral judgments, offering a unifying framework while potentially overlooking the nuanced ways in which cultural and individual differences shape moral reasoning. His critique of moral-foundations theory (M.F.T.) challenges the idea that moral judgments are based on multiple innate modules, suggesting instead that a singular focus on harm underpins our moral (and sometimes ethical) decisions. This perspective highlights how moral disagreements, such as those over abortion or immigration, arise from differing assumptions about who is vulnerable to harm.

The idea that moral judgments are often intuitive rather than rational further complicates our understanding of moral decision-making. Gray’s examples, such as incestuous siblings or a vegetarian eating human flesh, illustrate how people instinctively perceive harm even when none is evident. This challenges the notion that moral reasoning is based on logical deliberation, emphasizing instead the role of emotion and intuition. Gray’s emphasis on harm-based storytelling as a tool for bridging moral divides underscores the power of narrative in shaping perceptions. However, it also raises concerns about the potential for manipulation, as seen in the use of exaggerated or false narratives in political rhetoric, such as Donald Trump’s fabricated tales of harm.

Ultimately, the article raises important questions about whether our evolved moral psychology is adequate for addressing the complex challenges of the modern world, such as climate change, nuclear weapons, and artificial intelligence. The mismatch between our ancient instincts and contemporary problems may be a significant source of societal tension. Gray’s work invites reflection on how we can better understand and address the roots of moral conflict, while cautioning against the potential pitfalls of relying too heavily on intuitive judgments and emotional narratives. It suggests that while storytelling can foster empathy and bridge divides, it must be used responsibly to avoid exacerbating polarization and misinformation.

Sunday, January 5, 2025

Incompetence & Losing Capacity: Answers to 8 FAQs

Leslie Kernisan
(2024, November 30). Better Health While Aging.

Perhaps your elderly father insists he has no difficulties driving, even though he’s gotten into some fender benders and you find yourself a bit uncomfortable when you ride in the car with him.

Or you’ve worried about your aging aunt giving an alarming amount of money to people who call her on the phone.

Or maybe it’s your older spouse, who has started refusing to take his medication, claiming that it’s poisoned because the neighbor is out to get him.

These situations are certainly concerning, and they often prompt families to ask me if they should be worried about an older adult becoming “incompetent.”

In response, I usually answer that we need to do at least two things:

  • We should assess whether the person has “capacity” to make the decision in question.
  • If there are signs concerning for memory or thinking problems, we should evaluate to determine what might be causing them.
If you’ve been concerned about an older person’s mental wellbeing or ability to make decisions, understanding what clinicians — and lawyers — mean by capacity is hugely important.


The website addresses concerns related to the decision-making capacity of older adults, particularly in light of cognitive impairments such as dementia. It emphasizes the importance of understanding "capacity," which refers to an individual's ability to make informed decisions about specific matters. Dr. Leslie Kernisan outlines that capacity is not a binary state; instead, it is decision-specific and can fluctuate based on health conditions. For example, an older adult may retain the capacity to make simple decisions but struggle with more complex ones, especially if they are experiencing health issues or cognitive decline.

Dr. Kernisan distinguishes between incapacity and incompetence, noting that capacity is typically assessed in clinical settings by healthcare professionals, while competence is a legal determination made by courts. The document explains that various types of decisions—such as medical consent, financial matters, and driving—require different capacities, and the legal standards for these capacities can vary by state.

The article also highlights the impact of Alzheimer's disease and other dementias on decision-making abilities. In early stages, individuals may still have the capacity for many decisions, but as the disease progresses, their ability to make even simple choices may diminish. Therefore, it is crucial for families to seek clinical assessments of capacity when there are concerns about an older adult's decision-making abilities.

Moreover, the document advises that legal determinations of incapacity may be necessary before overriding an older person's decisions, especially in matters concerning safety or financial well-being. Families are encouraged to consult with legal professionals when navigating these issues to ensure they are acting within legal and ethical boundaries.

Overall, the article serves as a practical guide for caregivers and family members dealing with the complexities of aging and cognitive decline, stressing the need for respectful communication and proactive measures to protect the autonomy and safety of older adults.

Tuesday, August 27, 2024

People Reject Free Money and Cheap Deals Because They Infer Phantom Costs

Vonasch, A. J., Mofradidoost, R., & Gray, K. (2024).
Personality and Social Psychology Bulletin, 0(0).

Abstract

If money is good, then shouldn’t more money always be better? Perhaps not. Traditional economic theories suggest that money is an ever-increasing incentivizer. If someone will accept a job for US$20/hr, they should be more likely to accept the same job for US$30/hr and especially for US$250/hr. However, 10 preregistered, high-powered studies (N = 4,205, in the United States and Iran) reveal how increasing incentives can backfire. Overly generous offers lead people to infer “phantom costs” that make them less likely to accept high job wages, cheap plane fares, and free money. We present a theory for understanding when and why people imagine these hidden drawbacks and show how phantom costs drive judgments, impact behavior, and intersect with individual differences. Phantom costs change how we should think about “economic rationality.” Economic exchanges are not merely about money, but instead are social interactions between people trying to perceive (and deceive) each others’ minds.

Significance Statement

This article introduces the concept of “phantom costs,” which explain why incentives can backfire. This effect is important for any situation in which incentives are offered, ranging from jobs to governmental policies. The standard model in economics assumes people respond rationally to incentives—for example, people are more likely to accept offers for more money than less money. However, this research reveals that people spontaneously appreciate the social context of financial offers—especially overly generous offers. Phantom costs reveal an important change from the standard model beyond standard heuristics and biases. Phantom costs also provide a framework to make sense of other seemingly paradoxical effects of money and provide an important bridge between behavioral economics and social cognition.

Why is this important for clinical psychologists and mental health professionals?
  1. It provides insights into irrational decision-making and cognitive biases that can contribute to mental health issues like anxiety, obsessive-compulsive disorder, and depression. Understanding these biases can help clinicians better conceptualize and treat certain disorders.
  2. The findings relate to how people evaluate costs/benefits, which is relevant for motivational issues in psychotherapy. Clinicians could apply these concepts to enhance patient motivation by framing treatment recommendations in ways that reduce perceived "phantom costs."
  3. It highlights the role of emotions and intuitive judgments in decision-making, which is important for clinical psychologists to understand when working with patients to modify maladaptive thought patterns and behaviors.
  4. The study touches on consumer psychology, which has applications in areas like health behavior change. Clinicians could use similar framing effects to promote healthier choices by patients.
  5. More broadly, it demonstrates how psychological research can yield counterintuitive insights that challenge assumptions, which is valuable for clinical practice rooted in empirical evidence rather than intuition alone.

Wednesday, August 21, 2024

An investigation of big life decisions

Camilleri, A. R. (2023).
Judgment and Decision Making, 18, e32.

Abstract

What are life’s biggest decisions? In Study 1, I devised a taxonomy comprising 9 decision categories, 58 decision types, and 10 core elements of big decisions. In Study 2, I revealed people’s perceptions of and expectations for the average person’s big life decisions. In the flagship Study 3, 658 participants described their 10 biggest past and future decisions and rated each decision on a variety of decision elements. This research reveals the characteristics of a big life decision, which are the most common, most important, and most positively evaluated big life decisions, when such decisions happen, and which factors predict ‘good’ decisions. This research contributes to knowledge that could help people improve their lives through better decision-making and living with fewer regrets.

Introduction

Life is filled with decisions. Most decisions are small and quickly forgotten but others have long-lasting consequences. The commercial success of popular books dedicated to helping readers improve their decision-making (e.g., Duke, 2020) highlights our desire to choose better. However, not every decision can be carefully researched and reflected on, nor should it. Such cognitive effort should be reserved for the most important decisions; those that are most likely to be consequential to one’s life—the ‘big’ decisions.

In the still-popular board game The Game of Life—originally created in 1860 by the renowned Milton Bradley—players simulate life by making a series of big decisions about college, jobs, marriage, children, and retirement. Does the game accurately reflect reality? What are life’s biggest decisions? What makes them so big? When do they occur? How can we make a good one? Which of them lead to happiness? Can we accurately predict any of these answers? Given that big decisions are often directly responsible for our health, wealth, and happiness, it is surprising how little attention has been given to understand how people tend to approach them (see Galotti, 2007 for an exception). The assumption that small consequential or big hypothetical decisions studied in the lab are good models for real big life decisions seems dubious given that no lab study can replicate all of the relevant factors nor the substantial consequences (Galotti, 2005).

(cut)

Conclusions

How your life turns out depends critically on a handful of decisions. Given their vital importance for health, wealth, and happiness, surprisingly little attention has been directed to understanding the broad nature of such big life decisions. A better understanding will allow us to be better prepared to make them. This research has taken us some steps forward on that path.

Here are some thoughts:

Psychologists need to understand big life decisions because these decisions critically influence a patient's health, wealth, and overall happiness. This research highlights the significance of these decisions by categorizing and analyzing them, revealing the common characteristics and factors that predict positive outcomes. By understanding how people approach these significant choices, psychologists can better assist patients in improving their decision-making processes. This understanding can help people lead better lives with fewer regrets by making informed decisions that enhance their well-being and satisfaction. Importantly, psychologists are not to make decisions for patients, as that tramples on the patient's autonomy and may be a form of intrusive advocacy, which may harm the patient.

Tuesday, August 20, 2024

What would qualify an artificial intelligence for moral standing?

Ladak, A.
AI Ethics 4, 213–228 (2024).

Abstract

What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing. After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot be sentient or conscious should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.

Here are some thoughts:

This article explores the criteria for artificial intelligence (AI) to be considered morally significant. While sentience is often seen as a requirement, the author argues that some non-sentient AIs might qualify too. The paper examines different viewpoints and proposes that AIs with complex goals or consciousness, even without sentience, could be morally relevant. This perspective suggests the issue might be broader and more pressing than previously thought. It also highlights the need for AI policies to consider a wider range of AI and for those skeptical of AI sentience to still acknowledge the moral questions it raises. Further research is needed due to the remaining uncertainties.

Monday, August 5, 2024

The true self and decision-making capacity.

Toomey, J., Lewis, J., Hannikainen, I., & Earp, B. D.
(2024). The American Journal of Bioethics, in press.

Jennifer Hawkins (2024) offers two cases that challenge traditional accounts of decision-making capacity, according to which respect for a medical decision turns on an individual’s cognitive capacities at the time the decision is made (Hawkins 2024; Appelbaum and Grisso 1988). In each of her described cases (involving anorexia nervosa and grief, respectively), a patient makes a decision that—although instrumentally rational at the time—does not reflect the patient’s longer-term values due to being in a particular psychological state. Importantly, this state does not impair the patient’s cognition, but rather predisposes them to make a decision that conflicts with their own broader values, beliefs, or desires.

Under traditional understandings of decision-making capacity, the patient’s decision in either case must be followed by healthcare providers, insofar as it was made while in possession of the requisite cognitive abilities. But this, Hawkins suggests, is the wrong outcome. Although core cognitive capacities are necessary for decision-making capacity, they are not on her view sufficient. From her perspective, patients who clear the threshold of cognitive capacity are not entitled to have their decisions followed when there is good evidence they are making a serious prudential mistake while known to have a condition that makes people more likely than typical to make such mistakes.


Here are a few thoughts:

This article highlights a crucial topic regarding decision-making.

Traditionally, a patient's cognitive abilities have been the main focus for evaluating their decision-making capacity. This article challenges that notion by introducing the concept of a patient's "true self". It argues that even if a patient has the cognitive ability to make a decision, it shouldn't be automatically respected if it doesn't reflect their long-term values or identity. This emphasizes the importance of understanding a patient's underlying values and goals when making decisions about their care. The article aligns with the idea that people often judge medical decisions based on a patient's "true self," highlighting the complexity of determining a patient's true wishes.

Finally, the article confronts us with ethical dilemmas where respecting a patient's cognitive decision might conflict with their well-being. This underlines the need for a nuanced approach that considers both cognitive capacity and a patient's identity. In essence, this article encourages mental health professionals to move beyond following just cognitive protocols and strive for a more comprehensive understanding of their patients.

Friday, August 2, 2024

Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque

Peters, U.
AI Ethics 3, 963–974 (2023).

Abstract

Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, this frequently prompts those giving the reasons to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.

The article is linked above.

Here are some thoughts:

This article delves into the ethics of transparency in algorithmic decision-making (ADM) versus human decision-making (HDM). While both can be opaque, the author argues HDM offers more trustworthiness due to "mindshaping." This theory suggests that explaining decisions, even if the thought process is unclear, influences future human behavior to align with the explanation. This self-regulation is absent in AI, potentially making opaque ADM less trustworthy. The text emphasizes that explanations serve a purpose beyond just understanding the process. It raises concerns about "deceptive AI" with misleading explanations and warns against underestimating the transparency of HDM due to its inherent ability to explain. Key ethical considerations include the need for further research on mindshaping's impact on bias and the limitations of explanations in both ADM and HDM.  Ultimately, the passage highlights the importance of developing explainable AI that goes beyond mere justification, while also emphasizing fairness, accountability, and responsible use of explanations in building trustworthy AI systems.

Friday, June 7, 2024

Large Language Models as Moral Experts? GPT-4o Outperforms Expert Ethicist in Providing Moral Guidance

Dillion, D., Mondal, D., Tandon, N.,
& Gray, K. (2024, May 29).

Abstract

AI has demonstrated expertise across various fields, but its potential as a moral expert remains unclear. Recent work suggests that Large Language Models (LLMs) can reflect moral judgments with high accuracy. But as LLMs are increasingly used in complex decision-making roles, true moral expertise requires not just aligned judgments but also clear and trustworthy moral reasoning. Here, we advance work on the Moral Turing Test and find that advice from GPT-4o is rated as more moral, trustworthy, thoughtful, and correct than that of the popular The New York Times advice column, The Ethicist. GPT models outperformed both a representative sample of Americans and a renowned ethicist in providing moral explanations and advice, suggesting that LLMs have, in some respects, achieved a level of moral expertise. The present work highlights the importance of carefully programming ethical guidelines in LLMs, considering their potential to sway users' moral reasoning. More promisingly, it suggests that LLMs could complement human expertise in moral guidance and decision-making.


Here are my thoughts:

This research on GPT-4o's moral reasoning is fascinating, but caution is warranted. While exceeding human performance in explanations and perceived trustworthiness is impressive, true moral expertise goes beyond these initial results.

Here's why:

First, there are nuances to all moral dilemmas. Real-world dilemmas often lack clear-cut answers. Can GPT-4o navigate the gray areas and complexities of human experience?

Next, everyone brings a rich set of experiences, values, perspectives, and biases to moral questions. What ethical framework guides GPT-4o's decisions? Transparency in its programming is crucial.

Finally, the consequences of AI-driven moral advice can be far-reaching. Careful evaluation of potential biases and unintended outcomes is essential.  There is no objective algorithm.  There is no objective morality.  All moral decisions, no matter how well-reasoned, have pluses and minuses.  Therefore, AI can be used as a starting point for decision-making and planning.

Wednesday, May 15, 2024

When should a computer decide? Judicial decision-making in the age of automation, algorithms and generative artificial intelligence

J. Morison and T. McInerney
In S Turenne and M Moussa (eds)
Research Handbook on Judging and the
Judiciary, Edward Elgar Routledge forthcoming 2024.

Abstract

This contribution explores what the activity of judging actually involves and whether it might be replaced by algorithmic technologies, including Large Language Models such as ChatGPT. This involves investigating how algorithmic judging systems operate and might develop, as well as exploring the current limits on using AI in coming to judgment. While it may be accepted that some routine decision can be safely made by machines, others clearly cannot and the focus here is on exploring where and why a decision requires human involvement. This involves considering a range of features centrally involved in judging that may not be capable of being adequately captured by machines. Both the role of judges and wider considerations about the nature and purpose of the legal system are reviewed to support the conclusion that while technology may assist judges, it cannot fully replace them.

Introduction

There is a growing realisation that we may have given away too much to new technologies in general, and to new digital technologies based on algorithms and artificial intelligence (AI) in particular, not to mention the large corporations who largely control these systems. Certainly, as in many other areas, the latest iterations of the tech revolution in the form of ChatGPT and other large language models (LLMs) are disrupting approaches within law and legal practice, even producing legal judgements. This contribution considers a fundamental question about when it is acceptable to use AI in what might be thought of as the essentially human activity of judging disputes. It also explores what ‘acceptable’ means in this context, and tries to establish if there is a bright line where the undoubted value of AI, and the various advantages this may bring, come at too high a cost in terms of what may be lost when the human element is downgraded or eliminated. Much of this involves investigating how algorithmic judging systems operate and might develop, as well as exploring the current limits on using AI in coming to judgment. There are of course some technical arguments here, but the main focus is on what ‘judgment’ in a legal context actually involves, and what it might not be possible to reproduce satisfactorily in a machine-led approach. It is in answering this question that this contribution addresses the themes of this research handbook by attempting to excavate the nature and character of judicial decision-making and exploring the future for trustworthy and accountable judging in an algorithmically driven future.

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.

The link to the article is the hyperlink above.

Here is my summary:

This article delves into the challenges associated with assessing the moral decision-making capabilities of artificial intelligence systems. It explores the complexities of imbuing AI with ethical reasoning and the difficulties in evaluating their moral cognition. The article discusses the need for robust frameworks and methodologies to effectively gauge the ethical behavior of AI, highlighting the intricate nature of integrating morality into machine learning algorithms. Overall, it emphasizes the critical importance of developing reliable methods to evaluate the moral reasoning of artificial agents in order to ensure their responsible and ethical deployment in various domains.