Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Learning.

Tuesday, July 29, 2025

Moral Learning and Decision-Making Across the Lifespan

Lockwood, P. L., van den Bos, W., & Dreher, J. (2024).
Annual Review of Psychology.

Abstract

Moral learning and decision-making are crucial throughout our lives, from infancy to old age. Emerging evidence suggests that there are important differences in learning and decision-making in moral situations across the lifespan, and these are underpinned by co-occurring changes in the use of model-based values and theory of mind. Here, we review the decision neuroscience literature on moral choices and moral learning considering four key concepts. We show how in the earliest years, a sense of self/other distinction is foundational. Sensitivity to intention versus outcome is crucial for several moral concepts and is most similar in our earliest and oldest years. Across all ages, basic shifts in the influence of theory of mind and model-free and model-based learning support moral decision-making. Moving forward, a computational approach to key concepts of morality can help provide a mechanistic account and generate new hypotheses to test across the whole lifespan.

Here are some thoughts:

The article highlights that moral learning and decision-making evolve dynamically throughout the lifespan, with distinct patterns emerging at different developmental stages. From early childhood to old age, individuals shift from rule-based moral reasoning toward more complex evaluations that integrate intentions, outcomes, and social context.
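The review's distinction between model-free and model-based learning is worth making concrete. Below is a minimal toy sketch (my own illustration, not the authors' code; all names and parameters are assumptions): a model-free learner caches action values directly from reward prediction errors, while a model-based learner learns a model of the task and plans over it.

```python
import numpy as np

# Toy contrast between model-free and model-based valuation.
# Illustrative only; structure and parameters are assumptions.

n_actions, n_outcomes = 2, 2
alpha = 0.3  # learning rate

# Model-free: cache action values directly from reward prediction errors.
q_mf = np.zeros(n_actions)

def model_free_update(action, reward):
    q_mf[action] += alpha * (reward - q_mf[action])

# Model-based: learn P(outcome | action) and outcome values, then plan.
transition = np.full((n_actions, n_outcomes), 1.0 / n_outcomes)
outcome_value = np.zeros(n_outcomes)

def model_based_update(action, outcome, reward):
    target = np.eye(n_outcomes)[outcome]
    transition[action] += alpha * (target - transition[action])
    outcome_value[outcome] += alpha * (reward - outcome_value[outcome])

def model_based_values():
    # Expected value of each action under the learned model.
    return transition @ outcome_value
```

The developmental point in the review is that the balance between these two systems, and their interaction with theory of mind, shifts across the lifespan.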

Understanding these developmental trajectories is essential for psychologists, as it informs age-appropriate interventions and expectations regarding moral behavior. Neuroscientific findings reveal that key brain regions such as the ventromedial prefrontal cortex (vmPFC), temporoparietal junction (TPJ), and striatum play critical roles in processing empathy, fairness, guilt, and social norms. These insights help explain how neurological impairments or developmental changes can affect moral judgment, particularly useful in clinical and neuropsychological settings.

Social influence also plays a significant role, especially during adolescence, when peer pressure and reputational concerns strongly shape moral decisions. This has practical implications for therapists working with youth, including strategies to build resilience against antisocial influences and promote prosocial behaviors.

The research further explores how deficits in moral learning are linked to antisocial behaviors, psychopathy, and conduct disorders, offering valuable perspectives for forensic psychology and clinical intervention planning.

Lastly, the article emphasizes the importance of cultural sensitivity, noting that moral norms vary across societies and change over time. For practicing psychologists, this underscores the need to adopt culturally informed approaches when assessing and treating clients from diverse backgrounds.

Wednesday, May 7, 2025

The Future of Decisions From Experience: Connecting Real-World Decision Problems to Cognitive Processes

Olschewski, et al. (2024).
Perspectives on Psychological Science, 19(1), 82–102.

Abstract

In many important real-world decision domains, such as finance, the environment, and health, behavior is strongly influenced by experience. Renewed interest in studying this influence led to important advancements in the understanding of these decisions from experience (DfE) in the last 20 years. Building on this literature, we suggest ways the standard experimental design should be extended to better approach important real-world DfE. These extensions include, for example, introducing more complex choice situations, delaying feedback, and including social interactions. When acting upon experiences in these richer and more complicated environments, extensive cognitive processes go into making a decision. Therefore, we argue for integrating cognitive processes more explicitly into experimental research in DfE. These cognitive processes include attention to and perception of numeric and nonnumeric experiences, the influence of episodic and semantic memory, and the mental models involved in learning processes. Understanding these basic cognitive processes can advance the modeling, understanding and prediction of DfE in the laboratory and in the real world. We highlight the potential of experimental research in DfE for theory integration across the behavioral, decision, and cognitive sciences. Furthermore, this research could lead to new methodology that better informs decision-making and policy interventions.

Here are some thoughts:

The article examines how people make choices based on experience rather than descriptions. Traditional research on decisions from experience (DfE) has relied on simplified experiments with immediate feedback, failing to capture real-world complexities such as delayed consequences, multiple options, and social influences.

The authors highlight the need to expand DfE research to better reflect real-world decision-making in finance, health, and environmental policy. Investment decisions are often shaped by personal experience rather than statistical summaries, climate-related choices involve long-term uncertainty, and healthcare decisions rely on non-numeric experiences such as pain or side effects.

To address these gaps, the article emphasizes incorporating cognitive processes—attention, perception, memory, and learning—into DfE studies. The authors propose more complex experimental designs, including delayed feedback and social interactions, to better understand how people process experience-based information.
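To make the experience-description gap concrete, here is a hypothetical sketch of the classic sampling paradigm (my illustration, not the paper's materials): with only a few samples, many learners never encounter a rare outcome at all, so experienced values diverge from described ones.

```python
import random

# Hypothetical decisions-from-experience sampler.
# Option A: sure 3.0; Option B: 32.0 with probability 0.1, else 0.0.
# Described expected values: A = 3.0, B = 3.2.

def sample_option_b():
    return 32.0 if random.random() < 0.1 else 0.0

def experienced_estimate(n_samples=5):
    # Small samples frequently contain no rare win at all,
    # so the experienced value of B often looks like 0.
    return sum(sample_option_b() for _ in range(n_samples)) / n_samples

random.seed(1)
estimates = [experienced_estimate() for _ in range(1000)]
share_zero = sum(e == 0.0 for e in estimates) / len(estimates)
print(f"Mean experienced value of B: {sum(estimates)/len(estimates):.2f}")
print(f"Share of learners who never saw the rare win: {share_zero:.0%}")
```

With five samples, roughly 59% of learners never see the rare payoff, which is one mechanism behind the underweighting of rare events in decisions from experience.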

Ultimately, they advocate for an interdisciplinary approach linking DfE research with cognitive science, neuroscience, and AI. By doing so, researchers can improve decision-making models and inform policies that help people make better choices in uncertain environments.

Saturday, March 15, 2025

Understanding and supporting thinking and learning with generative artificial intelligence.

Agnoli, S., & Rapp, D. N. (2024).
Journal of Applied Research in Memory
and Cognition, 13(4), 495–499.

Abstract

Generative artificial intelligence (AI) is ubiquitous, appearing as large language model chatbots that people can query directly and collaborate with to produce output, and as authors of products that people are presented with through a variety of information outlets including, but not limited to, social media. AI has considerable promise for helping people develop expertise and for supporting expert performance, with a host of hedges and caveats to be applied in any related advocations. We propose three sets of considerations and concerns that may prove informative for theoretical discussions and applied research on generative AI as a collaborative thought partner. Each of these considerations is informed and inspired by well-worn psychological research on knowledge acquisition. They are (a) a need to understand human perceptions of and responses to AI, (b) the utility of appraising and supporting people’s control of AI, and (c) the importance of careful attention to the quality of AI output.

Here are some thoughts:

Generative AI, especially large language models (LLMs), can aid human thinking and learning by supporting knowledge acquisition and expert performance. However, realizing this potential requires attention to psychological factors.

Firstly, how humans perceive and respond to AI is crucial. User trust, beliefs, and prior AI experiences influence AI’s effectiveness as a collaborative thought partner. Future research should explore how these perceptions affect AI adoption and learning outcomes.

Secondly, control in human-AI interactions is vital for successful partnerships. Clear roles, expertise, and decision-making authority ensure productive collaboration, and empowering users to customize interactions enhances learning and builds trust.

Thirdly, AI output quality plays a central role in learning. Addressing inaccuracies, biases, and “hallucinations” ensures reliability, and research is needed to improve and evaluate AI-generated content, especially for education.

Lastly, the rapid evolution of AI requires users to be adaptable and equipped with strong metacognitive skills. Metacognition—thinking about one’s thinking—is crucial for navigating AI interactions. Understanding how users process AI information and designing educational interventions to increase AI awareness are essential steps. By fostering critical thinking and self-regulation, users can better integrate AI-generated insights into their learning processes.

Generative AI holds promise for enhancing human thinking and learning, but its success depends on addressing human factors, ensuring output quality, and promoting adaptability. Integrating psychological insights and emphasizing metacognitive awareness can harness AI responsibly and effectively. This approach fosters a collaborative relationship between humans and AI, where technology augments intelligence without undermining autonomy, advancing knowledge acquisition and learning meaningfully.

Monday, September 23, 2024

Generative AI Can Harm Learning

Bastani, H., et al. (July 15, 2024).
Available at SSRN.

Abstract

Generative artificial intelligence (AI) is poised to revolutionize how humans work, and has already demonstrated promise in significantly improving human productivity. However, a key remaining question is how generative AI affects learning, namely, how humans acquire new skills as they perform tasks. This kind of skill learning is critical to long-term productivity gains, especially in domains where generative AI is fallible and human experts must check its outputs. We study the impact of generative AI, specifically OpenAI's GPT-4, on human learning in the context of math classes at a high school. In a field experiment involving nearly a thousand students, we have deployed and evaluated two GPT based tutors, one that mimics a standard ChatGPT interface (called GPT Base) and one with prompts designed to safeguard learning (called GPT Tutor). These tutors comprise about 15% of the curriculum in each of three grades. Consistent with prior work, our results show that access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). However, we additionally find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes. These negative learning effects are largely mitigated by the safeguards included in GPT Tutor. Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.


Here are some thoughts:

The deployment of GPT-based tutors in educational settings presents a cautionary tale. While generative AI tools like ChatGPT can make tasks significantly easier, they also risk eroding our ability to learn essential skills. This phenomenon is not new: earlier technologies like typing and calculators also reduced the need for certain skills. However, ChatGPT's broader intellectual reach and its propensity for confidently incorrect responses make it unique.

Unlike earlier technologies, ChatGPT's unreliability poses a distinct challenge. Students may struggle to detect its errors, or be unwilling to invest the effort required to verify the accuracy of its responses, and this can undermine their learning of critical skills. The authors suggest that more work is needed to ensure generative AI enhances education rather than diminishes it.

The findings underscore the importance of critical thinking and media literacy in the age of AI. Educators must be aware of the potential risks and benefits of AI-powered tools and design them to augment human capabilities rather than replace them. Accountability and transparency in AI development and deployment are crucial to mitigating these risks. By acknowledging these challenges, we can harness the potential of AI to enhance education and promote meaningful learning.

Tuesday, May 28, 2024

How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence?

Bower, M., Torrington, J., Lai, J. W. M., et al.
Education and Information Technologies (2024).

Abstract

There has been widespread media commentary about the potential impact of generative Artificial Intelligence (AI) such as ChatGPT on the Education field, but little examination at scale of how educators believe teaching and assessment should change as a result of generative AI. This mixed methods study examines the views of educators (n = 318) from a diverse range of teaching levels, experience levels, discipline areas, and regions about the impact of AI on teaching and assessment, the ways that they believe teaching and assessment should change, and the key motivations for changing their practices. The majority of teachers felt that generative AI would have a major or profound impact on teaching and assessment, though a sizeable minority felt it would have a little or no impact. Teaching level, experience, discipline area, region, and gender all significantly influenced perceived impact of generative AI on teaching and assessment. Higher levels of awareness of generative AI predicted higher perceived impact, pointing to the possibility of an ‘ignorance effect’. Thematic analysis revealed the specific curriculum, pedagogy, and assessment changes that teachers feel are needed as a result of generative AI, which centre around learning with AI, higher-order thinking, ethical values, a focus on learning processes and face-to-face relational learning. Teachers were most motivated to change their teaching and assessment practices to increase the performance expectancy of their students and themselves. We conclude by discussing the implications of these findings in a world with increasingly prevalent AI.


Here is a quick summary:

A recent study surveyed teachers about the impact of generative AI, like ChatGPT, on education. The majority of teachers believed AI would significantly change how they teach and assess students. Interestingly, teachers with more awareness of AI predicted a greater impact, suggesting a potential "ignorance effect."

The study also explored how teachers think education should adapt. The focus shifted towards teaching students how to learn with AI, emphasizing critical thinking, ethics, and the learning process itself. This would involve less emphasis on rote memorization and regurgitation of information that AI can readily generate. Teachers also highlighted the importance of maintaining strong face-to-face relationships with students in this evolving educational landscape.

Sunday, February 18, 2024

Amazon AGI Team Say Their AI is Showing "Emergent Properties"

Noor Al-Sibai
Futurism.com
Originally posted 15 Feb 24

A new Amazon AI model, according to the researchers who built it, is exhibiting language abilities that it wasn't trained on.

In a not-yet-peer-reviewed academic paper, the team at Amazon AGI — which stands for "artificial general intelligence," or human-level AI — say their large language model (LLM) is exhibiting "state-of-the-art naturalness" at conversational text. Per the examples shared in the paper, the model does seem sophisticated.

As the paper indicates, the model was able to come up with all sorts of sentences that, according to criteria crafted with the help of an "expert linguist," showed it was making the types of language leaps that are natural in human language learners but have been difficult to obtain in AI.

Named "Big Adaptive Streamable TTS with Emergent abilities" or BASE TTS, the initial model was trained on 100,000 hours of "public domain speech data," 90 percent in English, to teach it how Americans talk. To test out how large models would need to be to show "emergent abilities," or abilities they were not trained on, the Amazon AGI team trained two smaller models, one on 1,000 hours of speech data and another on 10,000, to see which of the three — if any — exhibited the type of language naturalness they were looking for.


My overall conclusion from the paper linked in the article:

BASE TTS (Text To Speech) represents a significant leap forward in TTS technology, offering superior naturalness, efficiency, and potential for real-world applications like voicing LLM outputs. While limitations exist, the research paves the way for future advancements in multilingual, data-efficient, and context-aware TTS models.

Saturday, August 19, 2023

Reverse-engineering the self

Paul, L., Ullman, T. D., De Freitas, J., & Tenenbaum, J.
(2023, July 8). PsyArXiv
https://doi.org/10.31234/osf.io/vzwrn

Abstract

To think for yourself, you need to be able to solve new and unexpected problems. This requires you to identify the space of possible environments you could be in, locate yourself in the relevant one, and frame the new problem as it exists relative to your location in this new environment. Combining thought experiments with a series of self-orientation games, we explore the way that intelligent human agents perform this computational feat by “centering” themselves: orienting themselves perceptually and cognitively in an environment, while simultaneously holding a representation of themselves as an agent in that environment. When faced with an unexpected problem, human agents can shift their perceptual and cognitive center from one location in a space to another, or “re-center”, in order to reframe a problem, giving them a distinctive type of cognitive flexibility. We define the computational ability to center (and re-center) as “having a self,” and propose that implementing this type of computational ability in machines could be an important step towards building a truly intelligent artificial agent that could “think for itself”. We then develop a conceptually robust, empirically viable, engineering-friendly implementation of our proposal, drawing on well established frameworks in cognition, philosophy, and computer science for thinking, planning, and agency.


According to the authors, the computational structure of the self is a key component of human intelligence. They propose a framework for reverse-engineering the self, drawing on work in cognition, philosophy, and computer science.

The authors argue that the self is a computational agent that is able to learn and think for itself. This agent has a number of key abilities, including:
  • The ability to represent the world and its own actions.
  • The ability to plan and make decisions.
  • The ability to learn from experience.
  • The ability to have a sense of self.
The authors argue that these abilities can be modeled as a partially observable Markov decision process (POMDP), a mathematical model of sequential decision-making in which the decision-maker lacks complete information about the environment (a minimal belief-update sketch follows the list below). They propose a number of methods for reverse-engineering the self, including:
  • Using data from brain imaging studies to identify the neural correlates of self-related processes.
  • Using computational models of human decision-making to test hypotheses about the computational structure of the self.
  • Using philosophical analysis to clarify the nature of self-related concepts.
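To unpack the POMDP idea mentioned above, here is a minimal belief-update sketch (my illustration; the paper's actual models are far richer): the agent never observes the true state, so it maintains a probability distribution over states and revises it after each action and observation.

```python
import numpy as np

# Minimal POMDP belief update (illustrative toy, two hidden states).
# T[a][s, s'] : P(next state s' | state s, action a)
# O[a][s', o] : P(observation o | next state s', action a)

T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3],
                  [0.2, 0.8]])}

def belief_update(belief, action, observation):
    # Predict: push the belief through the transition model.
    predicted = belief @ T[action]
    # Correct: reweight by the likelihood of the observation.
    updated = predicted * O[action][:, observation]
    return updated / updated.sum()

b = np.array([0.5, 0.5])  # start maximally uncertain
b = belief_update(b, action=0, observation=1)
print(b)  # belief shifts toward the state that best explains observation 1
```

"Re-centering," in the authors' sense, would amount to swapping in a different environment model and re-locating the agent's representation of itself within it.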
The authors argue that reverse-engineering the self is a promising approach to understanding human intelligence and developing artificial intelligence systems that are capable of thinking for themselves.

Tuesday, August 8, 2023

Predictors and consequences of intellectual humility

Porter, T., Elnakouri, A., Meyers, E. A., et al.
Nature Reviews Psychology, 1, 524–536 (2022).

Abstract

In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value.

Importance of intellectual humility

The willingness to recognize the limits of one’s knowledge and fallibility can confer societal and individual benefits, if expressed in the right moment and to the proper extent. This insight echoes the philosophical roots of intellectual humility as a virtue. State and trait intellectual humility have been associated with a range of cognitive, social and personality variables (Table 2). At the societal level, intellectual humility can promote societal cohesion by reducing group polarization and encouraging harmonious intergroup relationships. At the individual level, intellectual humility can have important consequences for wellbeing, decision-making and academic learning.

Notably, empirical research has provided little evidence regarding the generalizability of the benefits or drawbacks of intellectual humility beyond the unique contexts of WEIRD (Western, educated, industrialized, rich and democratic) societies. With this caveat, below is an initial set of findings concerning the implications of possessing high levels of intellectual humility. Unless otherwise specified, the evidence below concerns trait-level intellectual humility. After reviewing these benefits, we consider attempts to improve an individual’s intellectual humility and confer associated benefits.

(cut)

Individual benefits

Intellectual humility might also have direct consequences for individuals’ wellbeing. People who reason about social conflicts in an intellectually humbler manner and consider others’ perspectives (components of wise reasoning) are more likely to report higher levels of life satisfaction and less negative affect compared to people who do not. Leaders who are higher in intellectual humility are also higher in emotional intelligence and receive higher satisfaction ratings from their followers, which suggests that intellectual humility could benefit professional life. Nonetheless, intellectual humility is not associated with personal wellbeing in all contexts: religious leaders who see their religious beliefs as fallible have lower wellbeing relative to leaders who are less intellectually humble in their beliefs.

Intellectual humility might also help people to make well-informed decisions. Intellectually humbler people are better able to differentiate between strong and weak arguments, even if those arguments go against their initial beliefs. Intellectual humility might also protect against memory distortions. Intellectually humbler people are less likely to claim falsely that they have seen certain statements before. Likewise, intellectually humbler people are more likely to scrutinize misinformation and are more likely to intend to receive the COVID-19 vaccine.

Lastly, intellectual humility is positively associated with knowledge acquisition, learning and educational achievement. Intellectually humbler people are more motivated to learn and more knowledgeable about general facts. Likewise, intellectually humbler high school and university students expend greater effort when learning difficult material, are more receptive to assignment feedback and earn higher grades.

Despite evidence of individual benefits associated with intellectual humility, much of this work is correlational. Thus, associations could be the product of confounding factors such as agreeableness, intelligence or general virtuousness. Longitudinal or experimental studies are needed to address the question of whether and under what circumstances intellectual humility promotes individual benefits. Notably, philosophical theorizing about the situation-specific virtuousness of the construct suggests that high levels of intellectual humility are unlikely to benefit all people in all situations.


What is intellectual humility? Intellectual humility is the ability to recognize the limits of one's knowledge and to be open to new information and perspectives.

Predictors of intellectual humility: There are a number of factors that can predict intellectual humility, including:
  • Personality traits: People who are high in openness to experience and agreeableness are more likely to be intellectually humble.
  • Cognitive abilities: People who are better at thinking critically and evaluating evidence are also more likely to be intellectually humble.
  • Cultural factors: People who live in cultures that value open-mindedness and tolerance are more likely to be intellectually humble.
Consequences of intellectual humility: Intellectual humility has a number of positive consequences, including:
  • Better decision-making: Intellectually humble people are more likely to make better decisions because they are more open to new information and perspectives.
  • Enhanced learning: Intellectually humble people are more likely to learn from their mistakes and to grow as individuals.
  • Stronger relationships: Intellectually humble people are more likely to have strong relationships because they are more willing to listen to others and to consider their perspectives.

Overall, intellectual humility is a valuable trait that can lead to a number of positive outcomes.

Monday, May 22, 2023

New evaluation guidelines for dementia

The Monitor on Psychology
Vol. 54, No. 3
Print Version: Page 40

Updated APA guidelines are now available to help psychologists evaluate patients with dementia and their caregivers with accuracy and sensitivity and learn about the latest developments in dementia science and practice.

APA Guidelines for the Evaluation of Dementia and Age-Related Cognitive Change (PDF, 992KB) was released in 2021 and reflects updates in the field since the last set of guidelines, released in 2011, said geropsychologist and University of Louisville professor Benjamin T. Mast, PhD, ABPP, who chaired the task force that produced the guidelines.

“These guidelines aspire to help psychologists gain not only a high level of technical expertise in understanding the latest science and procedures for evaluating dementia,” he said, “but also have a high level of sensitivity and empathy for those undergoing a life change that can be quite challenging.”

Major updates since 2011 include:

Discussion of new DSM terminology. The new guidelines discuss changes in dementia diagnosis and diagnostic criteria reflected in the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition). In particular, the DSM-5 changed the term “dementia” to “major neurocognitive disorder,” and “mild cognitive impairment” to “mild neurocognitive disorder.” As was true with earlier nomenclature, providers and others amend these terms depending on the cause or causes of the disorder, for example, “major neurocognitive disorder due to traumatic brain injury.” That said, the terms “dementia” and “mild cognitive impairment” are still widely used in medicine and mental health care.

Discussion of new research guidelines. The new guidelines also discuss research advances in the field, in particular the use of biomarkers to detect various forms of dementia. Examples are the use of amyloid imaging—PET scans with a radio tracer that selectively binds to amyloid plaques—and analysis of amyloid and tau in cerebrospinal fluid. While these techniques are still mainly used in major academic medical centers, it is important for clinicians to know about them because they may eventually be used in clinical practice, said Bonnie Sachs, PhD, ABPP, an associate professor and neuropsychologist at Wake Forest University School of Medicine. “These developments change the way we think about things like Alzheimer’s disease, because they show there is a long preclinical asymptomatic phase before people start to show memory problems,” she said.

Sunday, January 1, 2023

The Central Role of Lifelong Learning & Humility in Clinical Psychology

Washburn, J. J., Teachman, B. A., et al. 
(2022). Clinical Psychological Science, 0(0).
https://doi.org/10.1177/21677026221101063

Abstract

Lifelong learning plays a central role in the lives of clinical psychologists. As psychological science advances and evidence-based practices develop, it is critical for clinical psychologists to not only maintain their competencies but to also evolve them. In this article, we discuss lifelong learning as a clinical, ethical, and scientific imperative in the myriad dimensions of the clinical psychologist’s professional life, arguing that experience alone is not sufficient. Attitude is also important in lifelong learning, and we call for clinical psychologists to adopt an intellectually humble stance and embrace “a beginner’s mind” when approaching new knowledge and skills. We further argue that clinical psychologists must maintain and refresh their critical-thinking skills and seek to minimize their biases, especially as they approach the challenges and opportunities of lifelong learning. We intend for this article to encourage psychologists to think differently about how they approach lifelong learning.

Here is an excerpt:

Schwartz (2008) was specifically referencing the importance of teaching graduate students to embrace what they do not know, viewing it as an opportunity instead of a threat. The same is true, perhaps even more so, for psychologists engaging in lifelong learning.

As psychologists progress in their careers, they are told repeatedly that they are experts in their field and sometimes THE expert in their own tiny subfield. Psychologists spend their days teaching others what they know and advising students how to make their own discoveries. But expertise is a double-edged sword. Of course, it serves psychologists well in that they are less likely to repeat past mistakes, but it is a disadvantage if they become too comfortable in their expert role. The Egyptian mathematician, Ptolemy, devised a system based on the notion that the sun revolved around the earth that guided astronomers for centuries until Copernicus proved him wrong. Although Newton devised the laws of physics, Einstein showed that the principles of Newtonian physics were wholly bound by context and only “right” within certain constraints. Science is inherently self-correcting, and the only thing that one can count on is that most of what people believe today will be shown to be wrong in the not-too-distant future. One of the authors (S. D. Hollon) recalls that the two things that he knew for sure coming out of graduate school was that neural tissues do not regenerate and that you cannot inherit acquired characteristics. It turns out that both are wrong. Lifelong learning and the science it is based on require psychologists to continuously challenge their expertise. Before becoming experts, psychologists often experience impostor phenomenon during education and training (Rokach & Boulazreg, 2020). Embracing the self-doubt that comes with feeling like an impostor can motivate lifelong learning, even for areas in which one feels like an expert. This means not only constantly learning about new topics but also recognizing that as psychologists tackle tough problems and their associated research questions, complex and often interdisciplinary approaches are required to develop meaningful answers. It is neither feasible nor desirable to become an expert in all domains. This means that psychologists need to routinely surround themselves with people who make them question or expand their expertise.

Here is the conclusion:

Lifelong learning should, like doctoral programs in clinical psychology, concentrate much more on thinking than training. Lifelong learning must encourage critical and independent thinking in the process of mastering relevant bodies of knowledge and the development of specific skills. Specifically, lifelong learning must reinforce the need for clinical psychologists to reflect carefully and critically on what they read, hear, and say and to think abstractly. Such abstract thinking is as relevant after one’s graduate career as before.

Saturday, November 12, 2022

Loss aversion, the endowment effect, and gain-loss framing shape preferences for noninstrumental information

Litovsky, Y., Loewenstein, G., et al.
PNAS, 119(34).
August 23, 2022

Abstract

We often talk about interacting with information as we would with a physical good (e.g., “consuming content”) and describe our attachment to personal beliefs in the same way as our attachment to personal belongings (e.g., “holding on to” or “letting go of” our beliefs). But do we in fact value information the way we do objects? The valuation of money and material goods has been extensively researched, but surprisingly few insights from this literature have been applied to the study of information valuation. This paper demonstrates that two fundamental features of how we value money and material goods embodied in Prospect Theory—loss aversion and different risk preferences for gains versus losses—also hold true for information, even when it has no material value. Study 1 establishes loss aversion for noninstrumental information by showing that people are less likely to choose a gamble when the same outcome is framed as a loss (rather than gain) of information. Study 2 shows that people exhibit the endowment effect for noninstrumental information, and so value information more, simply by virtue of “owning” it. Study 3 provides a conceptual replication of the classic “Asian Disease” gain-loss pattern of risk preferences, but with facts instead of human lives, thereby also documenting a gain-loss framing effect for noninstrumental information. These findings represent a critical step in building a theoretical analogy between information and objects, and provide a useful perspective on why we often resist changing (or losing) our beliefs.

Significance

We build on Abelson and Prentice’s conjecture that beliefs are not merely valued as guides to interacting with the world, but as cherished possessions. Extending this idea to information, we show that three key phenomena which characterize the valuation of money and material goods—loss aversion, the endowment effect, and the gain-loss framing effect—also apply to noninstrumental information. We discuss, more generally, how the analogy between noninstrumental information and material goods can help make sense of the complex ways in which people deal with the huge expansion of available information in the digital age.

From the Discussion

Economists have traditionally treated the value of information as derivative of its consequences for decision-making. While prior research on noninstrumental information has shown that this narrow view of information may be incomplete, only a few accounts have attempted to explain intrinsic preferences for information. One such account argues that people seek (or avoid) information inasmuch as doing so helps them maintain their cherished beliefs. Another proposes that people choose which information to seek or avoid by considering how it will impact their actions, affect, and cognition. Yet, outside of the curiosity literature, no existing account of information valuation considers preferences for information that has neither instrumental nor (concrete) hedonic value. By showing that key features of Prospect Theory’s value function also apply to individuals’ valuation of (even noninstrumental) information, the current paper suggests that we may also value information in some of the same fundamental ways that we value physical goods.
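For readers who want the formal piece behind this: the Prospect Theory value function that the paper extends to information can be written in a few lines. This is a standard textbook sketch using Tversky and Kahneman's 1992 parameter estimates, not code from the paper.

```python
# Prospect Theory value function (Tversky & Kahneman, 1992 estimates).
ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

def value(x: float) -> float:
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# Losing one "unit" of information hurts more than gaining one pleases:
print(value(1.0))    #  1.00
print(value(-1.0))   # -2.25
```

The paper's claim, in these terms, is that the asymmetry and curvature of this function show up even when x is a piece of noninstrumental information rather than money.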

Tuesday, August 2, 2022

How to end cancel culture

Jennifer Stefano
Philadelphia Inquirer
Originally posted 25 JUL 22

Here is an excerpt:

Radical politics requires radical generosity toward those with whom we disagree — if we are to remain a free and civil society that does not descend into violence. Are we not a people defined by the willingness to spend our lives fighting against what another has said, but give our lives to defend her right to say it? Instead of being hypersensitive fragilistas, perhaps we could give that good old-fashioned American paradox a try again.

But how? Start by engaging in the democratic process by first defending people’s right to be awful. Then use that right to point out just how awful someone’s words or deeds are. Accept that you have freedom of speech, not freedom from offense. A free society best holds people accountable in the arena of ideas. When we trade debate for the dehumanizing act of cancellation, we head down a dangerous path — even if the person who would be canceled has behaved in a dehumanizing way toward others.

Canceling those with opinions most people deem morally wrong and socially unacceptable (racism, misogyny) leads to a permissiveness in simply labeling speech we do not like as those very things without any reason or recourse. Worse, cancel culture is creating a society where dissenting or unpopular opinions become a risk. Canceling isn’t about debate but dehumanizing.

Speech is free. The consequences are not. Actress Constance Wu attempted suicide after she was canceled in 2019 for publicly tweeting she didn’t love her job on a hit TV show. Her words harmed no one, but she was publicly excoriated for them. Private DMs from her fellow Asian actresses telling her she was a “blight” on the Asian American community made her believe she didn’t deserve to live. Wu didn’t lose her job for her words, but she nearly lost her life.

Cancel culture does more than make the sinner pay a penance. It offers none of the healing redemption necessary for a free and civil society. In America, we have always believed in second chances. It is the basis for the bipartisan work on issues like criminal justice reform. Our achievements here have been a bright spot.

We as a civil society want to give the formerly incarcerated a second chance. How about doing the same for each other?

Friday, July 8, 2022

AI bias can arise from annotation instructions

K. Wiggers & D. Coldewey
TechCrunch
Originally posted 8 MAY 22

Here is an excerpt:

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all birds in these photos”) along with several examples.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, or AI systems that can classify, summarize, translate and otherwise analyze or manipulate text. In studying the task instructions provided to annotators that worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias overestimates the performance of systems and that these systems often fail to generalize beyond instruction patterns.
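One way to picture the kind of analysis described here (a hypothetical sketch, not the study's actual code) is to measure how often annotations begin with phrases lifted straight from the annotator instructions:

```python
from collections import Counter

# Hypothetical check for "instruction bias": how often do annotations
# reuse opening phrases that appear in the task instructions?

instruction_phrases = ["what is the name", "who is"]

annotations = [
    "What is the name of the ship's captain?",
    "What is the name of the second speaker?",
    "Who is referred to as 'the elder'?",
    "Which city does the author describe first?",
]

def opening_phrase_counts(texts, phrases):
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for phrase in phrases:
            if lowered.startswith(phrase):
                counts[phrase] += 1
    return counts

counts = opening_phrase_counts(annotations, instruction_phrases)
for phrase, n in counts.items():
    print(f"{phrase!r}: {n / len(annotations):.0%} of annotations")
```

A high concentration of a few instruction-derived openings, as in the Quoref example, is the pattern the researchers flag as propagating into trained models.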

The silver lining is that large systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating the downstream impact.



Tuesday, June 28, 2022

You Think Failure Is Hard? So Is Learning From It

Eskreis-Winkler, L., & Fishbach, A. (2022).
Perspectives on Psychological Science. 
https://doi.org/10.1177/17456916211059817

Abstract

Society celebrates failure as a teachable moment. But do people actually learn from failure? Although lay wisdom suggests people should, a review of the research suggests that this is hard. We present a unifying framework that points to emotional and cognitive barriers that make learning from failure difficult. Emotions undermine learning because people find failure ego-threatening. People tend to look away from failure and not pay attention to it to protect their egos. Cognitively, people also struggle because the information in failure is less direct than the information in success and thus harder to extract. Beyond identifying barriers, this framework suggests inroads by which barriers might be addressed. Finally, we explore implications. We outline what, exactly, people miss out on when they overlook the information in failure. We find that the information in failure is often high-quality information that can be used to predict success.

Conclusion

From a young age, we are told that there is information in failure, and we ought to learn from it. Yet, people struggle to see the information in failure. As a result, they struggle to learn.  We present a unifying framework that identifies the emotional and cognitive barriers that make it difficult for people to learn from failure.

Understanding these barriers is especially important when one considers the information in failure. The information in failure is both rich and unique—indeed it is often richer, more informative, and more useful than the information in success.

What to do in a world where the information in failure is rich, yet people struggle to see it? One recommendation is to explore the solutions that we propose here. Remove the ego from failure, shore up the ego so it can tolerate failure, and ease the cognitive burdens of learning from failure to promote it in practice and through culture. We believe such techniques are well worth understanding and investing in, since there is so much to learn from the information in failure when we see it.

Sunday, April 17, 2022

Leveraging artificial intelligence to improve people’s planning strategies

F. Callaway, et al. (2022).
PNAS, 119(12), e2117432119.

Abstract

Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

Significance

Many bad decisions and their devastating consequences could be avoided if people used optimal decision strategies. Here, we introduce a principled computational approach to improving human decision making. The basic idea is to give people feedback on how they reach their decisions. We develop a method that leverages artificial intelligence to generate this feedback in such a way that people quickly discover the best possible decision strategies. Our empirical findings suggest that a principled computational approach leads to improvements in decision-making competence that transfer to more difficult decisions in more complex environments. In the long run, this line of work might lead to apps that teach people clever strategies for decision making, reasoning, goal setting, planning, and goal achievement.

From the Discussion

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning.
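The paper derives optimal strategies for metalevel decision problems; as a generic stand-in for what "leveraging AI to derive an optimal decision strategy" means, here is the classic value-iteration routine for a small, fully specified decision problem (my illustrative sketch, not the authors' tutor):

```python
import numpy as np

# Generic value iteration for a tiny MDP (illustrative stand-in for
# "deriving an optimal decision strategy"; not the paper's metalevel model).
# T[a, s, s'] = P(s' | s, a);  R[a, s] = expected immediate reward.

n_states, n_actions, gamma = 3, 2, 0.95
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.normal(size=(n_actions, n_states))

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * (T @ V)   # Q[a, s]
    V_new = Q.max(axis=0)     # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)     # optimal action per state
print("Optimal policy:", policy)
```

The tutor described in the paper then goes a step further: instead of just outputting the policy, it gives learners immediate feedback that steers their own planning toward it.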

Monday, September 6, 2021

Paranoia and belief updating during the COVID-19 crisis

Suthaharan, P., Reed, E.J., Leptourgos, P. et al. 
Nat Hum Behav (2021). 
https://doi.org/10.1038/s41562-021-01176-8

Abstract

The COVID-19 pandemic has made the world seem less predictable. Such crises can lead people to feel that others are a threat. Here, we show that the initial phase of the pandemic in 2020 increased individuals’ paranoia and made their belief updating more erratic. A proactive lockdown made people’s belief updating less capricious. However, state-mandated mask-wearing increased paranoia and induced more erratic behaviour. This was most evident in states where adherence to mask-wearing rules was poor but where rule following is typically more common. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable. People who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines and the QAnon conspiracy theories. These beliefs were associated with erratic task behaviour and changed priors. Taken together, we found that real-world uncertainty increases paranoia and influences laboratory task behaviour.

Discussion

The COVID-19 pandemic has been associated with increased paranoia. The increase was less pronounced in states that enforced a more proactive lockdown and more pronounced at reopening in states that mandated mask-wearing. Win-switch behaviour and volatility priors tracked these changes in paranoia with policy. We explored cultural variations in rule following (CTL) as a possible contributor to the increased paranoia that we observed. State tightness may originate in response to threats such as natural disasters, disease, territorial and ideological conflict. Tighter states typically evince more coordinated threat responses. They have also experienced greater mortality from pneumonia and influenza throughout their history. However, paranoia was highest in tight states with a mandate, with lower mask adherence during reopening. It may be that societies that adhere rigidly to rules are less able to adapt to unpredictable change. Alternatively, these societies may prioritize protection from ideological and economic threats over a public health crisis or perhaps view the disease burden as less threatening.
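"Win-switch behaviour" has a simple operational definition worth making explicit (a sketch under my own assumptions about the data format): the proportion of trials on which a participant switches choice immediately after a rewarded trial.

```python
# Win-switch rate: how often a participant abandons a choice
# right after it paid off. The data format here is an assumption:
# parallel lists of choices and 0/1 outcomes per trial.

def win_switch_rate(choices, outcomes):
    wins = 0
    switches_after_win = 0
    for t in range(len(choices) - 1):
        if outcomes[t] == 1:
            wins += 1
            if choices[t + 1] != choices[t]:
                switches_after_win += 1
    return switches_after_win / wins if wins else 0.0

choices = ["A", "A", "B", "B", "A", "A"]
outcomes = [1,   1,   1,   0,   1,   0]
print(win_switch_rate(choices, outcomes))  # 1 switch after 4 wins -> 0.25
```

Higher rates of this erratic switching, in the study, tracked both paranoia and the policy periods described above.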

Wednesday, July 15, 2020

Empathy is both a trait and a skill. Here's how to strengthen it.

Kristen Rogers
CNN.com
Originally posted 24 June 20

Here is an excerpt:

Types of empathy

Empathy is more about looking for a common humanity, while sympathy entails feeling pity for someone's pain or suffering, Konrath said.

"Whereas empathy is the ability to perceive accurately what another person is feeling, sympathy is compassion or concern stimulated by the distress of another," Lerner said. "A common example of empathy is accurately detecting when your child is afraid and needs encouragement. A common example of sympathy is feeling sorry for someone who has lost a loved one."

(cut)

A "common mistake is to leap into sympathy before empathically understanding what another person is feeling," Lerner said. Two types of empathy can prevent that relationship blunder.

Emotional empathy, sometimes called compassion, is more intuitive and involves care and concern for others.

Cognitive empathy requires effort and more systematic thinking, so it may lead to more empathic accuracy, Lerner said. It entails considering others and their perspectives and imagining what it's like to be them, Konrath added.

Some work managers and colleagues, for example, have had to practice empathy for parents juggling remote work with child care and virtual learning duties, said David Anderson, senior director of national programs and outreach at the Child Mind Institute. ... But since the outset of the pandemic in March, that empathy has faded — reflecting the notion that cognitive empathy does take effort.

It takes work to interpret what someone is feeling by all of his cues: facial expressions, tones of voice, posture, words and more. Then you have to connect those cues with what you know about him and the situation in order to accurately infer his feelings.

"This kind of inference is a highly complex social-cognitive task" that might involve a variation of mental processes, Lerner said.

The info is here.

Thursday, June 18, 2020

Measuring Information Preferences

E. H. Ho, D. Hagmann, & G. Loewenstein
Management Science
Published online: 13 Mar 2020

Abstract

Advances in medical testing and widespread access to the internet have made it easier than ever to obtain information. Yet, when it comes to some of the most important decisions in life, people often choose to remain ignorant for a variety of psychological and economic reasons. We design and validate an information preferences scale to measure an individual’s desire to obtain or avoid information that may be unpleasant but could improve future decisions. The scale measures information preferences in three domains that are psychologically and materially consequential: consumer finance, personal characteristics, and health. In three studies incorporating responses from over 2,300 individuals, we present tests of the scale’s reliability and validity. We show that the scale predicts a real decision to obtain (or avoid) information in each of the domains as well as decisions from out-of-sample, unrelated domains. Across settings, many respondents prefer to remain in a state of active ignorance even when information is freely available. Moreover, we find that information preferences are a stable trait but that an individual’s preference for information can differ across domains.

General Discussion

Making good decisions is often contingent on obtaining information, even when that information is uncertain and has the potential to produce unhappiness. Substantial empirical evidence suggests that people are often ready to make worse decisions in the service of avoiding potentially painful information. We propose that this tendency to avoid information is a trait that is separate from those measured previously, and developed a scale to measure it. The scale asks respondents to imagine how they would respond to a variety of hypothetical decisions involving information acquisition/avoidance. The predictive validity of the IPS appears to be largely driven by its domain items, and although it incorporates domain-specific subscales, it appears to be sufficiently universal to capture preferences for information in a broad range of domains.

The research is here.

We already knew, to some extent, that there are cases where people avoid information. This matters in psychotherapy, where avoidance promotes confirmatory hypothesis testing, which in turn enhances overconfidence. We need to help people embrace information that may be inconsistent or incongruent with their worldview.

Monday, March 16, 2020

Video Games Need More Complex Morality Systems

Hayes Madsen
screenrant.com
Originally published 26 Feb 20

Here is an excerpt:

Perhaps a bigger issue is the simple fact that games separate decisions into these two opposed ideas. There's a growing idea that games need to represent morality as shades of grey, rather than black and white. Titles like The Witcher 3 further this effort by trying to make each conflict not have a right or wrong answer, as well as consequences, but all too often the neutral path is ignored. Even with multiple moral options, games generally reward players for being good or evil. Take inFamous for example, as making moral choices rewards you with good or bad karma, which in turn unlocks new abilities and powers. The problem here is that great powers are locked away for players on either end, cordoning off gameplay based on your moral choices.

Video games need to make more of an effort to make any choice matter for players, and if they decide to go back and forth between good and evil, that should be represented, not discouraged. Things are seldom black and white, and for games to represent that properly there needs to be incentive across the board, whether the player wants to be good, evil, or anything in between.

Moral choices can shape the landscape of game worlds, even killing characters or entire races. Yet, choices don't always need to be so dramatic or earth-shattering. Characterization is important for making huge decisions, but the smaller day-to-day decisions often have a bigger impact on fleshing out characters.

The info is here.
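As a toy of what a less binary system might look like (purely my own illustration, not from the article): track morality as a continuous value and gate abilities on ranges across the whole spectrum, so neutral play is a viable build rather than a locked-out middle.

```python
# Toy "shades of grey" morality system (illustrative only).
# Karma is continuous; abilities unlock across the whole spectrum,
# so neutral play is a viable path rather than a dead zone.

from dataclasses import dataclass, field

@dataclass
class Morality:
    karma: float = 0.0  # -1.0 (ruthless) .. +1.0 (saintly)
    abilities: dict = field(default_factory=lambda: {
        "radiant_shield": (0.6, 1.0),    # heroic builds
        "silver_tongue": (-0.3, 0.3),    # neutral builds
        "dread_aura": (-1.0, -0.6),      # villainous builds
    })

    def choose(self, weight: float):
        # Small day-to-day choices nudge karma; big ones swing it.
        self.karma = max(-1.0, min(1.0, self.karma + weight))

    def unlocked(self):
        return [name for name, (lo, hi) in self.abilities.items()
                if lo <= self.karma <= hi]

m = Morality()
m.choose(-0.2)
print(m.unlocked())  # ['silver_tongue'] -- neutral play still rewarded
```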

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
Forbes.com
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.

The info is here.
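Watson's first point, that DNNs are easy to fool, is usually demonstrated with adversarial perturbations. Here is a minimal sketch of the standard fast gradient sign method in PyTorch (my illustration of the general technique, not code from the article; `model` is a placeholder for any differentiable image classifier):

```python
import torch

# Fast gradient sign method (FGSM): a standard way to fool a classifier
# with a perturbation too small for a human to notice.

def fgsm_attack(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch: a "banana" image may now be classified as something else.
# adv = fgsm_attack(model, banana_image, label_banana)
# print(model(adv).argmax(dim=1))
```

A human glancing at the perturbed image sees the same banana, which is exactly the disconnect between biological and artificial networks that Watson highlights.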