Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, March 15, 2025

Understanding and supporting thinking and learning with generative artificial intelligence.

Agnoli, S., & Rapp, D. N. (2024). Understanding and supporting thinking and learning with generative artificial intelligence. Journal of Applied Research in Memory and Cognition, 13(4), 495–499.

Abstract

Generative artificial intelligence (AI) is ubiquitous, appearing as large language model chatbots that people can query directly and collaborate with to produce output, and as authors of products that people are presented with through a variety of information outlets including, but not limited to, social media. AI has considerable promise for helping people develop expertise and for supporting expert performance, with a host of hedges and caveats to be applied in any related advocations. We propose three sets of considerations and concerns that may prove informative for theoretical discussions and applied research on generative AI as a collaborative thought partner. Each of these considerations is informed and inspired by well-worn psychological research on knowledge acquisition. They are (a) a need to understand human perceptions of and responses to AI, (b) the utility of appraising and supporting people’s control of AI, and (c) the importance of careful attention to the quality of AI output.

Here are some thoughts:

Generative AI, especially large language models (LLMs), can support human thinking and learning by helping people acquire knowledge and by enhancing expert performance. Realizing this potential, however, requires attention to psychological factors.

Firstly, how humans perceive and respond to AI is crucial. Users' trust, beliefs, and prior experiences with AI shape its effectiveness as a collaborative thought partner. Future research should explore how these perceptions affect AI adoption and learning outcomes.

Secondly, control in human-AI interactions is vital for successful partnerships. Clear roles, expertise, and decision-making authority support productive collaboration, and empowering users to customize their interactions enhances learning and builds trust.

Thirdly, the quality of AI output plays a central role in learning. Addressing inaccuracies, biases, and "hallucinations" is essential for reliability, and further research is needed to improve and evaluate AI-generated content, especially for educational use.

Lastly, the rapid evolution of AI requires users to be adaptable and equipped with strong metacognitive skills. Metacognition (thinking about one's own thinking) is crucial for navigating AI interactions. Understanding how users process AI-generated information and designing educational interventions that increase AI awareness are essential steps. By fostering critical thinking and self-regulation, users can better integrate AI-generated insights into their learning.

Generative AI holds promise for enhancing human thinking and learning, but its success depends on addressing human factors, ensuring output quality, and promoting adaptability. Integrating psychological insights and emphasizing metacognitive awareness can help harness AI responsibly and effectively. This approach fosters a collaborative relationship between humans and AI in which technology augments intelligence without undermining autonomy, meaningfully advancing knowledge acquisition and learning.