Hagendorff, T., et al. (2023). arXiv (Cornell University).
Abstract
Large language models (LLMs) show increasingly advanced emergent capabilities and are being incorporated across various societal domains. Understanding their behavior and reasoning abilities therefore holds significant importance. We argue that a fruitful direction for research is engaging LLMs in behavioral experiments inspired by psychology that have traditionally been aimed at understanding human cognition and behavior. In this article, we highlight and summarize theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table. It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks and focuses instead on computational insights that move us toward a better understanding and discovery of emergent abilities and behavioral patterns in LLMs. We review existing work taking this approach, synthesize best practices, and highlight promising future directions. We also highlight the important caveats of applying methodologies designed for understanding humans to machines. We posit that leveraging tools from experimental psychology to study AI will become increasingly valuable as models evolve to be more powerful, opaque, multi-modal, and integrated into complex real-world settings.
Here are some thoughts:
Machine psychology is an emerging field that aims to understand the complex behaviors of large language models (LLMs) by applying experimental methods traditionally used to study human cognition. By treating LLMs as participants in psychological experiments, researchers can probe their reasoning, decision-making, and potential biases. This approach goes beyond standard performance benchmarks, focusing instead on uncovering the mechanisms underlying LLM behavior. While caution is needed to avoid anthropomorphizing these models, the careful application of psychological concepts and paradigms can substantially improve our ability to explain, predict, and safely develop LLMs.