Gignac, G. E., & Szodorai, E. T. (2024).
Intelligence, 104, 101832.
Abstract
Achieving a widely accepted definition of human intelligence has been challenging, a situation mirrored by the diverse definitions of artificial intelligence in computer science. By critically examining published definitions, highlighting both consistencies and inconsistencies, this paper proposes a refined nomenclature that harmonizes conceptualizations across the two disciplines. Abstract and operational definitions for human and artificial intelligence are proposed that emphasize maximal capacity for completing novel goals successfully through respective perceptual-cognitive and computational processes. Additionally, support for considering intelligence, both human and artificial, as consistent with a multidimensional model of capabilities is provided. The implications of current practices in artificial intelligence training and testing are also described, as they can be expected to lead to artificial achievement or expertise rather than artificial intelligence. Paralleling psychometrics, ‘AI metrics’ is suggested as a needed computer science discipline that acknowledges the importance of test reliability and validity, as well as standardized measurement procedures in artificial system evaluations. Drawing parallels with human general intelligence, artificial general intelligence (AGI) is described as a reflection of the shared variance in artificial system performances. We conclude that current evidence more greatly supports the observation of artificial achievement and expertise over artificial intelligence. However, interdisciplinary collaborations, based on common understandings of the nature of intelligence, as well as sound measurement practices, could facilitate scientific progress.
Highlights
• Proposes unified definitions for human and artificial intelligence.
• Distinguishes between artificial achievement/expertise and artificial intelligence.
• Advocates for AI metrics to ensure good quality AI system evaluations.
• Describes artificial general intelligence (AGI) mirroring human general intelligence.
• Evidence currently favours the presence of artificial achievement over artificial intelligence.
Here are some thoughts:
This paper is critical in the context of rapid AI acceleration because it establishes a rigorous, interdisciplinary nomenclature to distinguish genuine "artificial intelligence" from what the authors term "artificial achievement" or "expertise". While modern AI developments often focus on the impressive performance of systems on specific benchmarks, this paper highlights that these systems are frequently trained on the very test items used to evaluate them, which violates the fundamental psychological requirement of "novelty" for measuring intelligence. By proposing a harmonized "AI metrics" framework that parallels psychometrics, the authors provide a scientific basis for evaluating whether AI systems possess a multidimensional capacity to solve unpracticed, novel goals rather than simply reflecting extensive data processing and pattern recognition from their training sets. Ultimately, the paper warns that current acceleration may be producing highly specialized expertise rather than true artificial general intelligence (AGI), and it suggests that scientific progress requires moving beyond single-dimension metrics to embrace more complex, hierarchical models of capability.
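The paper's description of AGI as "shared variance in artificial system performances" borrows a concrete statistical idea from psychometrics: when one common ability partly drives performance on many tasks, all task scores correlate positively (a "positive manifold") and a single dominant component accounts for much of the variance. The sketch below illustrates that idea with simulated data; the task loadings and noise level are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical illustration of "shared variance" across artificial system
# performances, analogous to the general factor (g) in psychometrics.
rng = np.random.default_rng(0)
n_systems, n_tasks = 200, 5

# Simulate scores driven partly by one common ability plus task-specific noise.
general_ability = rng.normal(size=(n_systems, 1))
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])  # assumed influence of the common factor
scores = general_ability * loadings + rng.normal(scale=0.6, size=(n_systems, n_tasks))

# Shared variance shows up as positive inter-task correlations
# and as a dominant first principal component of the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
mean_corr = corr[np.triu_indices(n_tasks, k=1)].mean()
eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, largest first
share_first_pc = eigvals[0] / eigvals.sum()

print(f"Mean inter-task correlation: {mean_corr:.2f}")
print(f"Variance explained by first component: {share_first_pc:.0%}")
```

In this toy setup the first component explains far more than the 1/5 of variance expected by chance, which is the kind of evidence psychometricians read as a general factor; the paper's point is that an analogous analysis across artificial systems would require novel, unpracticed test items to be meaningful.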
