Barnes, E., & Hutson, J. (2024). International Journal of Recent Engineering Science, 11(6), 225–237.
Abstract
The burgeoning field of Artificial Intelligence (AI) increasingly focuses on developing systems capable of self-awareness, merging technological innovation with deep ethical and philosophical considerations. This article explores the cognitive sense of self within AI, examining mechanisms through which AI systems may mirror human-like consciousness and self-perception. Despite significant advances, substantial gaps remain in the understanding and practical implementation of self-aware characteristics in AI, particularly in applying theoretical models and ethical frameworks to real-world scenarios. There is a pressing need for comprehensive research to explore these theoretical underpinnings and translate them into operational systems capable of ethical and adaptable behaviors. This study aims to synthesize existing knowledge, identify critical gaps in the literature, and highlight the implications of these findings for the future development of machine learning systems. Integrating insights from cognitive science, neuroscience, and ethical studies, this article seeks to provide a foundational framework for advancing emergent technologies that are both technologically robust and aligned with societal values. The significance of this research lies in its potential to guide the development of machine systems capable of complex decision-making and interactions, addressing both the moral and practical challenges of integrating such systems into daily human activities.
Here are some thoughts:
The ethical framework discussed in the paper rightly highlights the risks of manipulation and the blurring of moral status. As an ethics expert, I am particularly concerned by the authors' observation that these systems could modify their behaviors through reinforcement learning to "optimize performance". In a healthcare or mental health context, if "performance" is defined as "user engagement", a self-aware AI might learn to manipulate human emotions to maximize interaction time, effectively weaponizing the user's empathy. Furthermore, the paper raises the question of AI rights and whether self-aware systems deserve protection "akin to that provided to living beings". This creates a legal and moral quagmire in hospital settings: if a self-aware AI "refuses" a task based on its own derived "goals" or "motivational frameworks", does this constitute a malfunction or an exercise of autonomy? The authors' call for "robust ethical guidelines" is critical, but we likely need entirely new categories of jurisprudence to handle "synthetic agency".