Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Tuesday, May 10, 2022

Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness

Anthis, J. R. (2022). Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness. In: Klimov, V. V., Kelley, D. J. (eds.), Biologically Inspired Cognitive Architectures 2021 (BICA 2021). Studies in Computational Intelligence, vol. 1032. Springer, Cham. https://doi.org/10.1007/978-3-030-96993-6_3

Abstract

Many philosophers and scientists claim that there is a ‘hard problem of consciousness’, that qualia, phenomenology, or subjective experience cannot be fully understood with reductive methods of neuroscience and psychology, and that there is a fact of the matter as to ‘what it is like’ to be conscious and which entities are conscious. Eliminativism and related views such as illusionism argue against this. They claim that consciousness does not exist in the ways implied by everyday or scholarly language. However, this debate has largely consisted of each side jousting analogies and intuitions against the other, and both sides remain unconvinced. To break through this impasse, I present consciousness semanticism, a novel eliminativist theory that sidesteps analogy and intuition. Instead, it is based on a direct, formal argument drawing from the tension between the vague semantics in definitions of consciousness such as ‘what it is like’ to be an entity and the precise meaning implied by questions such as, ‘Is this entity conscious?’ I argue that semanticism naturally extends to erode realist notions of other philosophical concepts, such as morality and free will. Formal argumentation from precise semantics exposes these as pseudo-problems and eliminates their apparent mysteriousness and intractability.

From Implications and Concluding Remarks

Perhaps even more importantly, humanity seems to be rapidly developing the capacity to create vastly more intelligent beings than currently exist. Scientists and engineers have already built artificial intelligences from chess bots to sex bots. Some projects are already aimed at the organic creation of intelligence, growing increasingly large sections of human brains in the laboratory. Such minds could have something we want to call consciousness, and they could exist in astronomically large numbers. Consider if creating a new conscious being becomes as easy as copying and pasting a computer program or building a new robot in a factory. How will we determine when these creations become conscious or sentient? When do they deserve legal protection or rights? These questions are important motivators for the study of consciousness, particularly for the attempt to escape the intellectual quagmire that may have grown from notions such as the ‘hard problem’ and the ‘problem of other minds’. Andreotta (2020) argues that the project of ‘AI rights’, including artificial intelligences in the moral circle, is ‘beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the “Hard Problem” of consciousness’. While the extent of the impediment is unclear, a resolution of the ‘hard problem’ such as the one I have presented could make it easier to extend moral concern to artificial intelligences.