Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Consciousness.

Tuesday, January 18, 2022

MIT Researchers Just Discovered an AI Mimicking the Brain on Its Own

Eric James Beyer
Interesting Engineering
Originally posted 18 DEC 21

Here is an excerpt:

In the wake of these successes, Martin began to wonder whether or not the same principle could be applied to higher-level cognitive functions like language processing. 

“I said, let’s just look at neural networks that are successful and see if they’re anything like the brain. My bet was that it would work, at least to some extent.”

To find out, Martin and colleagues compared data from 43 artificial neural network language models against fMRI and ECoG neural recordings taken while subjects listened to or read words as part of a text. The AI models the group surveyed covered all the major classes of available neural network approaches for language-based tasks. Some of them were more basic embedding models like GloVe, which clusters semantically similar words together in groups. Others, like the models known as GPT and BERT, were far more complex. These models are trained to predict the next word in a sequence or predict a missing word within a certain context, respectively. 

“The setup itself becomes quite simple,” Martin explains. “You just show the same stimuli to the models that you show to the subjects [...]. At the end of the day, you’re left with two matrices, and you test if those matrices are similar.”

And the results? 

“I think there are three-and-a-half major findings here,” Schrimpf says with a laugh. “I say ‘and a half’ because the last one we still don’t fully understand.”

Machine learning that mirrors the brain

The finding that sticks out to Martin most immediately is that some of the models predict neural data extremely well. In other words, regardless of how good a model was at performing a task, some of them appear to resemble the brain’s cognitive mechanics for language processing. Intriguingly, the team at MIT identified the GPT model variants as the most brain-like out of the group they looked at.
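The “two matrices” comparison Schrimpf describes can be illustrated with a short, self-contained sketch. This is only an illustration of the general idea, not the MIT group’s actual analysis pipeline: it assumes you already have one matrix of model activations and one matrix of neural recordings for the same stimuli (both hypothetical, randomly generated here), and it scores their similarity by correlating the two representational similarity matrices.

```python
# Minimal sketch (not the study's actual pipeline): compare a language model's
# activations with neural recordings for the same stimuli by correlating their
# representational similarity matrices. All data below are hypothetical.
import numpy as np
from scipy.stats import spearmanr

def similarity_matrix(features):
    """Pairwise correlation between stimuli (rows) across features (columns)."""
    return np.corrcoef(features)

def model_brain_similarity(model_features, neural_responses):
    """Correlate the two similarity matrices over their upper triangles."""
    m = similarity_matrix(model_features)
    b = similarity_matrix(neural_responses)
    idx = np.triu_indices_from(m, k=1)
    rho, _ = spearmanr(m[idx], b[idx])
    return rho

# Hypothetical data: 50 stimuli (e.g., words), 300 model dimensions, 100 recording channels
rng = np.random.default_rng(0)
model_features = rng.normal(size=(50, 300))
neural_responses = rng.normal(size=(50, 100))
print(f"model-brain similarity: {model_brain_similarity(model_features, neural_responses):.3f}")
```

With random inputs the score hovers near zero; the reported finding is that activations from some models, especially the GPT variants, match the neural data far better than chance.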

Friday, November 5, 2021

Invisible gorillas in the mind: Internal inattentional blindness and the prospect of introspection training

Morris, A. (2021, September 26).

Abstract

Much of high-level cognition appears inaccessible to consciousness. Countless studies have revealed mental processes -- like those underlying our choices, beliefs, judgments, intuitions, etc. -- which people do not notice or report, and these findings have had a widespread influence on the theory and application of psychological science. However, the interpretation of these findings is uncertain. Making an analogy to perceptual consciousness research, I argue that much of the unconsciousness of high-level cognition is plausibly due to internal inattentional blindness: missing an otherwise consciously-accessible internal event because your attention was elsewhere. In other words, rather than being structurally unconscious, many higher mental processes might instead be "preconscious", and would become conscious if a person attended to them. I synthesize existing indirect evidence for this claim, argue that it is a foundational and largely untested assumption in many applied interventions (such as therapy and mindfulness practices), and suggest that, with careful experimentation, it could form the basis for a long-sought-after science of introspection training.

Conclusion

Just as people can miss perceptual events due to external inattention, so may they be blind to internal events – like those constituting high-level mental processes – due to internal inattention. The existence of internal inattentional blindness, and the possibility of overcoming it through training, are widely assumed in successful applied psychological practices and widely reported by practitioners; yet these possibilities have rarely been explored experimentally, or taken seriously by basic theorists. Rigorously demonstrating the existence of IIB could open a new chapter both in the development of psychological interventions, and in our understanding of the scope of conscious awareness.


Attention Therapists: Some very relevant information here.

Friday, September 3, 2021

What is consciousness, and could machines have it?

S. Dehaene, H. Lau, & S. Kouider
Science  27 Oct 2017:
Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

From Concluding remarks

Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.

We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?

Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. 

Monday, July 5, 2021

When Do Robots have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Nahmias, E., Allen, C. A., & Loveall, B.
In Free Will, Causality, & Neuroscience
Chapter 3

Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond to one of these robots if you were to discover them attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?

If you’re like most people, you are more likely to regard these robots as having free will and being morally responsible if you believe that they are conscious rather than non-conscious. That is, if you think that the robots actually experience sensations and emotions, you are more likely to regard them as having free will and being morally responsible than if you think they simply behave like humans based on their internal programming but with no conscious experiences at all. But why do many people have this intuition? Philosophers and scientists typically assume that there is a deep connection between consciousness and free will, but few have developed theories to explain this connection. To the extent that they have, it’s typically via some cognitive capacity thought to be important for free will, such as reasoning or deliberation, that consciousness is supposed to enable or bolster, at least in humans. But this sort of connection between consciousness and free will is relatively weak. First, it’s contingent; given our particular cognitive architecture, it holds, but if robots or aliens could carry out the relevant cognitive capacities without being conscious, this would suggest that consciousness is not constitutive of, or essential for, free will. Second, this connection is derivative, since the main connection goes through some capacity other than consciousness. Finally, this connection does not seem to be focused on phenomenal consciousness (first-person experience or qualia), but instead, on access consciousness or self-awareness (more on these distinctions below).

From the Conclusion

In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, they do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own death or others, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation, or our feeling such attitudes towards them. Perhaps the authors of these works recognize that the robots, and their stories, become most interesting when they seem to have free will, and people will see them as free when they start to care about what happens to them, when things really matter to them, which results from their experiencing the actual (and potential) outcomes of their actions.


Sunday, June 20, 2021

Artificial intelligence research may have hit a dead end

Thomas Nail
salon.com
Originally published 30 April 21

Here is an excerpt:

If it's true that cognitive fluctuations are requisite for consciousness, it would also take time for stable frequencies to emerge and then synchronize with one another in resting states. And indeed, this is precisely what we see in children's brains when they develop higher and more nested neural frequencies over time.

Thus, a general AI would probably not be brilliant in the beginning. Intelligence evolved through the mobility of organisms trying to synchronize their fluctuations with the world. It takes time to move through the world and learn to sync up with it. As the science fiction author Ted Chiang writes, "experience is algorithmically incompressible." 

This is also why dreaming is so important. Experimental research confirms that dreams help consolidate memories and facilitate learning. Dreaming is also a state of exceptionally playful and freely associated cognitive fluctuations. If this is true, why should we expect human-level intelligence to emerge without dreams? This is why newborns dream twice as much as adults, if they dream during REM sleep. They have a lot to learn, as would androids.

In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep.

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

Véliz, C. 
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.

Wednesday, March 10, 2021

Thought-detection: AI has infiltrated our last bastion of privacy

Gary Grossman
VentureBeat
Originally posted 13 Feb 21

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University in London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals that are used like radar. In this research, participants in the study watched a video while radio signals were sent towards them and measured when they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.
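As a rough, hypothetical illustration of the pipeline described above (not the Queen Mary team’s actual code or data), the sketch below assumes that heart-rate and breathing features have already been extracted from the reflected radio signals and trains a small neural network classifier to map them onto the four emotion labels.

```python
# Rough illustration only: classify emotion labels from physiological features
# (heart rate, breathing) assumed to have been recovered from reflected radio
# signals. All data here are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]

rng = np.random.default_rng(42)
# Hypothetical features per recording segment:
# [mean heart rate, heart-rate variability, breathing rate, breathing depth]
X = rng.normal(size=(400, 4))
y = rng.integers(0, len(EMOTIONS), size=400)  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted emotion for one segment:", EMOTIONS[clf.predict(X_test[:1])[0]])
```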

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up a very Orwellian idea of the ‘thought police’ from 1984. In this novel, the thought police watchers are expert at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking.


Wednesday, March 3, 2021

Evolutionary biology meets consciousness: essay review

Browning, H., Veit, W. 
Biol Philos 36, 5 (2021). 
https://doi.org/10.1007/s10539-021-09781-7

Abstract

In this essay, we discuss Simona Ginsburg and Eva Jablonka’s The Evolution of the Sensitive Soul from an interdisciplinary perspective. Constituting perhaps the longest treatise on the evolution of consciousness, Ginsburg and Jablonka unite their expertise in neuroscience and biology to develop a beautifully Darwinian account of the dawning of subjective experience. Though it would be impossible to cover all its content in a short book review, here we provide a critical evaluation of their two key ideas—the role of Unlimited Associative Learning in the evolution of, and detection of, consciousness and a metaphysical claim about consciousness as a mode of being—in a manner that will hopefully overcome some of the initial resistance of potential readers to tackle a book of this length.

Here is one portion:

Modes of being

The second novel idea within their book is to conceive of consciousness as a new mode of being, rather than a mere trait. This part of their argument may appear unusual to many operating in the debate, not the least because this formulation—not unlike their choice to include Aristotle’s sensitive soul in the title—evokes a sense of outdated and strange metaphysics. We share some of this opposition to this vocabulary, but think it best conceived as a metaphor.

They begin their book by introducing the idea of teleological (goal-directed) systems and the three ‘modes of being’, taken from the works of Aristotle, each of which is considered to have a unique telos (goal). These are: life (survival/reproduction), sentience (value ascription to stimuli), and rationality (value ascription to concepts). The focus of this book is the second of these—the “sensitive soul”. Rather than a trait, such as vision, G&J see consciousness as a mode of being, in the same way as the emergence of life and rational thought also constitute new modes of being.

In several places throughout their book, G&J motivate their account through this analogy, i.e. by drawing a parallel from consciousness to life and/or rationality. Neither, they think, can be captured in a simple definition or trait, thus explaining the lack of progress on trying to come up with definitions for these phenomena. Compare their discussion of the distinction between life and non-life. Life, they argue, is not a functional trait that organisms possess, but rather a new way of being that opens up new possibilities; so too with consciousness. It is a new form of biological organization at a level above the organism that gives rise to a “new type of goal-directed system”, one which faces a unique set of challenges and opportunities. They identify three such transitions—the transition from non-life to life (the “nutritive soul”), the transition from non-conscious to conscious (the “sensitive soul”) and the transition from non-rational to rational (the “rational soul”). All three transitions mark a change to a new form of being, one in which the types of goals change. But while this is certainly correct in the sense of constituting a radical transformation in the kinds of goal-directed systems there are, we have qualms with the idea that this formal equivalence or abstract similarity can be used to ground more concrete properties. Yet G&J use this analogy to motivate their UAL account in parallel to unlimited heredity as a transition marker of life.

Sunday, February 14, 2021

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Frank, L., Nyholm, S. 
Artif Intell Law 25, 305–323 (2017).
https://doi.org/10.1007/s10506-017-9212-y

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

Here is an excerpt:

Here, we want to ask a similar question regarding how and whether sex robots should be brought into the legal community. Our overarching question is: is it conceivable, possible, and desirable to create autonomous and smart sex robots that are able to give (or withhold) consent to sex with a human person? For each of these three sub-questions (whether it is conceivable, possible, and desirable to create sex robots that can consent) we consider both “no” and “yes” answers. We are here mainly interested in exploring these questions in general terms and motivating further discussion. However, in discussing each of these sub-questions we will argue that, prima facie, the “yes” answers appear more convincing than the “no” answers—at least if the sex robots are of a highly sophisticated sort.

The rest of our discussion divides into the following sections. We start by saying a little more about what we understand by a “sex robot”. We also say more about what consent is, and we review the small literature that is starting to emerge on our topic (Sect. 1). We then turn to the questions of whether it is conceivable, possible, and desirable to create sex robots capable of giving consent—and discuss “no” and “yes” answers to all of these questions. When we discuss the case for considering it desirable to require robotic consent to sex, we argue that there can be both non-instrumental and instrumental reasons in favor of such a requirement (Sects. 2–4). We conclude with a brief summary (Sect. 5).

Thursday, February 4, 2021

Robust inference of positive selection on regulatory sequences in the human brain

J. Liu & M. Robinson-Rechavi
Science Advances  27 Nov 2020:
Vol. 6, no. 48, eabc9863

Abstract

A longstanding hypothesis is that divergence between humans and chimpanzees might have been driven more by regulatory-level adaptations than by protein sequence adaptations. This has especially been suggested for regulatory adaptations in the evolution of the human brain. We present a new method to detect positive selection on transcription factor binding sites on the basis of measuring predicted affinity change with a machine learning model of binding. Unlike other methods, this approach requires neither defining a priori neutral sites nor detecting accelerated evolution, thus removing major sources of bias. We scanned the signals of positive selection for CTCF binding sites in 29 human and 11 mouse tissues or cell types. We found that human brain–related cell types have the highest proportion of positive selection. This result is consistent with the view that adaptive evolution of gene regulation has played an important role in the evolution of the human brain.
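The central quantity in this abstract, the predicted change in binding affinity between an ancestral and a human-derived binding-site sequence, can be sketched as follows. This is only an illustration of the idea: predict_affinity below is a toy stand-in for a trained machine learning model of CTCF binding, and the simple z-score screen is not the paper’s actual test for positive selection.

```python
# Illustration of the idea only: score ancestral vs. derived binding-site
# sequences with a (toy) affinity model and flag sites whose affinity gain
# stands out from the overall distribution. Not the paper's actual method.
import numpy as np

def predict_affinity(sequence: str) -> float:
    """Toy stand-in for a learned CTCF binding-affinity model (here just GC content)."""
    return sum(1.0 for base in sequence if base in "GC") / len(sequence)

def affinity_change(ancestral: str, derived: str) -> float:
    return predict_affinity(derived) - predict_affinity(ancestral)

# Hypothetical aligned binding-site pairs: (ancestral sequence, human-derived sequence)
sites = [("ATCGATCG", "ATCGATCC"), ("GGCCGGTA", "GGCCGGCC"), ("ATATATAT", "ATATATAT")]
changes = np.array([affinity_change(a, d) for a, d in sites])

# Screen for sites whose predicted affinity gain is unusually large
z_scores = (changes - changes.mean()) / (changes.std() + 1e-9)
for (a, d), z in zip(sites, z_scores):
    print(f"{a} -> {d}: affinity change z-score = {z:+.2f}")
```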

Summary:

With only 1 percent difference, the human and chimpanzee protein-coding genomes are remarkably similar. Understanding the biological features that make us human is part of a fascinating and intensely debated line of research. Researchers have developed a new approach to pinpoint adaptive human-specific changes in the way genes are regulated in the brain.

Friday, January 1, 2021

The weirdness of belief in free will

Berniūnas, R., et al.
Consciousness and Cognition
Volume 87, January 2021, 103054

Abstract

It has been argued that belief in free will is socially consequential and psychologically universal. In this paper we look at the folk concept of free will and its critical assessment in the context of recent psychological research. Is there a widespread consensus about the conceptual content of free will? We compared English “free will” with its lexical equivalents in Lithuanian, Hindi, Chinese, and Mongolian and found that, unlike Lithuanian, the Chinese, Hindi, and Mongolian lexical expressions of “free will” do not refer to the same concept of free will. What kind of people have been studied so far? A review of papers indicates that, overall, 91% of participants in studies on belief in free will were WEIRD. Thus, given that free will has no cross-culturally universal conceptual content and that most of the reviewed studies were based on WEIRD samples, belief in free will is not a psychological universal.

Highlights

• The concept of free will has no cross-culturally universal conceptual content.

• Most of the reviewed studies on belief in free will were based on WEIRD samples.

• The term “free will” is inadequate for cross-cultural research.

From the General Discussion

Unfortunately, there has been little effort in cross-cultural (construct and external) validation of the very concept of free will. In explicating the folk concept of free will, Monroe and Malle (2010) showed that the ability to make decisions and choice are the most prototypical features (see also Feldman, 2017; Feldman et al., 2014). However, this is a description only of the intuitions of English-speaking participants. Here we tested whether there is a widespread consensus about the conceptual content (of free will) across culturally and linguistically diverse samples — hence, universality and cultural hypotheses. Overall, on the basis of free-listing results, it could be argued that the two lexical expressions, English “free will” and Lithuanian “laisva valia”, refer to the same concept of free will, whereas Chinese ziyou yizhi, Hindi svatantra icchā, and Mongolian chölöötei khüsel, as newly constructed lexical expressions of “free will”, do not.

Wednesday, December 16, 2020

If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs

Anand Vaidya
The Conversation
Originally posted 27 Oct 20

Here is an excerpt:

There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.

In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.

Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.

Data is an android. How do these distinctions play out with respect to him?

The Data dilemma

The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.

Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.

He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.

However, Data most likely lacks phenomenal consciousness - he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness – can grab the pen – but across all his senses he lacks phenomenal consciousness.

Thursday, December 3, 2020

The psychologist rethinking human emotion

David Shariatmadari
The Guardian
Originally posted 25 Sept 20

Here is an excerpt:

Barrett’s point is that if you understand that “fear” is a cultural concept, a way of overlaying meaning on to high arousal and high unpleasantness, then it’s possible to experience it differently. “You know, when you have high arousal before a test, and your brain makes sense of it as test anxiety, that’s a really different feeling than when your brain makes sense of it as energised determination,” she says. “So my daughter, for example, was testing for her black belt in karate. Her sensei was a 10th degree black belt, so this guy is like a big, powerful, scary guy. She’s having really high arousal, but he doesn’t say to her, ‘Calm down’; he says, ‘Get your butterflies flying in formation.’” That changed her experience. “Her brain could have made anxiety, but it didn’t, it made determination.”

In the lectures Barrett gives to explain this model, she talks of the brain as a prisoner in a dark, silent box: the skull. The only information it gets about the outside world comes via changes in light (sight), air pressure (sound), exposure to chemicals (taste and smell), and so on. It doesn’t know the causes of these changes, and so it has to guess at them in order to decide what to do next.

How does it do that? It compares those changes to similar changes in the past, and makes predictions about the current causes based on experience. Imagine you are walking through a forest. A dappled pattern of light forms a wavy black shape in front of you. You’ve seen many thousands of images of snakes in the past, you know that snakes live in the forest. Your brain has already set in train an array of predictions.

The point is that this prediction-making is consciousness, which you can think of as a constant rolling process of guesses about the world being either confirmed or proved wrong by fresh sensory inputs. In the case of the dappled light, as you step forward you get information that confirms a competing prediction that it’s just a stick: the prediction of a snake was ultimately disproved, but not before it grew so strong that neurons in your visual cortex fired as though one was actually there, meaning that for a split second you “saw” it. So we are all creating our world from moment to moment. If you didn’t, your brain wouldn’t be able to make the changes necessary for your survival quickly enough. If the prediction “snake” wasn’t already in train, then the shot of adrenaline you might need in order to jump out of its way would come too late.
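The “rolling process of guesses” confirmed or corrected by sensory input can be written down as a very small predictive loop. The sketch below is my illustration, not Barrett’s model: an agent keeps a running prediction, receives a noisy observation, computes the prediction error, and nudges the prediction toward the observation.

```python
# Minimal prediction-and-error loop (an illustration, not Barrett's model):
# keep a running prediction, compare it with each noisy observation, and
# revise the prediction in proportion to the prediction error.
import numpy as np

rng = np.random.default_rng(7)
true_signal = 5.0      # the actual state of the world
prediction = 0.0       # the agent's initial guess
learning_rate = 0.3    # how strongly each error revises the guess

for step in range(10):
    observation = true_signal + rng.normal(scale=0.5)   # noisy sensory input
    error = observation - prediction                     # prediction error
    prediction += learning_rate * error                  # update the guess
    print(f"step {step}: observation={observation:.2f}, error={error:+.2f}, prediction={prediction:.2f}")
```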

Wednesday, October 14, 2020

‘Disorders of consciousness’: Understanding ‘self’ might be the greatest scientific challenge of our time

Joel Frohlich
Genetic Literacy Project
Originally published 18 Sept 20

Here are two excerpts:

Just as life stumped biologists 100 years ago, consciousness stumps neuroscientists today. It’s far from obvious why some brain regions are essential for consciousness and others are not. So Tononi’s approach instead considers the essential features of a conscious experience. When we have an experience, what defines it? First, each conscious experience is specific. Your experience of the colour blue is what it is, in part, because blue is not yellow. If you had never seen any colour other than blue, you would most likely have no concept or experience of colour. Likewise, if all food tasted exactly the same, taste experiences would have no meaning, and vanish. This requirement that each conscious experience must be specific is known as differentiation.

But, at the same time, consciousness is integrated. This means that, although objects in consciousness have different qualities, we never experience each quality separately. When you see a basketball whiz towards you, its colour, shape and motion are bound together into a coherent whole. During a game, you’re never aware of the ball’s orange colour independently of its round shape or its fast motion. By the same token, you don’t have separate experiences of your right and your left visual fields – they are interdependent as a whole visual scene.

Tononi identified differentiation and integration as two essential features of consciousness. And so, just as the essential features of life might lead a scientist to infer the existence of DNA, the essential features of consciousness led Tononi to infer the physical properties of a conscious system.
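For readers who want a more concrete handle on “differentiation” and “integration”, here is a toy sketch. It is not Tononi’s actual phi calculation: it simply treats differentiation as the entropy of the states a small binary system visits, and integration as the mutual information between the system’s two halves.

```python
# Toy sketch only (not IIT's phi): estimate the two properties the excerpt
# calls essential for consciousness in a small binary system.
#   differentiation ~ entropy of the whole-system state distribution
#   integration     ~ mutual information between the system's two halves
import numpy as np
from collections import Counter

def entropy(samples):
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
# Hypothetical observed states of a 4-unit binary system
states = [tuple(rng.integers(0, 2, size=4)) for _ in range(2000)]
left = [s[:2] for s in states]     # first half of the system
right = [s[2:] for s in states]    # second half of the system

differentiation = entropy(states)                                # repertoire of distinct states
integration = entropy(left) + entropy(right) - entropy(states)   # mutual information I(left; right)

print(f"differentiation (bits): {differentiation:.2f}")
print(f"integration (bits): {integration:.2f}")
```

Because the units here are independent random bits, the toy system is highly differentiated but barely integrated, exactly the combination the excerpt suggests falls short of conscious experience.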

(cut)

Consciousness might be the last frontier of science. If IIT continues to guide us in the right direction, we’ll develop better methods of diagnosing disorders of consciousness. One day, we might even be able to turn to artificial intelligences – potential minds unlike our own – and assess whether or not they are conscious. This isn’t science fiction: many serious thinkers – including the late physicist Stephen Hawking, the technology entrepreneur Elon Musk, the computer scientist Stuart Russell at the University of California, Berkeley and the philosopher Nick Bostrom at the Future of Humanity Institute in Oxford – take recent advances in AI seriously, and are deeply concerned about the existential risk that could be posed by human- or superhuman-level AI in the future. When is unplugging an AI ethical? Whoever pulls the plug on the super AI of coming decades will want to know, however urgent their actions, whether there truly is an artificial mind slipping into darkness or just a complicated digital computer making sounds that mimic fear.

Wednesday, July 8, 2020

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines (2020). 
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Conclusion

We have argued that to be able to contribute to pressing practical problems, the debate on AMA should be redirected to address outright normative ethical questions. Specifically, the questions of how and to what extent artificial entities should be involved in human practices where we normally assume moral agency and responsibility. The reason for our proposal is the high degree of conceptual confusion and lack of practical usefulness of the traditional AMA debate. And this reason seems especially strong in light of the current fast development and implementation of advanced, autonomous and self-evolving AI and robotic constructs.

Friday, January 17, 2020

Consciousness is real

Massimo Pigliucci
aeon.com
Originally published 16 Dec 19

Here is an excerpt:

Here is where the fundamental divide in philosophy of mind occurs, between ‘dualists’ and ‘illusionists’. Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science, as David Chalmers has been arguing for most of his career, for instance in his book The Conscious Mind (1996).

By embracing the antiscientific position, Chalmers & co are forced to go dualist. Dualism is the notion that physical and mental phenomena are somehow irreconcilable, two different kinds of beasts, so to speak. Classically, dualism concerns substances: according to René Descartes, the body is made of physical stuff (in Latin, res extensa), while the mind is made of mental stuff (in Latin, res cogitans). Nowadays, thanks to our advances in both physics and biology, nobody takes substance dualism seriously anymore. The alternative is something called property dualism, which acknowledges that everything – body and mind – is made of the same basic stuff (quarks and so forth), but that this stuff somehow (notice the vagueness here) changes when things get organised into brains and special properties appear that are nowhere else to be found in the material world. (For more on the difference between property and substance dualism, see Scott Calef’s definition.)

The ‘illusionists’, by contrast, take the scientific route, accepting physicalism (or materialism, or some other similar ‘ism’), meaning that they think – with modern science – not only that everything is made of the same basic kind of stuff, but that there are no special barriers separating physical from mental phenomena. However, since these people agree with the dualists that phenomenal consciousness seems to be spooky, the only option open to them seems to be that of denying the existence of whatever appears not to be physical. Hence the notion that phenomenal consciousness is a kind of illusion.

The essay is here.

Tuesday, December 17, 2019

We Might Soon Build AI Who Deserve Rights

Eric Schwitzgebel
The Splintered Mind blog
From a Talk at Notre Dame
Originally posted 17 Nov 19

Abstract

Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.

(cut)

But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among the people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

The blog post is here.

Thursday, December 12, 2019

Donald Hoffman: The Case Against Reality

The Institute of Art and Ideas
Originally published September 8, 2019


Many scientists believe that natural selection brought our perception of reality into clearer and deeper focus, reasoning that growing more attuned to the outside world gave our ancestors an evolutionary edge. Donald Hoffman, a cognitive scientist at the University of California, Irvine, thinks that just the opposite is true. Because evolution selects for survival, not accuracy, he proposes that our conscious experience masks reality behind millennia of adaptations for ‘fitness payoffs’ – an argument supported by his work running evolutionary game-theory simulations. In this interview recorded at the HowTheLightGetsIn Festival from the Institute of Art and Ideas in 2019, Hoffman explains why he believes that perception must necessarily hide reality for conscious agents to survive and reproduce. With that view serving as a springboard, the wide-ranging discussion also touches on Hoffman’s consciousness-centric framework for reality, and its potential implications for our everyday lives.
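The evolutionary game-theory simulations mentioned above can be caricatured in a few lines. The following sketch is my toy version, not Hoffman’s actual simulations: when fitness is a non-monotonic function of a resource, an agent that perceives fitness payoffs outscores one that accurately perceives the resource quantity itself.

```python
# Toy "fitness beats truth" sketch (my illustration, not Hoffman's simulations):
# with a non-monotonic payoff, an agent tuned to fitness payoffs outperforms
# an agent tuned to the true resource quantity.
import numpy as np

rng = np.random.default_rng(11)

def fitness(resource):
    # Non-monotonic payoff: a moderate amount of the resource is best
    return np.exp(-((resource - 0.5) ** 2) / 0.02)

trials = 10_000
truth_total, fitness_total = 0.0, 0.0
for _ in range(trials):
    a, b = rng.random(2)                   # true resource levels of two options
    truth_total += fitness(max(a, b))      # "truth" agent picks the larger quantity
    fitness_total += fitness(a if fitness(a) > fitness(b) else b)  # "fitness" agent picks the higher payoff

print(f"truth-tuned agent average payoff:   {truth_total / trials:.3f}")
print(f"fitness-tuned agent average payoff: {fitness_total / trials:.3f}")
```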

Editor Note: If you work as a mental health professional, this video may be helpful in understanding perceptions, understanding self, and consciousness.

Thursday, December 5, 2019

Galileo’s Big Mistake

Philip Goff
Scientific American Blog
Originally posted November 7, 2019

Here is an excerpt:

Galileo, as it were, stripped the physical world of its qualities; and after he’d done that, all that remained were the purely quantitative properties of matter—size, shape, location, motion—properties that can be captured in mathematical geometry. In Galileo’s worldview, there is a radical division between the following two things:
  • The physical world with its purely quantitative properties, which is the domain of science,
  • Consciousness, with its qualities, which is outside of the domain of science.
It was this fundamental division that allowed for the possibility of mathematical physics: once the qualities had been removed, all that remained of the physical world could be captured in mathematics. And hence, natural science, for Galileo, was never intended to give us a complete description of reality. The whole project was premised on setting qualitative consciousness outside of the domain of science.

What do these 17th century discussions have to do with the contemporary science of consciousness? It is now broadly agreed that consciousness poses a very serious challenge for contemporary science. Despite rapid progress in our understanding of the brain, we still have no explanation of how complex electrochemical signaling could give rise to a subjective inner world of colors, sounds, smells and tastes.

Although this problem is taken very seriously, many assume that the way to deal with this challenge is simply to continue with our standard methods for investigating the brain. The great success of physical science in explaining more and more of our universe ought to give us confidence, it is thought, that physical science will one day crack the puzzle of consciousness.

The blog post is here.

Monday, November 11, 2019

Why a computer will never be truly conscious

Subhash Kak
The Conversation
Originally published October 16, 2019

Here is an excerpt:

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work.

The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then ask its memory if that pattern could be created by alternate views of an item identified some time earlier.

Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture.

The info is here.