Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Consciousness.

Friday, March 15, 2024

The consciousness wars: can scientists ever agree on how the mind works?

Mariana Lenharo

Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything.

The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration.

Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between.

The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects.

The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room.
---------------
Here's what the article highlights:
  • Divisions abound: Researchers disagree on the very definition of consciousness, making comparisons between theories difficult. Some focus on subjective experience, while others look at the brain's functions.
  • Testing head-to-head: New research projects are directly comparing competing theories to see which one explains experimental data better. This could be a step towards finding a unifying explanation.
  • Heated debate: The recent critique of one prominent theory, Integrated Information Theory (IIT), shows the depth of the disagreements. Some question its scientific validity, while others defend it as a viable framework.
  • Hope for progress: Despite the disagreements, there's optimism. New research methods and a younger generation of researchers focused on collaboration could lead to breakthroughs in understanding this elusive phenomenon.

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Unknown date of post

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?

The info is here, and a book promotion too.

Saturday, January 13, 2024

Consciousness does not require a self

James Cook
iai.tv
Originally published 14 DEC 23

Here is an excerpt:

Beyond the neuroscientific study of consciousness, phenomenological analysis also reveals the self to not be the possessor of experience. In mystical experiences induced by meditation or psychedelics, individuals typically enter a mode of experience in which the psychological self is absent, yet consciousness remains. While this is not the default state of the mind, the presence of consciousness in the absence of a self shows that consciousness is not dependent on an experiencing subject. What is consciousness if not a capacity of an experiencing subject? Such an experience reveals consciousness to consist of a formless awareness at its core, an empty space in which experience arises, including the experience of being a self. The self does not possess consciousness, consciousness is the experiential space in which the image of a psychological self can appear. This mode of experience can be challenging to conceptualise but is very simple when experienced – it is a state of simple appearances arising without the extra add-on of a psychological self inspecting them.

We can think of a conscious system as a system that is capable of holding beliefs about the qualitative character of the world. We should not think of belief here as referring to complex conceptual beliefs, such as believing that Paris is the capital of France, but as the simple ability to hold that the world is a certain way. You do this when you visually perceive a red apple in front of you, the experience is one of believing the apple to exist with all of its qualities such as roundness and redness. This way of thinking is in line with the work of Immanuel Kant, who argued that we never come to know reality as it is but instead only experience phenomenal representations of reality [9]. We are not conscious of the world as it is, but as we believe it to be.


Here is my take:

For centuries, we've assumed consciousness and the sense of self are one and the same. This article throws a wrench in that assumption, proposing that consciousness can exist without a self. Imagine experiencing sights, sounds, and sensations without the constant "me" narrating it all. That's what "selfless consciousness" means – raw awareness untouched by self-reflection.

The article then posits that our familiar sense of self, complete with its stories and memories, isn't some fundamental truth but rather a clever prediction concocted by our brains. This "predicted self" helps us navigate the world and interact with others, but it's not necessarily who we truly are.

Decoupling consciousness from the self opens a Pandora's box of possibilities. We might find consciousness in unexpected places, like animals or even artificial intelligence. Understanding brain function could shift dramatically, and our very notions of identity, free will, and reality might need a serious rethink. This is a bold new perspective on what it means to be conscious, and its implications are quite dramatic.

Friday, December 22, 2023

Differential cortical network engagement during states of un/consciousness in humans

Zelmann, R., Paulk, A., et al. (2023).
Neuron, 111(21).

Summary

What happens in the human brain when we are unconscious? Despite substantial work, we are still unsure which brain regions are involved and how they are impacted when consciousness is disrupted. Using intracranial recordings and direct electrical stimulation, we mapped global, network, and regional involvement during wake vs. arousable unconsciousness (sleep) vs. non-arousable unconsciousness (propofol-induced general anesthesia). Information integration and complex processing were reduced, while variability increased in any type of unconscious state. These changes were more pronounced during anesthesia than sleep and involved different cortical engagement. During sleep, changes were mostly uniformly distributed across the brain, whereas during anesthesia, the prefrontal cortex was the most disrupted, suggesting that the lack of arousability during anesthesia results not from just altered overall physiology but from a disconnection between the prefrontal and other brain areas. These findings provide direct evidence for different neural dynamics during loss of consciousness compared with loss of arousability.

Highlights

• Decreased complexity and connectivity, with increased variability when unconscious
• Changes were more pronounced during propofol-induced general anesthesia than sleep
• During sleep, changes were homogeneously distributed across the human brain
• During anesthesia, substantial prefrontal disconnection is related to lack of arousability


Here is my summary:

State-Dependent Cortical Network Engagement

The human brain undergoes significant changes in its functional organization during different states of consciousness, including wakefulness, sleep, and general anesthesia. This study investigated the neural underpinnings of these state-dependent changes by comparing cortical network engagement during wakefulness, sleep, and propofol-induced general anesthesia.

Prefrontal Cortex Disruption during Anesthesia

The findings revealed that loss of consciousness, whether due to sleep or anesthesia, resulted in reduced information integration and increased response variability compared to wakefulness. However, these changes were more pronounced during anesthesia than sleep. Notably, anesthesia was associated with a specific disruption of the prefrontal cortex (PFC), a brain region crucial for higher-order cognitive functions such as decision-making and self-awareness.

Implications for Understanding Consciousness

These findings suggest that the PFC plays a critical role in maintaining consciousness and that its disruption contributes to the loss of consciousness during anesthesia. The study also highlights the distinct neural mechanisms underlying sleep and anesthesia, suggesting that these states involve different modes of brain function.

Thursday, November 16, 2023

Minds of machines: The great AI consciousness conundrum

Grace Huckins
MIT Technology Review
Originally published 16 October 23

Here is an excerpt:

At the breakneck pace of AI development, however, things can shift suddenly. For his mathematically minded audience, Chalmers got concrete: the chances of developing any conscious AI in the next 10 years were, he estimated, above one in five.

Not many people dismissed his proposal as ridiculous, Chalmers says: “I mean, I’m sure some people had that reaction, but they weren’t the ones talking to me.” Instead, he spent the next several days in conversation after conversation with AI experts who took the possibilities he’d described very seriously. Some came to Chalmers effervescent with enthusiasm at the concept of conscious machines. Others, though, were horrified at what he had described. If an AI were conscious, they argued—if it could look out at the world from its own personal perspective, not simply processing inputs but also experiencing them—then, perhaps, it could suffer.

AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”


Here is my take.

There is an ongoing debate about whether artificial intelligence can ever become conscious or have subjective experiences like humans. Some argue AI will inevitably become conscious as it advances, while others think consciousness requires biological qualities that AI lacks.

Philosopher David Chalmers has proposed a "hard problem of consciousness" - explaining how physical processes in the brain give rise to subjective experience. This issue remains unresolved.

AI systems today show no signs of being conscious or having experiences. But some argue that, as AI becomes more sophisticated, we may need to consider whether it could develop some level of consciousness.

Approaches like deep learning and neural networks are fueling major advances in narrow AI, but this type of statistical pattern recognition does not seem sufficient to produce consciousness.

Questions remain about whether artificial consciousness is possible or how we could detect if an AI system were to become conscious. There are also ethical implications regarding the rights of conscious AI.

Overall there is much speculation but no consensus on whether artificial general intelligence could someday become conscious like humans are. The answer awaits theoretical and technological breakthroughs.

Saturday, October 28, 2023

Meaning from movement and stillness: Signatures of coordination dynamics reveal infant agency

Sloan, A. T., Jones, N. A., et al. (2023).
PNAS, 120(39), e2306732120.

Abstract

How do human beings make sense of their relation to the world and realize their ability to effect change? Applying modern concepts and methods of coordination dynamics, we demonstrate that patterns of movement and coordination in 3 to 4-mo-olds may be used to identify states and behavioral phenotypes of emergent agency. By means of a complete coordinative analysis of baby and mobile motion and their interaction, we show that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

Significance

Revamping one of the earliest paradigms for the investigation of infant learning, and moving beyond reinforcement accounts, we show that the emergence of agency in infants can take the form of a bifurcation or phase transition in a dynamical system that spans the baby, the brain, and the environment. Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist—and dynamics provides a means to identify them. This phenotyping method may be useful for identifying babies at risk.

Here is my take:

Importantly, researchers found that the emergence of agency can take the form of a punctuated self-organizing process, with meaning found both in movement and stillness.

The findings of this study suggest that infants are not simply passive observers of the world around them, but rather active participants in their own learning and development. The researchers believe that their work could have implications for the early identification of infants at risk for developmental delays.

Here are some of the key takeaways from the study:
  • Infants learn to make sense of their relation to the world through their movement and interaction with their environment.
  • The emergence of agency is a punctuated, self-organizing process that occurs in both movement and stillness.
  • Individual infants navigate functional coupling with the world in different ways, suggesting that behavioral phenotypes of agentive discovery exist.
  • Dynamics provides a means to identify behavioral phenotypes of agentive discovery, which may be useful for identifying babies at risk.

This study is a significant contribution to our understanding of how infants learn and develop. It provides new insights into the role of movement and stillness in the emergence of agency and consciousness. The findings have the potential to improve our ability to identify and support infants at risk for developmental delays.

Friday, October 27, 2023

Theory of consciousness branded 'pseudoscience' by neuroscientists

Clare Wilson
New Scientist
Originally posted 19 Sept 23

Consciousness is one of science’s deepest mysteries; it is considered so difficult to explain how physical entities like brain cells produce subjective sensory experiences, such as the sensation of seeing the colour red, that this is sometimes called “the hard problem” of science.

While the question has long been investigated by studying the brain, IIT came from considering the mathematical structure of information-processing networks and could also apply to animals or artificial intelligence.

It says that a network or system has a higher level of consciousness if it is more densely interconnected, such that the interactions between its connection points or nodes yield more information than if it is reduced to its component parts.

IIT predicts that it is theoretically possible to calculate a value for the level of consciousness, termed phi, of any network with known structure and functioning. But as the number of nodes within a network grows, the sums involved get exponentially bigger, meaning that it is practically impossible to calculate phi for the human brain – or indeed any information-processing network with more than about 10 nodes.
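
To get a feel for the combinatorial explosion, here is a rough, hypothetical sketch. It only counts candidate bipartitions of a network, which is a drastic simplification of what IIT actually requires, but it already shows why brute-force calculation of phi stops being feasible as systems grow.

```python
# Rough illustration only (assumption: evaluating phi means scoring every way
# of cutting the network in two; the actual IIT calculation, over mechanisms,
# purviews and system states, grows far faster than this).

def n_bipartitions(n: int) -> int:
    """Number of ways to split n nodes into two non-empty groups."""
    return 2 ** (n - 1) - 1

for n in (4, 10, 20, 50, 300):
    print(f"{n:>3} nodes -> {n_bipartitions(n):.3e} candidate cuts")
```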

(cut)

Giulio Tononi at the University of Wisconsin-Madison, who first developed IIT and took part in the recent testing, did not respond to New Scientist’s requests for comment. But Johannes Fahrenfort at VU Amsterdam in the Netherlands, who was not involved in the recent study, says the letter went too far. “There isn’t a lot of empirical support for IIT. But that doesn’t warrant calling it pseudoscience.”

Complicating matters, there is no single definition of pseudoscience. But IIT is not in the same league as astrology or homeopathy, says James Ladyman at the University of Bristol in the UK. “It looks like a serious attempt to understand consciousness. It doesn’t make a theory pseudoscience just because some people are making exaggerated claims.”


Summary:

A group of 124 neuroscientists, including prominent figures in the field, have criticized the integrated information theory (IIT) of consciousness in an open letter. They argue that the recent experimental evidence said to support IIT didn't actually test its core ideas, which they contend are practically impossible to test. IIT suggests that the level of consciousness, called "phi," can be calculated for any network with known structure and functioning, but this becomes impractical for networks with many nodes, like the human brain. Some critics believe that IIT has been overhyped and may have unintended consequences for policies related to consciousness in fetuses and animals. However, not all experts consider IIT pseudoscience, with some seeing it as a serious attempt to understand consciousness.

The debate surrounding the integrated information theory (IIT) of consciousness is a complex one. While it's clear that the recent experimental evidence has faced criticism for not directly testing the core ideas of IIT, it's important to recognize that the study of consciousness is a challenging and ongoing endeavor.

Consciousness is indeed one of science's profound mysteries, often referred to as "the hard problem." IIT, in its attempt to address this problem, has sparked valuable discussions and research. It may not be pseudoscience, but the concerns raised about overhyping its findings are valid. It's crucial for scientific theories to be communicated accurately to avoid misinterpretation and potential policy implications.

Ultimately, the study of consciousness requires a multidisciplinary approach and the consideration of various theories, and it's important to maintain a healthy skepticism while promoting rigorous scientific inquiry in this complex field.

Wednesday, September 6, 2023

Could a Large Language Model Be Conscious?

David Chalmers
Boston Review
Originally posted 9 Aug 23

Here are two excerpts:

Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

This gives rise to a second challenge: Should we create conscious AI? This is a major ethical challenge for the community. The question is important and the answer is far from obvious.

We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability. If conscious AI is coming somewhere down the line, then that will raise a new group of difficult ethical challenges, with the potential for new forms of injustice added on top of the old ones. One issue is that conscious AI could well lead to new harms toward humans. Another is that it could lead to new harms toward AI systems themselves.

I’m not an ethicist, and I won’t go deeply into the ethical questions here, but I don’t take them lightly. I don’t want the roadmap to conscious AI that I’m laying out here to be seen as a path that we have to go down. The challenges I’m laying out in what follows could equally be seen as a set of red flags. Each challenge we overcome gets us closer to conscious AI, for better or for worse. We need to be aware of what we’re doing and think hard about whether we should do it.

(cut)

Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
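
As a quick check on the arithmetic in the excerpt (under the simplifying assumption, which Chalmers flags himself, that the six factors are independent and each carries a one-in-three credence of being required for consciousness), a minimal sketch:

```python
# Back-of-the-envelope version of the credence calculation in the excerpt.
# Assumption: six candidate requirements for consciousness (biology, sensory
# grounding, self models, recurrent processing, global workspace, unified
# agency), each judged at least 1/3 likely to be genuinely required, treated
# as independent.

p_required = 1 / 3          # credence that any single factor is required
n_factors = 6               # factors a current LLM is taken to lack

# A system lacking a factor can only be conscious if that factor is NOT required.
p_conscious_despite_lacking_all = (1 - p_required) ** n_factors
print(f"Upper-bound credence in consciousness: {p_conscious_despite_lacking_all:.3f}")
# -> 0.088, i.e. "less than a one-in-ten chance", matching the excerpt
```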


Here are some of the key points from the article:
  1. There is no consensus on what consciousness is, so it is difficult to say definitively whether or not LLMs are conscious.
  2. Some people believe that consciousness requires carbon-based biology, but Chalmers argues that this is a form of biological chauvinism.  I agree with this completely. We can have synthetic forms of consciousness.
  3. Other people believe that LLMs are not conscious because they lack sensory processing or bodily embodiment. Chalmers argues that these objections are not decisive, but they do raise important questions about the nature of consciousness.
  4. Chalmers concludes by suggesting that we should take the possibility of LLM consciousness seriously, but that we should also be cautious about making definitive claims about it.

Monday, June 5, 2023

Why Conscious AI Is a Bad, Bad Idea

Anil Seth
Nautilus.us
Originally posted 9 MAY 23

Artificial intelligence is moving fast. We can now converse with large language models such as ChatGPT as if they were human beings. Vision models can generate award-winning photographs as well as convincing videos of events that never happened. These systems are certainly getting smarter, but are they conscious? Do they have subjective experiences, feelings, and conscious beliefs in the same way that you and I do, but tables and chairs and pocket calculators do not? And if not now, then when—if ever—might this happen?

While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether. The prospect of artificial consciousness raises ethical, safety, and societal challenges significantly beyond those already posed by AI. Importantly, some of these challenges arise even when AI systems merely seem to be conscious, even if, under the hood, they are just algorithms whirring away in subjective oblivion.

(cut)

There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility. Certainly, nobody should be actively trying to create machine consciousness.

Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism—putting ourselves at the center of everything—and anthropomorphism—projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

Sunday, May 14, 2023

Consciousness begins with feeling, not thinking

A. Damasio & H. Damasio
iai.tv
Originally posted 20 APR 23

Please pause for a moment and notice what you are feeling now. Perhaps you notice a growing snarl of hunger in your stomach or a hum of stress in your chest. Perhaps you have a feeling of ease and expansiveness, or the tingling anticipation of a pleasure soon to come. Or perhaps you simply have a sense that you exist. Hunger and thirst, pain, pleasure and distress, along with the unadorned but relentless feelings of existence, are all examples of ‘homeostatic feelings’. Homeostatic feelings are, we argue here, the source of consciousness.

In effect, feelings are the mental translation of processes occurring in your body as it strives to balance its many systems, achieve homeostasis, and keep you alive. In a conventional sense feelings are part of the mind and yet they offer something extra to the mental processes. Feelings carry spontaneously conscious knowledge concerning the current state of the organism as a result of which you can act to save your life, such as when you respond to pain or thirst appropriately. The continued presence of feelings provides a continued perspective over the ongoing body processes; the presence of feelings lets the mind experience the life process along with other contents present in your mind, namely, the relentless perceptions that collect knowledge about the world along with reasonings, calculations, moral judgments, and the translation of all these contents in language form. By providing the mind with a ‘felt point of view’, feelings generate an ‘experiencer’, usually known as a self. The great mystery of consciousness in fact is the mystery behind the biological construction of this experiencer-self.

In sum, we propose that consciousness is the result of the continued presence of homeostatic feelings. We continuously experience feelings of one kind or another, and feelings naturally tell each of us, automatically, not only that we exist but that we exist in a physical body, vulnerable to discomfort yet open to countless pleasures as well. Feelings such as pain or pleasure provide you with consciousness, directly; they provide transparent knowledge about you. They tell you, in no uncertain terms, that you exist and where you exist, and point to what you need to do to continue existing – for example, treating pain or taking advantage of the well-being that came your way. Feelings illuminate all the other contents of mind with the light of consciousness, both the plain events and the sublime ideas. Thanks to feelings, consciousness fuses the body and mind processes and gives our selves a home inside that partnership.

That consciousness should come ‘down’ to feelings may surprise those who have been led to associate consciousness with the lofty top of the physiological heap. Feelings have been considered inferior to reason for so long that the idea that they are not only the noble beginning of sentient life but an important governor of life’s proceedings may be difficult to accept. Still, feelings and the consciousness they beget are largely about the simple but essential beginnings of sentient life, a life that is not merely lived but knows that it is being lived.

Sunday, February 12, 2023

The scientific study of consciousness cannot, and should not, be morally neutral

Mazor, M., Brown, S., et al. (2021, November 12). 
Perspectives on Psychological Science.
Advance online publication.

Abstract

A target question for the scientific study of consciousness is how dimensions of consciousness, such as the ability to feel pain and pleasure or reflect on one’s own experience, vary in different states and animal species. Considering the tight link between consciousness and moral status, answers to these questions have implications for law and ethics. Here we point out that given this link, the scientific community studying consciousness may face implicit pressure to carry out certain research programmes or interpret results in ways that justify current norms rather than challenge them. We show that since consciousness largely determines moral status, the use of non-human animals in the scientific study of consciousness introduces a direct conflict between scientific relevance and ethics – the more scientifically valuable an animal model is for studying consciousness, the more difficult it becomes to ethically justify compromises to its well-being for consciousness research. Lastly, in light of these considerations, we call for a discussion of the immediate ethical corollaries of the body of knowledge that has accumulated, and for a more explicit consideration of the role of ideology and ethics in the scientific study of consciousness.

Here is how the article ends:

Finally, we believe consciousness researchers, including those working only with consenting humans, should take an active role in the ethical discussion about these issues, including the use of animal models for the study of consciousness. Studying consciousness, the field has the responsibility of leading the way on these ethical questions and of making strong statements when such statements are justified by empirical findings. Recent examples include discussions of ethical ramifications of neuronal signs of fetal consciousness (Lagercrantz, 2014) and a consolidation of evidence for consciousness in vertebrate animals, with a focus on livestock species, ordered by the European Food Safety Authority (Le Neindre et al., 2017). In these cases, the science of consciousness provided empirical evidence to weigh on whether a fetus or a livestock animal is conscious. The question of animal models of consciousness is simpler because the presence of consciousness is a prerequisite for the model to be valid. Here, researchers can skip the difficult question of whether the entity is indeed conscious and directly ask, “Do we believe that consciousness, or some specific form or dimension of consciousness, entails moral status?”

It is useful to remind ourselves that ethical beliefs and practices are dynamic: Things that were considered acceptable in the past are no longer acceptable today. A relatively recent change concerns the status of nonhuman great apes (gorillas, bonobos, chimpanzees, and orangutans), such that research on great apes is banned in some countries today, including all European Union member states and New Zealand. In these countries, drilling a hole in chimpanzees’ heads, keeping them in isolation, or restricting their access to drinking water are forbidden by law. It is a fundamental question of the utmost importance which differences between animals make some practices acceptable with respect to some animals and not others. If consciousness is a determinant of moral status, consciousness researchers have a responsibility in taking an active part in this discussion—by providing scientific observations that either justify current ethical standards or induce the scientific and legal communities to revise these standards.

Thursday, December 22, 2022

In the corner of an Australian lab, a brain in a dish is playing a video game - and it’s getting better

Liam Mannix
Sydney Morning Herald
Originally posted 13 NOV 22

Here is an excerpt:

Artificial intelligence controls an ever-increasing slice of our lives. Smart voice assistants hang on our every word. Our phones leverage machine learning to recognise our face. Our social media lives are controlled by algorithms that surface content to keep us hooked.

These advances are powered by a new generation of AIs built to resemble human brains. But none of these AIs are really intelligent, not in the human sense of the word. They can see the superficial pattern without understanding the underlying concept. Siri can read you the weather but she does not really understand that it’s raining. AIs are good at learning by rote, but struggle to extrapolate: even teenage humans need only a few sessions behind the wheel before they can drive, while Google’s self-driving car still isn’t ready after 32 billion kilometres of practice.

A true ‘general artificial intelligence’ remains out of reach - and, some scientists think, impossible.

Is this evidence human brains can do something special computers never will be able to? If so, the DishBrain opens a new path forward. “The only proof we have of a general intelligence system is done with biological neurons,” says Kagan. “Why would we try to mimic what we could harness?”

He imagines a future part-silicon-part-neuron supercomputer, able to combine the raw processing power of silicon with the built-in learning ability of the human brain.

Others are more sceptical. Human intelligence isn’t special, they argue. Thoughts are just electro-chemical reactions spreading across the brain. Ultimately, everything is physics - we just need to work out the maths.

“If I’m building a jet plane, I don’t need to mimic a bird. It’s really about getting to the mathematical foundations of what’s going on,” says Professor Simon Lucey, director of the Australian Institute for Machine Learning.

Why start the DishBrains on Pong? I ask. Because it’s a game with simple rules that make it ideal for training AI. And, grins Kagan, it was one of the first video games ever coded. A nod to the team’s geek passions - which run through the entire project.

“There’s a whole bunch of sci-fi history behind it. The Matrix is an inspiration,” says Chong. “Not that we’re trying to create a Matrix,” he adds quickly. “What are we but just a goooey soup of neurons in our heads, right?”

Maybe. But the Matrix wasn’t meant as inspiration: it’s a cautionary tale. The humans wired into it existed in a simulated reality while machines stole their bioelectricity. They were slaves.

Is it ethical to build a thinking computer and then restrict its reality to a task to be completed? Even if it is a fun task like Pong?

“The real life correlate of that is people have already created slaves that adore them: they are called dogs,” says Oxford University’s Julian Savulescu.

Thousands of years of selective breeding has turned a wild wolf into an animal that enjoys rounding up sheep, that loves its human master unconditionally.

Thursday, November 17, 2022

The Scientific Study of Consciousness Cannot and Should Not Be Morally Neutral

Mazor, M., Brown, S., Ciaunica, A., et al. (2022)
Perspectives on Psychological Science, 0(0).

Abstract

A target question for the scientific study of consciousness is how dimensions of consciousness, such as the ability to feel pain and pleasure or reflect on one’s own experience, vary in different states and animal species. Considering the tight link between consciousness and moral status, answers to these questions have implications for law and ethics. Here we point out that given this link, the scientific community studying consciousness may face implicit pressure to carry out certain research programs or interpret results in ways that justify current norms rather than challenge them. We show that because consciousness largely determines moral status, the use of nonhuman animals in the scientific study of consciousness introduces a direct conflict between scientific relevance and ethics—the more scientifically valuable an animal model is for studying consciousness, the more difficult it becomes to ethically justify compromises to its well-being for consciousness research. Finally, in light of these considerations, we call for a discussion of the immediate ethical corollaries of the body of knowledge that has accumulated and for a more explicit consideration of the role of ideology and ethics in the scientific study of consciousness.

(cut)

The animal-models-of-consciousness paradox

An instance in which the scientific community has failed to acknowledge the intimate link between consciousness and ethics is in the use of animal models of consciousness. Our focus here is on the use of animals that are assumed to be conscious as an opportunity to probe the underlying mechanisms of consciousness in ways that would not be ethically acceptable with human subjects. In such studies, animals are often captive and deprived of basic needs and undergo invasive procedures. At the same time, for these animals to be appropriate models for the study of consciousness, it has to be assumed that they are conscious. Because conscious capacities play a pivotal role in the attribution of moral status to animals, in these experiments, scientific validity and moral justification are in direct conflict. This conflict is particularly acute in the study of consciousness and subjective experience: That an animal is an adequate model for the study of consciousness makes it more likely to be capable of experiencing rich phenomenal states, self-awareness, or suffering and to have its life considered to be deserving of appropriate protection much more than being an appropriate model for the study of the immune system does.

In a recent study of the neural correlates of consciousness, researchers contrasted brain activation in awake, sleeping, and anesthetized macaque monkeys (Redinbaugh et al., 2020). For this study, two monkeys were kept in captivity, implanted with brain electrodes, and immobilized by sticking rods in a head implant during electrophysiological recordings. In another study from 2021, a behavioral measure of conscious awareness was reported in four caged rhesus monkeys (Ben-Haim et al., 2021). Scientists surgically implanted subjects with a metal extension to their skull for the purpose of restraining movement during experimental sessions and restricted subjects’ access to water at testing so that they were motivated to participate in the task for juice droplets. In a study from 2019 on the neural basis of introspection, researchers abolished parts of the prefrontal cortex of six caged macaque monkeys, which were killed at the end of the study (Kwok et al., 2019). In another study published in Science in 2020 (Nieder et al., 2020), a neural correlate of sensory consciousness was demonstrated in the brains of two male crows by implanting electrodes in their brains. These are mere examples of typical research practice in the field of invasive electrophysiology that conform with current ethical guidelines in place at a national level and are commonplace in many fields of study. Yet common to these studies is that their scientific relevance rests on the animal being conscious, whereas their ethical justification rests on the animal not deserving the same protection from suffering as a human subject.

Sunday, October 9, 2022

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines 30, 195–218 (2020).
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Free will and Autonomy

Several AMA debaters have claimed that free will is necessary for being a moral agent (Himma 2009; Hellström 2012; Friedman and Kahn 1992). Others make a similar (and perhaps related) claim that autonomy is necessary (Lin et al. 2008; Schulzke 2013). In the AMA debate, some argue that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007) while others, like James Moor (2006, 2009), are open to the possibility that future machines might acquire free will. Others (Powers 2006; Tonkens 2009) have proposed that the plausibility of a free will condition on moral agency may vary depending on what type of normative ethical theory is assumed, but they have not developed this idea further.

Despite appealing to the concept of free will, this portion of the AMA debate does not engage with key problems in the free will literature, such as the debate about compatibilism and incompatibilism (O’Connor 2016). Those in the AMA debate assume the existence of free will among humans, and ask whether artificial entities can satisfy a source control condition (McKenna et al. 2015). That is, the question is whether or not such entities can be the origins of their actions in a way that allows them to control what they do in the sense assumed of human moral agents.

An exception to this framing of the free will topic in the AMA debate occurs when Johnson writes that ‘… the non-deterministic character of human behavior makes it somewhat mysterious, but it is only because of this mysterious, non-deterministic aspect of moral agency that morality and accountability are coherent’ (Johnson 2006 p. 200). This is a line of reasoning that seems to assume an incompatibilist and libertarian sense of free will, assuming both that it is needed for moral agency and that humans do possess it. This, of course, makes the notion of human moral agents vulnerable to standard objections in the general free will debate (Shaw et al. 2019). Additionally, we note that Johnson’s idea about the presence of a ‘mysterious aspect’ of human moral agents might allow for AMA in the same way as Dreyfus and Hubert’s reference to the subconscious: artificial entities may be built to incorporate this aspect.

The question of sourcehood in the AMA debate connects to the independence argument: For instance, when it is claimed that machines are created for a purpose and therefore are nothing more than advanced tools (Powers 2006; Bryson 2010; Gladden 2016) or prosthetics (Johnson and Miller 2008), this is thought to imply that machines can never be the true or genuine source of their own actions. This argument questions whether the independence required for moral agency (by both functionalists and standardists) can be found in a machine. If a machine’s repertoire of behaviors and responses is the result of elaborate design then it is not independent, the argument goes. Floridi and Sanders question this proposal by referring to the complexity of ‘human programming’, such as genes and arranged environmental factors (e.g. education). 

Friday, July 1, 2022

Tech firms are making computer chips with human cells – is it ethical?

J. Savulescu, C. Gyngell, & T. Sawai
The Conversation
Originally published 24 MAY 22

Here is an excerpt:

While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of making 15 quintillion operations per second.

This can only be matched today by massive supercomputers using vast amounts of energy. The human brain only uses about 20 watts of energy, or about the same as it takes to power a lightbulb. It would take 34 coal-powered plants generating 500 megawatts per hour to store the same amount of data contained in one human brain in modern data storage centres.

Companies do not need brain tissue samples from donors, but can simply grow the neurons they need in the lab from ordinary skin cells using stem cell technologies. Scientists can engineer cells from blood samples or skin biopsies into a type of stem cell that can then become any cell type in the human body.

However, this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?

People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?

If neural computers become common, we will grapple with other tissue donation issues. In Cortical Lab’s research with Dishbrain, they found human neurons were faster at learning than neurons from mice. Might there also be differences in performance depending on whose neurons are used? Might Apple and Google be able to make lightning-fast computers using neurons from our best and brightest today? Would someone be able to secure tissues from deceased geniuses like Albert Einstein to make specialised limited-edition neural computers?

Such questions are highly speculative but touch on broader themes of exploitation and compensation. Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.

Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including recently to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?

Wednesday, June 1, 2022

The ConTraSt database for analysing and comparing empirical studies of consciousness theories

Yaron, I., Melloni, L., Pitts, M. et al.
Nat Hum Behav (2022).
https://doi.org/10.1038/s41562-021-01284-5

Abstract

Understanding how consciousness arises from neural activity remains one of the biggest challenges for neuroscience. Numerous theories have been proposed in recent years, each gaining independent empirical support. Currently, there is no comprehensive, quantitative and theory-neutral overview of the field that enables an evaluation of how theoretical frameworks interact with empirical research. We provide a bird’s eye view of studies that interpreted their findings in light of at least one of four leading neuroscientific theories of consciousness (N = 412 experiments), asking how methodological choices of the researchers might affect the final conclusions. We found that supporting a specific theory can be predicted solely from methodological choices, irrespective of findings. Furthermore, most studies interpret their findings post hoc, rather than a priori testing critical predictions of the theories. Our results highlight challenges for the field and provide researchers with an open-access website (https://ContrastDB.tau.ac.il) to further analyse trends in the neuroscience of consciousness.

Discussion

Several key conclusions can be drawn from our analyses of these 412 experiments: First, the field seems highly skewed towards confirmatory, as opposed to disconfirmatory, evidence which might explain the failure to exclude theories and converge on an accepted, or at least widely favored, account. This effect is relatively stable over time. Second, theory-driven studies, aimed at testing the predictions of the theories, are rather scarce, and even rarer are studies testing more than one theory, or pitting theories against each other – only 7% of the experiments directly compared two or more theories’ predictions. Though there seems to be an increasing number of experiments that test predictions a-priori in recent years, a large number of studies continue to interpret their findings post-hoc in light of the theories. Third, a close relation was found between methodological choices made by researchers and the theoretical interpretations of their findings. That is, based only on some methodological choices of the researchers (e.g., using report vs. no-report paradigms, or studying content vs. state consciousness), we could predict if the experiment will end up supporting each of the theories.
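
To illustrate the kind of analysis the authors describe, here is a toy sketch with invented data (not the ConTraSt database or the authors' actual pipeline): it simply asks whether a single methodological choice, report vs. no-report paradigms, already predicts which theory an experiment ends up supporting.

```python
# Toy sketch with hypothetical data: does a methodological choice alone
# predict the theory an experiment supports?

from collections import Counter

# Each (paradigm, theory_supported) pair is an invented example experiment.
experiments = [
    ("report", "GNWT"), ("report", "GNWT"), ("report", "IIT"),
    ("no-report", "IIT"), ("no-report", "IIT"), ("no-report", "GNWT"),
    ("report", "GNWT"), ("no-report", "IIT"),
]

for paradigm in ("report", "no-report"):
    outcomes = Counter(theory for p, theory in experiments if p == paradigm)
    total = sum(outcomes.values())
    shares = {theory: count / total for theory, count in outcomes.items()}
    print(paradigm, shares)

# If the support shares differ sharply between paradigms, the methodological
# choice is already predictive of the conclusion, irrespective of findings.
```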


Editor's note: Consistent with other forms of confirmation bias, the design of the experiment largely determines its result. Consciousness remains a mystery, and in the eye of the scientific beholder.

Tuesday, May 10, 2022

Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness

Anthis, J.R. (2022). 
In: Klimov, V.V., Kelley, D.J. (eds) Biologically 
Inspired Cognitive Architectures 2021. BICA 2021. 
Studies in Computational Intelligence, vol 1032. 
Springer, Cham. 
https://doi.org/10.1007/978-3-030-96993-6_3

Abstract

Many philosophers and scientists claim that there is a ‘hard problem of consciousness’, that qualia, phenomenology, or subjective experience cannot be fully understood with reductive methods of neuroscience and psychology, and that there is a fact of the matter as to ‘what it is like’ to be conscious and which entities are conscious. Eliminativism and related views such as illusionism argue against this. They claim that consciousness does not exist in the ways implied by everyday or scholarly language. However, this debate has largely consisted of each side jousting analogies and intuitions against the other. Both sides remain unconvinced. To break through this impasse, I present consciousness semanticism, a novel eliminativist theory that sidesteps analogy and intuition. Instead, it is based on a direct, formal argument drawing from the tension between the vague semantics in definitions of consciousness such as ‘what it is like’ to be an entity and the precise meaning implied by questions such as, ‘Is this entity conscious?’ I argue that semanticism naturally extends to erode realist notions of other philosophical concepts, such as morality and free will. Formal argumentation from precise semantics exposes these as pseudo-problems and eliminates their apparent mysteriousness and intractability.

From Implications and Concluding Remarks

Perhaps even more importantly, humanity seems to be rapidly developing the capacity to create vastly more intelligent beings than currently exist. Scientists and engineers have already built artificial intelligences from chess bots to sex bots.  Some projects are already aimed at the organic creation of intelligence, growing increasingly large sections of human brains in the laboratory. Such minds could have something we want to call consciousness, and they could exist in astronomically large numbers. Consider if creating a new conscious being becomes as easy as copying and pasting a computer program or building a new robot in a factory. How will we determine when these creations become conscious or sentient?  When do they deserve legal protection or rights? These are important motivators for the study of consciousness, particularly for the attempt to escape the intellectual quagmire that may have grown from notions such as the ‘hard problem’ and ‘problem of other minds’. Andreotta (2020) argues that the project of ‘AI rights’,  including artificial intelligences in the moral circle, is ‘beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the “Hard Problem” of consciousness’. While the extent of the impediment is unclear, a resolution of the ‘hard problem’ such as the one I have presented could make it easier to extend moral concern to artificial intelligences.

Wednesday, February 16, 2022

AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness

Wiese, W. and Friston, K.J.
Behavioural Brain Research
Volume 420, 26 February 2022, 113704

Abstract

Methods used in artificial intelligence (AI) overlap with methods used in computational psychiatry (CP). Hence, considerations from AI ethics are also relevant to ethical discussions of CP. Ethical issues include, among others, fairness and data ownership and protection. Apart from this, morally relevant issues also include potential transformative effects of applications of AI—for instance, with respect to how we conceive of autonomy and privacy. Similarly, successful applications of CP may have transformative effects on how we categorise and classify mental disorders and mental health. Since many mental disorders go along with disturbed conscious experiences, it is desirable that successful applications of CP improve our understanding of disorders involving disruptions in conscious experience. Here, we discuss prospects and pitfalls of transformative effects that CP may have on our understanding of mental disorders. In particular, we examine the concern that even successful applications of CP may fail to take all aspects of disordered conscious experiences into account.


Highlights

•  Considerations from AI ethics are also relevant to the ethics of computational psychiatry.

•  Ethical issues include, among others, fairness and data ownership and protection.

•  They also include potential transformative effects.

•  Computational psychiatry may transform conceptions of mental disorders and health.

•  Disordered conscious experiences may pose a particular challenge.

From the Discussion

At present, we are far from having a formal account of conscious experience. As mentioned in the introduction, many empirical theories of consciousness make competing claims, and there is still much uncertainty about the neural mechanisms that underwrite ordinary conscious processes (let alone psychopathology). Hence, the suggestion to foster research on the computational correlates of disordered conscious experiences should not be regarded as an invitation to ignore subjective reports. The patient’s perspective will continue to be central for normatively assessing their experienced condition. Computational models offer constructs to better describe and understand elusive aspects of a disordered conscious experience, but the patient will remain the primary authority on whether they are suffering from their condition. 

Saturday, February 5, 2022

Can Brain Organoids Be ‘Conscious’? Scientists May Soon Find Out

Anil Seth
Wired.com
Originally posted 20 DEC 21

Here is an excerpt:

The challenge here is that we are still not sure how to define consciousness in a fully formed human brain, let alone in a small cluster of cells grown in a lab. But there are some promising avenues to explore. One prominent candidate for a brain signature of consciousness is its response to a perturbation. If you stimulate a conscious brain with a pulse of energy, the electrical echo will reverberate in complex patterns over time and space. Do the same thing to an unconscious brain and the echo will be very simple—like throwing a stone into still water. The neuroscientist Marcello Massimini and his team at the University of Milan have used this discovery to detect residual or “covert” consciousness in behaviorally unresponsive patients with severe brain injury. What happens to brain organoids when stimulated this way remains unknown—and it is not yet clear how the results might be interpreted.

As brain organoids develop increasingly similar dynamics to those observed in conscious human brains, we will have to reconsider both what we take to be reliable brain signatures of consciousness in humans, and what criteria we might adopt to ascribe consciousness to something made not born.

The ethical implications of this are obvious. A conscious organoid might consciously suffer and we may never recognize its suffering since it cannot express anything.
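
To give a flavour of the stimulate-and-measure idea described in the excerpt, here is a toy illustration of the general intuition behind such complexity measures. This is hypothetical and deliberately crude, not Massimini's actual perturbational complexity index (which works on source-localised, statistically thresholded responses): the point is only that a stereotyped "echo" compresses well while a spatiotemporally rich one does not.

```python
# Toy proxy for perturbation-based complexity: count how many new "phrases"
# a crude Lempel-Ziv-style parse finds in a binarized response. Regular,
# wave-like echoes yield low counts; rich, unpredictable echoes yield high ones.

import random

def phrase_complexity(bits: str) -> int:
    """Crude Lempel-Ziv-style count of new phrases in a binary string."""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count

random.seed(1)
simple_echo = "10" * 100                              # stereotyped, wave-like response
complex_echo = "".join(random.choices("01", k=200))   # rich, hard-to-compress response

print("simple echo complexity: ", phrase_complexity(simple_echo))
print("complex echo complexity:", phrase_complexity(complex_echo))
```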

Tuesday, January 18, 2022

MIT Researchers Just Discovered an AI Mimicking the Brain on Its Own

Eric James Beyer
Interesting Engineering
Originally posted 18 DEC 21

Here is an excerpt:

In the wake of these successes, Martin began to wonder whether or not the same principle could be applied to higher-level cognitive functions like language processing. 

“I said, let’s just look at neural networks that are successful and see if they’re anything like the brain. My bet was that it would work, at least to some extent.”

To find out, Martin and colleagues compared data from 43 artificial neural network language models against fMRI and ECoG neural recordings taken while subjects listened to or read words as part of a text. The AI models the group surveyed covered all the major classes of available neural network approaches for language-based tasks. Some of them were more basic embedding models like GloVe, which clusters semantically similar words together in groups. Others, like the models known as GPT and BERT, were far more complex. These models are trained to predict the next word in a sequence or predict a missing word within a certain context, respectively. 

“The setup itself becomes quite simple,” Martin explains. “You just show the same stimuli to the models that you show to the subjects [...]. At the end of the day, you’re left with two matrices, and you test if those matrices are similar.”

And the results? 

“I think there are three-and-a-half major findings here,” Schrimpf says with a laugh. “I say ‘and a half’ because the last one we still don’t fully understand.”

Machine learning that mirrors the brain

The finding that sticks out to Martin most immediately is that some of the models predict neural data extremely well. In other words, regardless of how good a model was at performing a task, some of them appear to resemble the brain’s cognitive mechanics for language processing. Intriguingly, the team at MIT identified the GPT model variants as the most brain-like out of the group they looked at.
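
To make the "two matrices" step concrete, here is a minimal sketch with placeholder data. It is not the authors' actual pipeline; it just shows one common way such comparisons are done, a representational-similarity-style correlation between a model's activations and neural recordings over the same stimuli.

```python
# Minimal sketch with placeholder data (NOT the study's actual pipeline):
# rows are stimuli (e.g., words) presented to both the model and the subjects,
# columns are model units vs. recording channels. Similarity is scored by
# correlating the two stimulus-by-stimulus similarity structures.

import numpy as np

rng = np.random.default_rng(0)
n_stimuli = 50
model_acts = rng.normal(size=(n_stimuli, 300))    # hidden states per stimulus (placeholder)
neural_data = rng.normal(size=(n_stimuli, 128))   # fMRI/ECoG features per stimulus (placeholder)

def similarity_profile(x: np.ndarray) -> np.ndarray:
    """Upper triangle of the stimulus-by-stimulus correlation matrix."""
    corr = np.corrcoef(x)
    return corr[np.triu_indices_from(corr, k=1)]

score = np.corrcoef(similarity_profile(model_acts),
                    similarity_profile(neural_data))[0, 1]
print(f"Model-brain similarity (correlation of similarity profiles): {score:.3f}")
```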