Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, February 6, 2024

Anthropomorphism in AI

Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350


AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., in the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need to complement the practical with a conceptual analysis.

Here are my thoughts:

Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.

There are a number of reasons why we do this. One reason is that it helps us to make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.

Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things that we perceive as being similar to ourselves, and so we may be more likely to trust and interact with AI that we see as being somewhat human-like.

However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may expect it to respond with empathy or to care about our wellbeing—expectations the system cannot actually meet.

This can lead to problems, such as when the AI does not respond in a way that we expect. We may then attribute this to the AI being "sad" or "angry," when in reality it is simply following its programming.

It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more likely to give it rights and protections that it does not deserve. For example, we may believe that an AI should not be turned off, even if it is causing harm.

In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.

Monday, April 11, 2022

Distinct neurocomputational mechanisms support informational and socially normative conformity

Mahmoodi, A., Nili, H., et al.
(2022) PLoS Biol 20(3): e3001565. 


A change of mind in response to social influence could be driven by informational conformity to increase accuracy, or by normative conformity to comply with social norms such as reciprocity. Disentangling the behavioural, cognitive, and neurobiological underpinnings of informational and normative conformity has proven elusive. Here, participants underwent fMRI while performing a perceptual task that involved both advice-taking and advice-giving to human and computer partners. The concurrent inclusion of 2 different social roles and 2 different social partners revealed distinct behavioural and neural markers for informational and normative conformity. Dorsal anterior cingulate cortex (dACC) BOLD response tracked informational conformity towards both human and computer but tracked normative conformity only when interacting with humans. A network of brain areas (dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (TPJ)) that tracked normative conformity increased their functional coupling with the dACC when interacting with humans. These findings enable differentiating the neural mechanisms by which different types of conformity shape social changes of mind.


A key feature of adaptive behavioural control is our ability to change our mind as new evidence comes to light. Previous research has identified dACC as a neural substrate for changes of mind in both nonsocial situations, such as when receiving additional evidence pertaining to a previously made decision, and social situations, such as when weighing up one’s own decision against the recommendation of an advisor. However, unlike the nonsocial case, the role of dACC in social changes of mind can be driven by different, and often competing, factors that are specific to the social nature of the interaction. In particular, a social change of mind may be driven by a motivation to be correct, i.e., informational influence. Alternatively, a social change of mind may be driven by reasons unrelated to accuracy—such as social acceptance—a process called normative influence. To date, studies on the neural basis of social changes of mind have not disentangled these processes. It has therefore been unclear how the brain tracks and combines informational and normative factors.

Here, we leveraged a recently developed experimental framework that separates humans’ trial-by-trial conformity into informational and normative components to unpack the neural basis of social changes of mind. On each trial, participants first made a perceptual estimate and reported their confidence in it. In support of our task rationale, we found that, while participants’ changes of mind were affected by confidence (i.e., informational) in both human and computer settings, they were only affected by the need to reciprocate influence (i.e., normative) specifically in the human–human setting. It should be noted that participants’ perception of their partners’ accuracy is also an important factor in social change of mind (we tend to change our mind towards the more accurate participants). 

Monday, November 8, 2021

What the mind is

B. F. Malle
Nature - Human Behaviour
Originally published 26 Aug 21

Humans have a conception of what the mind is. This conception takes mind to be a set of capacities, such as the ability to be proud or feel sad, to remember or to plan. Such a multifaceted conception allows people to ascribe mind in varying degrees to humans, animals, robots or any other entity [1,2]. However, systematic research on this conception of mind has so far been limited to Western populations. A study by Weisman and colleagues [3] published in Nature Human Behaviour now provides compelling evidence for some cross-cultural universals in the human understanding of what the mind is, as well as revealing intercultural variation.


As with all new findings, readers must be alert and cautious in the conclusions they draw. We may not conclude with certainty that these are the three definitive dimensions of human mind perception, because the 23 mental capacities featured in the study were not exhaustive; in particular, they did not encompass two important domains — morality and social cognition. Moral capacities are central to social relations, person perception and identity; likewise, people care deeply about the capacity to empathize and understand others’ thoughts and feelings. Yet the present study lacked items to capture these domains. When items for moral and social–cognitive capacities have been included in past US studies, they formed a strong separate dimension, while emotions shifted toward the Experience dimension. 

Incorporating moral–social capacities in future studies may strengthen the authors’ findings. Morality and social cognition are credible candidates for cultural universals, so their inclusion could make cross-cultural stability of mind perception even more decisive. Moreover, inclusion of these important mental capacities might clarify one noteworthy cultural divergence in the data: the fact that adults in Ghana and Vanuatu combined the emotional and perceptual-cognitive dimensions. Without the contrast to social–moral capacities, emotion and cognition might have been similar enough to move toward each other. Including social–moral capacities in future studies could provide a contrasting and dividing line, which would pull emotion and cognition apart. The results might, potentially, be even more consistent across cultures.

Thursday, October 14, 2021

A Minimal Turing Test

McCoy, J. P., and Ullman, T.D.
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 1-8


We introduce the Minimal Turing Test, an experimental paradigm for studying perceptions and meta-perceptions of different social groups or kinds of agents, in which participants must use a single word to convince a judge of their identity. We illustrate the paradigm by having participants act as contestants or judges in a Minimal Turing Test in which contestants must convince a judge they are a human, rather than an artificial intelligence. We embed the production data from such a large-scale Minimal Turing Test in a semantic vector space, and construct an ordering over pairwise evaluations from judges. This allows us to identify the semantic structure in the words that people give, and to obtain quantitative measures of the importance that people place on different attributes. Ratings from independent coders of the production data provide additional evidence for the agency and experience dimensions discovered in previous work on mind perception. We use the theory of Rational Speech Acts as a framework for interpreting the behavior of contestants and judges in the Minimal Turing Test.
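The abstract's idea of embedding contestant words in a semantic vector space, then ranking them by proximity to a "human-like" reference direction, can be illustrated with a toy sketch. The word vectors and centroid below are invented for illustration only; the actual study used high-dimensional embeddings learned from large text corpora and an ordering constructed from judges' pairwise evaluations, not this simplified procedure.

```python
import math

# Toy 3-dimensional "semantic" vectors for a few contestant words.
# These values are invented for illustration; real embeddings are
# high-dimensional and learned from corpus co-occurrence statistics.
vectors = {
    "love":   [0.9, 0.8, 0.1],
    "banana": [0.2, 0.1, 0.9],
    "please": [0.7, 0.6, 0.3],
    "robot":  [0.1, 0.2, 0.8],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical "human-like" reference direction (e.g., a centroid of
# words that judges reliably accepted as coming from a human).
human_centroid = [0.8, 0.7, 0.2]

# Rank contestant words by similarity to the reference direction,
# standing in for the ordering induced by pairwise judge evaluations.
ranking = sorted(vectors,
                 key=lambda w: cosine(vectors[w], human_centroid),
                 reverse=True)
print(ranking)  # words ordered from most to least "human-like"
```

The point of the sketch is only the geometry: once words live in a shared vector space, "which word seems more human" becomes a measurable distance question rather than a purely intuitive one.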

Thursday, April 29, 2021

Why evolutionary psychology should abandon modularity

Pietraszewski, D., & Wertz, A. E. 
(2021, March 29).


A debate surrounding modularity—the notion that the mind may be exclusively composed of distinct systems or modules—has held philosophers and psychologists captive for nearly forty years. Concern about this thesis—which has come to be known as the massive modularity debate—serves as the primary grounds for skepticism of evolutionary psychology’s claims about the mind. Here we will suggest that the entirety of this debate, and the very notion of massive modularity itself, is ill-posed and confused. In particular, it is based on a confusion about the level of analysis (or reduction) at which one is approaching the mind. Here, we will provide a framework for clarifying at what level of analysis one is approaching the mind, and explain how a systemic failure to distinguish between different levels of analysis has led to profound misunderstandings of not only evolutionary psychology, but also of the entire cognitivist enterprise of approaching the mind at the level of mechanism. We will furthermore suggest that confusions between different levels of analysis are endemic throughout the psychological sciences—extending well beyond issues of modularity and evolutionary psychology. Therefore, researchers in all areas should take preventative measures to avoid this confusion in the future.


What has seemed to be an important but interminable debate about the nature of (massive) modularity is better conceptualized as the modularity mistake. Clarifying the level of analysis at which one is operating will not only resolve the debate, but render it moot. In its stead, researchers will be free to pursue much simpler, clearer, and more profound questions about how the mind works. If we proceed as usual, we will end up back in the same confused place where we started in another 40 years, arguing once again about who's on first. Confusing or collapsing across different levels of analysis is not just a problem for modularity and evolutionary psychology. Rather, it is the greatest problem facing early-21st-century psychology, dwarfing even the current replication crisis. Since at least the days of the neobehaviorists (e.g., Tolman, 1964), the ontology of the intentional level has become mingled with the functional level in all areas of the cognitive sciences (see Stich, 1986). Constructs such as thinking, reasoning, effort, intuition, deliberation, automaticity, and consciousness have become misunderstood and misused as functional-level descriptions of how the mind works. Appeals to a central agency who uses "their" memory, attention, reasoning, and so on have become commonplace and unremarkable. Even the concept of cognition itself has fallen into the same levels-of-analysis confusion seen in the modularity mistake. In the process, a shared notion of what it means to provide a coherent functional-level (or mechanistic) description of the mind has been lost.

We do not bring up these broader issues to resolve them here.  Rather, we wish to emphasize what is at stake when it comes to being clear about levels of analysis.  If we do not respect the distinctions between levels, no amount of hard work, nor mountains of data that we will ever collect will resolve the problems created by conflating them.  The only question is whether or not we are willing to begin the slow, difficult — but ultimately clarifying and redeeming — process of unconfounding the intentional and functional levels of analysis. The modularity mistake is as good a place as any to start.

Saturday, December 19, 2020

Robots at work: People prefer—and forgive—service robots with perceived feelings

Yam, K. C., Bingman, Y. E., et al.
Journal of Applied Psychology. 
Advance online publication. 


Organizations are increasingly relying on service robots to improve efficiency, but these robots often make mistakes, which can aggravate customers and negatively affect organizations. How can organizations mitigate the frontline impact of these robotic blunders? Drawing from theories of anthropomorphism and mind perception, we propose that people evaluate service robots more positively when they are anthropomorphized and seem more humanlike—capable of both agency (the ability to think) and experience (the ability to feel). We further propose that in the face of robot service failures, increased perceptions of experience should attenuate the negative effects of service failures, whereas increased perceptions of agency should amplify the negative effects of service failures on customer satisfaction. In a field study conducted in the world’s first robot-staffed hotel (Study 1), we find that anthropomorphism generally leads to higher customer satisfaction and that perceived experience, but not agency, mediates this effect. Perceived experience (but not agency) also interacts with robot service failures to predict customer satisfaction such that high levels of perceived experience attenuate the negative impacts of service failures on customer satisfaction. We replicate these results in a lab experiment with a service robot (Study 2). Theoretical and practical implications are discussed.

From Practical Contributions

Second, our findings also suggest that organizations should focus on encouraging perceptions of service robots' experience rather than agency. For example, when assigning names to robots or programming robots' voices, a female name and voice could potentially lead to enhanced perceptions of experience more so than a male name and voice (Gray et al., 2007). Likewise, service robots' programmed scripts should include content that conveys the capacity of experience, such as displaying emotions. Although the emerging service robotic technologies are not perfect and failures are inevitable, encouraging anthropomorphism and, more specifically, perceptions of experience can likely offset the negative effects of robot service failures.

Wednesday, November 25, 2020

The subjective turn

Jon Stewart
Originally posted 2 Nov 20

What is the human being? Traditionally, it was thought that human nature was something fixed, given either by nature or by God, once and for all. Humans occupy a unique place in creation by virtue of a specific combination of faculties that they alone possess, and this is what makes us who we are. This view comes from the schools of ancient philosophy such as Platonism, Aristotelianism and Stoicism, as well as the Christian tradition. More recently, it has been argued that there is actually no such thing as human nature but merely a complex set of behaviours and attitudes that can be interpreted in different ways. For this view, all talk of a fixed human nature is merely a naive and convenient way of discussing the human experience, but doesn’t ultimately correspond to any external reality. This view can be found in the traditions of existentialism, deconstruction and different schools of modern philosophy of mind.

There is, however, a third approach that occupies a place between these two. This view, which might be called historicism, claims that there is a meaningful conception of human nature, but that it changes over time as human society develops. This approach is most commonly associated with the German philosopher G W F Hegel (1770-1831). He rejects the claim of the first view, that of the essentialists, since he doesn’t think that human nature is something given or created once and for all. But he also rejects the second view since he doesn’t believe that the notion of human nature is just an outdated fiction we’ve inherited from the tradition. Instead, Hegel claims that it’s meaningful and useful to talk about the reality of some kind of human nature, and that this can be understood by an analysis of human development in history. Unfortunately, Hegel wrote in a rather inaccessible fashion, which has led many people to dismiss his views as incomprehensible or confused. His theory of philosophical anthropology, which is closely connected to his theory of historical development, has thus remained the domain of specialists. It shouldn’t.

With his astonishing wealth of knowledge about history and culture, Hegel analyses the ways in which what we today call subjectivity and individuality first arose and developed through time. He holds that, at the beginning of human history, people didn’t conceive of themselves as individuals in the same way that we do today. There was no conception of a unique and special inward sphere that we value so much in our modern self-image. Instead, the ancients conceived of themselves primarily as belonging to a larger group: the family, the tribe, the state, etc. This meant that questions of individual freedom or self-determination didn’t arise in the way that we’re used to understanding them.

Monday, January 7, 2019

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.

The info is here.

Monday, December 31, 2018

How free is our will?

Kevin Mitchell
Wiring The Brain Blog
Originally posted November 25, 2018

Here is an excerpt:

Being free – to my mind at least – doesn’t mean making decisions for no reasons, it means making them for your reasons. Indeed, I would argue that this is exactly what is required to allow any kind of continuity of the self. If you were just doing things on a whim all the time, what would it mean to be you? We accrue our habits and beliefs and intentions and goals over our lifetime, and they collectively affect how actions are suggested and evaluated.

Whether we are conscious of that is another question. Most of our reasons for doing things are tacit and implicit – they’ve been wired into our nervous systems without our even being aware of them. But they’re still part of us ­– you could argue they’re precisely what makes us us. Even if most of that decision-making happens subconsciously, it’s still you doing it.

Ultimately, whether you think you have free will or not may depend less on the definition of “free will” and more on the definition of “you”. If you identify just as the president – the decider-in-chief – then maybe you’ll be dismayed at how little control you seem to have or how rarely you really exercise it. (Not never, but maybe less often than your ego might like to think).

But that brings us back to a very dualist position, identifying you with only your conscious mind, as if it can somehow be separated from all the underlying workings of your brain. Perhaps it’s more appropriate to think that you really comprise all of the machinery of government, even the bits that the president never sees or is not even aware exists.

The info is here.

Saturday, December 22, 2018

Complexities for Psychiatry's Identity As a Medical Specialty

Mohammed Abouelleil Rashed
Kan Zaman Blog
Originally posted November 23, 2018

Here is an excerpt:

Doctors, researchers, governments, pharmaceutical companies, and patient groups each have their own interests and varying abilities to influence the construction of disease categories. This creates the possibility for disagreement over the legitimacy of certain conditions, something we can see playing out in the ongoing debates surrounding Chronic Fatigue Syndrome, a condition that “receives much more attention from its sufferers and their supporters than from the medical community” (Simon 2011: 91). And, in psychiatry, it has long been noted that some major pharmaceutical companies influence the construction of disorder in order to create a market for the psychotropic drugs they manufacture. From the perspective of medical anti-realism (in the constructivist form presented here), these influences are no longer seen as a hindrance to the supposedly objective, ‘natural kind’ status of disease categories, but as key factors involved in their construction. Thus, the lobbying power of the American Psychiatric Association, the vested interests of pharmaceutical companies, and the desire of psychiatrists as a group to maintain their prestige do not undermine the identity of psychiatry as a medical specialty; what they do is highlight the importance of emphasizing the interests of patient groups as well as utilitarian and economic criteria to counteract and respond to the other interests. Medical constructivism is not a uniquely psychiatric ontology, it is a medicine-wide ontology; it applies to schizophrenia as it does to hypertension, appendicitis, and heart disease. Owing to the normative complexity of psychiatry (outlined earlier) and to the fact that loss of freedom is often involved in psychiatric practice, the vested interests involved in psychiatry are more complex and harder to resolve than in many other medical specialties. But that in itself is not a hindrance to psychiatry’s identity as a medical speciality.

The info is here.

Friday, December 7, 2018

Neuroexistentialism: A New Search for Meaning

Owen Flanagan and Gregg D. Caruso
The Philosopher's Magazine
Originally published November 6, 2018

Existentialisms are responses to recognisable diminishments in the self-image of persons caused by social or political rearrangements or ruptures, and they typically involve two steps: (a) admission of the anxiety and an analysis of its causes, and (b) some sort of attempt to regain a positive, less anguished, more hopeful image of persons. With regard to the first step, existentialisms typically involve a philosophical expression of the anxiety that there are no deep, satisfying answers that make sense of the human predicament and explain what makes human life meaningful, and thus that there are no secure foundations for meaning, morals, and purpose. There are three kinds of existentialisms that respond to three different kinds of grounding projects – grounding in God’s nature, in a shared vision of the collective good, or in science. The first-wave existentialism of Kierkegaard, Dostoevsky, and Nietzsche expressed anxiety about the idea that meaning and morals are made secure because of God’s omniscience and good will. The second-wave existentialism of Sartre, Camus, and de Beauvoir was a post-Holocaust response to the idea that some uplifting secular vision of the common good might serve as a foundation. Today, there is a third-wave existentialism, neuroexistentialism, which expresses the anxiety that, even as science yields the truth about human nature, it also disenchants.

Unlike the previous two waves of existentialism, neuroexistentialism is not caused by a problem with ecclesiastical authority, nor by the shock of coming face to face with the moral horror of nation-state actors and their citizens. Rather, neuroexistentialism is caused by the rise of the scientific authority of the human sciences and a resultant clash between the scientific and humanistic image of persons. Neuroexistentialism is a twenty-first-century anxiety over the way contemporary neuroscience helps secure, in a particularly vivid way, the message of Darwin from 150 years ago: that humans are animals – not half animal, not some percentage animal, not just above the animals, but 100 percent animal. Every day and in every way, neuroscience removes the last vestiges of an immaterial soul or self. It has no need for such posits. It also suggests that the mind is the brain and all mental processes just are (or are realised in) neural processes, that introspection is a poor instrument for revealing how the mind works, that there is no ghost in the machine or Cartesian theatre where consciousness comes together, that death is the end since when the brain ceases to function so too does consciousness, and that our sense of self may in part be an illusion.

The info is here.

Sunday, September 9, 2018

People Are Averse to Machines Making Moral Decisions

Yochanan E. Bigman and Kurt Gray
In press, Cognition


Do people want autonomous machines making moral decisions? Nine studies suggest that the answer is ‘no’—in part because machines lack a complete mind. Studies 1-6 find that people are averse to machines making morally relevant driving, legal, medical, and military decisions, and that this aversion is mediated by the perception that machines can neither fully think nor feel. Studies 5-6 find that this aversion exists even when moral decisions have positive outcomes. Studies 7-9 briefly investigate three potential routes to increasing the acceptability of machine moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’ perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9). Although some of these routes show promise, the aversion to machine moral decision-making is difficult to eliminate. This aversion may prove challenging for the integration of autonomous technology in moral domains including medicine, the law, the military, and self-driving vehicles.

The research is here.

Saturday, August 18, 2018

Rationalization is rational

Fiery Cushman
Uploaded July 18, 2018


Rationalization occurs when a person has performed an action and then concocts the beliefs and desires that would have made it rational. Then, people often adjust their own beliefs and desires to match the concocted ones. While many studies demonstrate rationalization, and a few theories identify its underlying cognitive mechanisms, we have little understanding of its function. Why is the mind designed to construct post hoc rationalizations of its behavior, and then to adopt them? This design may accomplish an important task: to transfer information between the many different processes and representations that influence our behavior. Human decision-making does not rely on a single process; it is influenced by reason, habit, instinct, cultural norms, and so on. Several of the processes that influence our behavior are not organized according to rational choice (i.e., maximizing desires conditioned on beliefs). Thus, rationalization extracts implicit information—true beliefs and useful desires—from the influence of these non-rational systems on behavior. This is not a process of self-perception as traditionally conceived, in which one infers the hidden contents of unconscious reasons. Rather, it is a useful fiction. It is a fiction because it imputes reason to non-rational psychological processes; it is useful because it can improve subsequent reasoning. More generally, rationalization is one example of a broader class of “representational exchange” mechanisms, which transfer information between the many different psychological processes that guide our behavior. This perspective reveals connections to theory of mind, inverse reinforcement learning, and reflective equilibrium.

The paper is here.

Asking patients why they engaged in a behavior is another example of a useful fiction. Dr. Cushman suggests psychologists instead ask: What made that worth doing?

Tuesday, August 14, 2018

The developmental and cultural psychology of free will

Tamar Kushnir
Philosophy Compass
Originally published July 12, 2018


This paper provides an account of the developmental origins of our belief in free will based on research from a range of ages—infants, preschoolers, older children, and adults—and across cultures. The foundations of free will beliefs are in infants' understanding of intentional action—their ability to use context to infer when agents are free to “do otherwise” and when they are constrained. In early childhood, new knowledge about causes of action leads to new abilities to imagine constraints on action. Moreover, unlike adults, young children tend to view psychological causes (i.e., desires) and social causes (i.e., following rules or group norms, being kind or fair) of action as constraints on free will. But these beliefs change, and also diverge across cultures, corresponding to differences between Eastern and Western philosophies of mind, self, and action. Finally, new evidence shows developmentally early, culturally dependent links between free will beliefs and behavior, in particular when choice‐making requires self‐control.

Here is part of the Conclusion:

I've argued here that free will beliefs are early‐developing and culturally universal, and that the folk psychology of free will involves considering actions in the context of alternative possibilities and constraints on possibility. There are developmental differences in how children reason about the possibility of acting against desires, and there are both developmental and cultural differences in how children consider the social and moral limitations on possibility.  Finally, there is new evidence emerging for developmentally early, culturally moderated links between free will beliefs and willpower, delay of gratification, and self‐regulation.

The article is here.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius — almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Friday, August 11, 2017

The real problem (of consciousness)

Anil K Seth
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.
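The core loop of predictive processing described in the excerpt — a top-down prediction continually corrected by bottom-up prediction errors — can be illustrated with a toy sketch. This is not code from Seth's article or any real cortical model; the function name, scalar world, and learning rate are all invented for illustration.

```python
# Toy sketch of prediction-error minimisation: the "brain" holds a
# prediction and nudges it toward incoming sensory signals, so the
# prediction error shrinks as the internal model improves.

def perceive(observations, prediction=0.0, learning_rate=0.1):
    """Iteratively refine a scalar prediction against incoming signals."""
    errors = []
    for signal in observations:
        error = signal - prediction          # bottom-up prediction error
        prediction += learning_rate * error  # top-down prediction update
        errors.append(abs(error))
    return prediction, errors

# A steady "world" that always sends the value 5.0: the prediction
# converges toward it, and the error decreases on every step.
final, errs = perceive([5.0] * 50)
print(round(final, 3))     # close to 5.0
print(errs[0] > errs[-1])  # True: later errors are smaller
```

The point of the sketch is the direction of information flow: the content of perception lives in `prediction` (top-down), while the sensory stream contributes only the correction term `error` (bottom-up), matching the "controlled hallucination" framing above.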

The article is here.

Thursday, July 6, 2017

What the Rise of Sentient Robots Will Mean for Human Beings

George Musser
Originally posted June 19, 2017

Here is an excerpt:

“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.

No one claims that robots have a rich inner experience — that they have pride in floors they've vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some similar qualities to the human mind, including empathy, adaptability, and gumption.

Beyond it just being cool to create robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.

Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.

The article is here.

Monday, August 8, 2016

Why You Don’t Know Your Own Mind

By Alex Rosenberg
The New York Times
Originally published July 18, 2016

Here is an excerpt:

In fact, controlled experiments in cognitive science, neuroimaging and social psychology have repeatedly shown how wrong we can be about our real motivations, the justification of firmly held beliefs and the accuracy of our sensory equipment. This trend began even before the work of psychologists such as Benjamin Libet, who showed that the conscious feeling of willing an act actually occurs after the brain process that brings about the act — a result replicated and refined hundreds of times since his original discovery in the 1980s.

Around the same time, a physician working in Britain, Lawrence Weiskrantz, discovered “blindsight” — the ability, first of blind monkeys, and then of some blind people, to pick out objects by their color without the conscious sensation of color. The inescapable conclusion that behavior can be guided by visual information even when we cannot be aware of having it is just one striking example of how the mind is fooled and the ways it fools itself.

The entire article is here.

Thursday, May 26, 2016

Morality When the Mind is Unknowable

By Rita A. McNamara
Character and Content
Originally posted on May 2, 2016

Here is an excerpt:

Our ability to infer the presence and content of other minds is a fundamental building block underlying the intuitions about right and wrong that we use to navigate our social worlds. People living in Western societies often identify internal motives, dispositions, and desires as the causes of all human action. That these behavioral drivers are inside of another mind is not an issue because, in this Western model of mind, people can be read like books – observers can infer other people’s motives and desires and use these inferences to understand and predict behavior. Given this Western model of mind as an internally coherent, autonomous driver of action, the effort spent on determining whether Martin meant to harm Barras seems so obviously justified as to go without question. But this is not necessarily the case for all cultures.

In many societies, people focus far more on relational ties and polite observance of social duties than on internal mental states. On the other end of the cultural spectrum of mental state focus, some small-scale societies have ‘Opacity of Mind’ norms that directly prohibit inference about mental states. In contrast to the Western model of mind, these Opacity of Mind norms often suggest that it is either impossible to know what another person is thinking, or rude to intrude into others’ private mental space. So, while mental state reasoning is a key foundation for intuitions about right and wrong, these intuitions and mental state perceptions are also dependent upon cultural influences.

The information is here.

Monday, December 7, 2015

Everyone Else Could Be a Mindless Zombie

By Kurt Gray
Time Magazine
Originally posted November 17, 2015

Here is an excerpt:

Our research reveals that whether something can think or feel is mostly a matter of perception, which can lead to bizarre reversals. Objectively speaking, humans are smarter than cats, and yet people treat their pets like people and the homeless like objects. Objectively speaking, pigs are smarter than baby seals, but people will scream about seal clubbing while eating a BLT.

That minds are perceived spells trouble for political harmony. When people see minds differently in chickens, fetuses, and enemy combatants, it leads to conflicts about vegetarianism, abortion, and torture. Despite facilitating these debates, mind perception can make our moral opponents seem more human and less monstrous. With abortion, both liberals and conservatives agree that killing babies is immoral, and disagree only about whether a fetus is a baby or a mass of mindless cells.

The entire article is here.