Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Brain. Show all posts

Tuesday, February 6, 2024

Anthropomorphism in AI

Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350


AI research is growing rapidly raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., in the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need of complementing the practical with a conceptual analysis.

Here are my thoughts:

Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.

There are a number of reasons why we do this. One reason is that it helps us to make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.

Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things that we perceive as being similar to ourselves, and so we may be more likely to trust and interact with AI that we see as being somewhat human-like.

However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may trust its responses more than its actual capabilities warrant.

This can lead to problems, such as when the AI does not respond in a way that we expect. We may then attribute this to the AI being "sad" or "angry," when in reality it is simply following its programming.

It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more likely to give it rights and protections that it does not deserve. For example, we may believe that an AI should not be turned off, even if it is causing harm.

In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.

Saturday, January 13, 2024

Consciousness does not require a self

James Cook
Originally published 14 DEC 23

Here is an excerpt:

Beyond the neuroscientific study of consciousness, phenomenological analysis also reveals the self to not be the possessor of experience. In mystical experiences induced by meditation or psychedelics, individuals typically enter a mode of experience in which the psychological self is absent, yet consciousness remains. While this is not the default state of the mind, the presence of consciousness in the absence of a self shows that consciousness is not dependent on an experiencing subject. What is consciousness if not a capacity of an experiencing subject? Such an experience reveals consciousness to consist of a formless awareness at its core, an empty space in which experience arises, including the experience of being a self. The self does not possess consciousness, consciousness is the experiential space in which the image of a psychological self can appear. This mode of experience can be challenging to conceptualise but is very simple when experienced – it is a state of simple appearances arising without the extra add-on of a psychological self inspecting them.

We can think of a conscious system as a system that is capable of holding beliefs about the qualitative character of the world. We should not think of belief here as referring to complex conceptual beliefs, such as believing that Paris is the capital of France, but as the simple ability to hold that the world is a certain way. You do this when you visually perceive a red apple in front of you, the experience is one of believing the apple to exist with all of its qualities such as roundness and redness. This way of thinking is in line with the work of Immanuel Kant, who argued that we never come to know reality as it is but instead only experience phenomenal representations of reality [9]. We are not conscious of the world as it is, but as we believe it to be.

Here is my take:

For centuries, we've assumed consciousness and the sense of self are one and the same. This article throws a wrench in that assumption, proposing that consciousness can exist without a self. Imagine experiencing sights, sounds, and sensations without the constant "me" narrating it all. That's what "selfless consciousness" means – raw awareness untouched by self-reflection.

The article then posits that our familiar sense of self, complete with its stories and memories, isn't some fundamental truth but rather a clever prediction concocted by our brains. This "predicted self" helps us navigate the world and interact with others, but it's not necessarily who we truly are.

Decoupling consciousness from the self opens a Pandora's box of possibilities. We might find consciousness in unexpected places, like animals or even artificial intelligence. Understanding brain function could shift dramatically, and our very notions of identity, free will, and reality might need a serious rethink. This is a bold new perspective on what it means to be conscious, and its implications are quite dramatic.

Saturday, November 26, 2022

Why are scientists growing human brain cells in the lab?

Hannah Flynn
Medical News Today
Originally posted 24 OCT 22

Here is an excerpt:

Ethical boundaries

One of the limitations of using organoids for research is that they are observed in vitro. The way an organ might act in a system, in connection with different organs, or when exposed to metabolites in the blood, for example, could be different from how it behaves when cells are isolated in a single tissue.

More recently, researchers placed an organoid derived from human cells inside the brain of a rat, in a study outlined in Nature.

Using neural organoids that had been allowed to self-organize, these were implanted into the somatosensory cortex — which is in the middle of the brain — of newborn rats. The scientists then found that these cortical organoids had grown axons throughout the rat brain, and were able to contribute to reward-seeking behavior in the rat.

This breakthrough suggested that the lab-created cells are recognizable to other tissues in the body and can influence systems.

Combining the cells of animals and humans is not without some ethical considerations. In fact, this has been the focus of a recent project.

The Brainstorm Organoid Project published its first paper in the form of a comment piece outlining the benefits of the project in Nature Neuroscience on October 18, 2022, the week after the aforementioned study was published.

The Project brought together prominent bioethicists as part of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative of the US National Institutes of Health, which funded the project.

Co-author of the comment piece Dr. Jeantine E Lunshof, head of collaborative ethics at the Wyss Institute for Biologically Inspired Engineering at Harvard University, MA, told Medical News Today in an interview that existing biomedical research and animal welfare guidelines already provide a framework for this type of work to be done ethically.

Pointing to the International Society for Stem Cell Research guidelines published last year, she stated that those do cover the creation of chimeras, where cells of two species are combined.

These hybrids with non-primates are permitted, she explained: “There is a very, very strong emphasis on animal welfare in this ISSCR guideline document that also aligns with existing animal welfare and animal research protocols.”

The potential benefits of this research needed to be considered, “though at this moment, we are still at the stage that a lot of fundamental research is necessary. And I think that that really must be emphasized,” she said.

Sunday, November 6, 2022

‘Breakthrough’ finding shows how modern humans grow more brain cells than Neanderthals

Rodrigo Pérez Ortega
Originally posted 8 SEP 22

We humans are proud of our big brains, which are responsible for our ability to plan ahead, communicate, and create. Inside our skulls, we pack, on average, 86 billion neurons—up to three times more than those of our primate cousins. For years, researchers have tried to figure out how we manage to develop so many brain cells. Now, they’ve come a step closer: A new study shows a single amino acid change in a metabolic gene helps our brains develop more neurons than other mammals—and more than our extinct cousins, the Neanderthals.

The finding “is really a breakthrough,” says Brigitte Malgrange, a developmental neurobiologist at the University of Liège who was not involved in the study. “A single amino acid change is really, really important and gives rise to incredible consequences regarding the brain.”

What makes our brain human has been the interest of neurobiologist Wieland Huttner at the Max Planck Institute of Molecular Cell Biology and Genetics for years. In 2016, his team found that a mutation in the ARHGAP11B gene, found in humans, Neanderthals, and Denisovans but not other primates, caused more production of cells that develop into neurons. Although our brains are roughly the same size as those of Neanderthals, our brain shapes differ and we created complex technologies they never developed. So, Huttner and his team set out to find genetic differences between Neanderthals and modern humans, especially in cells that give rise to neurons of the neocortex. This region behind the forehead is the largest and most recently evolved part of our brain, where major cognitive processes happen.

The team focused on TKTL1, a gene that in modern humans has a single amino acid change—from lysine to arginine—from the version in Neanderthals and other mammals. By analyzing previously published data, researchers found that TKTL1 was mainly expressed in progenitor cells called basal radial glia, which give rise to most of the cortical neurons during development.

Sunday, August 21, 2022

Medial and orbital frontal cortex in decision-making and flexible behavior

Klein-Flügge, M. C., Bongioanni, A., & Rushworth, M. F. (2022).


The medial frontal cortex and adjacent orbitofrontal cortex have been the focus of investigations of decision-making, behavioral flexibility, and social behavior. We review studies conducted in humans, macaques, and rodents and argue that several regions with different functional roles can be identified in the dorsal anterior cingulate cortex, perigenual anterior cingulate cortex, anterior medial frontal cortex, ventromedial prefrontal cortex, and medial and lateral parts of the orbitofrontal cortex. There is increasing evidence that the manner in which these areas represent the value of the environment and specific choices is different from subcortical brain regions and more complex than previously thought. Although activity in some regions reflects distributions of reward and opportunities across the environment, in other cases, activity reflects the structural relationships between features of the environment that animals can use to infer what decision to take even if they have not encountered identical opportunities in the past.


Neural systems that represent the value of the environment exist in many vertebrates. An extended subcortical circuit spanning the striatum, midbrain, and brainstem nuclei of mammals corresponds to these ancient systems. In addition, however, mammals possess several frontal cortical regions concerned with guidance of decision-making and adaptive, flexible behavior. Although these frontal systems interact extensively with these subcortical circuits, they make specific contributions to behavior and also influence behavior via other cortical routes. Some areas such as the ACC, which is present in a broad range of mammals, represent the distribution of opportunities in an environment over space and time, whereas other brain regions such as amFC and dmPFC have roles in representing structural associations and causal links between environmental features, including aspects of the social environment (Figure 8). Although the origins of these areas and their functions are traceable to rodents, they are especially prominent in primates. They make it possible not just to select choices on the basis of past experience of identical situations, but to make inferences to guide decisions in new scenarios.

Tuesday, January 18, 2022

MIT Researchers Just Discovered an AI Mimicking the Brain on Its Own

Eric James Beyer
Interesting Engineering
Originally posted 18 DEC 21

Here is an excerpt:

In the wake of these successes, Martin began to wonder whether or not the same principle could be applied to higher-level cognitive functions like language processing. 

“I said, let’s just look at neural networks that are successful and see if they’re anything like the brain. My bet was that it would work, at least to some extent.”

To find out, Martin and colleagues compared data from 43 artificial neural network language models against fMRI and ECoG neural recordings taken while subjects listened to or read words as part of a text. The AI models the group surveyed covered all the major classes of available neural network approaches for language-based tasks. Some of them were more basic embedding models like GloVe, which clusters semantically similar words together in groups. Others, like the models known as GPT and BERT, were far more complex. These models are trained to predict the next word in a sequence or predict a missing word within a certain context, respectively. 

“The setup itself becomes quite simple,” Martin explains. “You just show the same stimuli to the models that you show to the subjects [...]. At the end of the day, you’re left with two matrices, and you test if those matrices are similar.”
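The two-matrices comparison Martin describes can be sketched as a representational-similarity computation. This is only a minimal illustration with made-up toy data and placeholder names; the study itself used regression-based neural predictivity metrics rather than this exact procedure.

```python
import numpy as np

def similarity_matrix(responses: np.ndarray) -> np.ndarray:
    """Pairwise correlation between stimuli (rows = stimuli)."""
    return np.corrcoef(responses)

def brain_model_score(model_acts: np.ndarray, neural_recs: np.ndarray) -> float:
    """Show the same stimuli to model and subjects, build one similarity
    matrix from each, and correlate their upper triangles."""
    m = similarity_matrix(model_acts)
    b = similarity_matrix(neural_recs)
    iu = np.triu_indices_from(m, k=1)  # upper triangle, excluding diagonal
    return float(np.corrcoef(m[iu], b[iu])[0, 1])

# Toy data: 10 stimuli, 300 model features, 50 recording channels.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((10, 300))
neural_recs = rng.standard_normal((10, 50))
score = brain_model_score(model_acts, neural_recs)
```

A score near 1 would mean the model and the brain organize the stimuli similarly; with random data like this it hovers near 0.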

And the results? 

“I think there are three-and-a-half major findings here,” Schrimpf says with a laugh. “I say ‘and a half’ because the last one we still don’t fully understand.”

Machine learning that mirrors the brain

The finding that sticks out to Martin most immediately is that some of the models predict neural data extremely well. In other words, regardless of how good a model was at performing a task, some of them appear to resemble the brain’s cognitive mechanics for language processing. Intriguingly, the team at MIT identified the GPT model variants as the most brain-like out of the group they looked at.

Saturday, September 25, 2021

The prefrontal cortex and (uniquely) human cooperation: a comparative perspective

Zoh, Y., Chang, S.W.C. & Crockett, M.J.
Neuropsychopharmacol. (2021). 


Humans have an exceptional ability to cooperate relative to many other species. We review the neural mechanisms supporting human cooperation, focusing on the prefrontal cortex. One key feature of human social life is the prevalence of cooperative norms that guide social behavior and prescribe punishment for noncompliance. Taking a comparative approach, we consider shared and unique aspects of cooperative behaviors in humans relative to nonhuman primates, as well as divergences in brain structure that might support uniquely human aspects of cooperation. We highlight a medial prefrontal network common to nonhuman primates and humans supporting a foundational process in cooperative decision-making: valuing outcomes for oneself and others. This medial prefrontal network interacts with lateral prefrontal areas that are thought to represent cooperative norms and modulate value representations to guide behavior appropriate to the local social context. Finally, we propose that more recently evolved anterior regions of prefrontal cortex play a role in arbitrating between cooperative norms across social contexts, and suggest how future research might fruitfully examine the neural basis of norm arbitration.


The prefrontal cortex, in particular its more anterior regions, has expanded dramatically over the course of human evolution. In tandem, the scale and scope of human cooperation has dramatically outpaced its counterparts in nonhuman primate species, manifesting as complex systems of moral codes that guide normative behaviors even in the absence of punishment or repeated interactions. Here, we provided a selective review of the neural basis of human cooperation, taking a comparative approach to identify the brain systems and social behaviors that are thought to be unique to humans. Humans and nonhuman primates alike cooperate on the basis of kinship and reciprocity, but humans are unique in their abilities to represent shared goals and self-regulate to comply with and enforce cooperative norms on a broad scale. We highlight three prefrontal networks that contribute to cooperative behavior in humans: a medial prefrontal network, common to humans and nonhuman primates, that values outcomes for self and others; a lateral prefrontal network that guides cooperative goal pursuit by modulating value representations in the context of local norms; and an anterior prefrontal network that we propose serves uniquely human abilities to reflect on one’s own behavior, commit to shared social contracts, and arbitrate between cooperative norms across diverse social contexts. We suggest future avenues for investigating cooperative norm arbitration and how it is implemented in prefrontal networks.

Sunday, June 20, 2021

Artificial intelligence research may have hit a dead end

Thomas Nail
Originally published 30 April 21

Here is an excerpt:

If it's true that cognitive fluctuations are requisite for consciousness, it would also take time for stable frequencies to emerge and then synchronize with one another in resting states. And indeed, this is precisely what we see in children's brains when they develop higher and more nested neural frequencies over time.

Thus, a general AI would probably not be brilliant in the beginning. Intelligence evolved through the mobility of organisms trying to synchronize their fluctuations with the world. It takes time to move through the world and learn to sync up with it. As the science fiction author Ted Chiang writes, "experience is algorithmically incompressible." 

This is also why dreaming is so important. Experimental research confirms that dreams help consolidate memories and facilitate learning. Dreaming is also a state of exceptionally playful and freely associated cognitive fluctuations. If this is true, why should we expect human-level intelligence to emerge without dreams? This may be why newborns spend roughly twice as much time dreaming in REM sleep as adults do. They have a lot to learn, as would androids.

In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep.

Friday, April 16, 2021

Reduced decision bias and more rational decision making following ventromedial prefrontal cortex damage

S. Manohar, et al.
Cortex, Volume 138, 
May 2021, Pages 24-37


Human decisions are susceptible to biases, but establishing causal roles of brain areas has proved to be difficult. Here we studied decision biases in 17 people with unilateral medial prefrontal cortex damage and a rare patient with bilateral ventromedial prefrontal cortex (vmPFC) lesions. Participants learned to choose which of two options was most likely to win, and then bet money on the outcome. Thus, good performance required not only selecting the best option, but also the amount to bet. Healthy people were biased by their previous bet, as well as by the unchosen option's value. Unilateral medial prefrontal lesions reduced these biases, leading to more rational decisions. Bilateral vmPFC lesions resulted in more strategic betting, again with less bias from the previous trial, paradoxically improving performance overall. Together, the results suggest that vmPFC normally imposes contextual biases, which in healthy people may actually be suboptimal in some situations.

From the Discussion

The findings presented here show that it is indeed possible for more rational decision making to emerge, at least on a value-based reversal learning task, after bilateral vmPFC lesions. This is not to say that all decisions and behaviours become more rational after such brain damage. Clearly, although he managed to continue to work in a demanding job, patient MJ showed evidence of dysfunction in social cognition and some aspects of decision making and judgment in everyday life, just as previously reported cases (Bechara et al., 2000; Berlin et al., 2004; Eslinger & Damasio, 1985; Shamay-Tsoory et al., 2005).

There is some previous circumstantial evidence that mPFC lesions may reduce decision biases. For example, patients with mPFC damage show smaller biases in probabilistic estimation (O’Callaghan et al., 2018), reduced affective contributions to reasoning (Shamay-Tsoory et al., 2005), and may indeed make more utilitarian moral judgements, suggesting more rational valuation with less affective bias (Ciaramelli et al., 2007; Koenigs et al., 2007; Krajbich et al., 2009). These effects might be underpinned by a more general increase in rationality after damage to this region. One possible explanation is that individuals with vmPFC lesions might be free of affective biases that normally contribute to such decision making, but this remains to be established.

Friday, April 2, 2021

Neuroscience shows how interconnected we are – even in a time of isolation

Lisa Feldman Barrett
The Guardian
Originally posted 10 Feb 21

Here is an excerpt:

Being the caretakers of each other’s body budgets is challenging when so many of us feel lonely or are physically alone. But social distancing doesn’t have to mean social isolation. Humans have a special power to connect with and regulate each other in another way, even at a distance: with words. If you’ve ever received a text message from a loved one and felt a rush of warmth, or been criticised by your boss and felt like you’d been punched in the gut, you know what I’m talking about. Words are tools for regulating bodies.

In my research lab, we run experiments to demonstrate this power of words. Our participants lie still in a brain scanner and listen to evocative descriptions of different situations. One is about walking into your childhood home and being smothered in hugs and smiles. Another is about awakening to your buzzing alarm clock and finding a sweet note from your significant other. As they listen, we see increased activity in brain regions that control heart rate, breathing, metabolism and the immune system. Yes, the same brain regions that process language also help to run your body budget. Words have power over your biology – your brain wiring guarantees it.

Our participants also had increased activity in brain regions involved in vision and movement, even though they were lying still with their eyes closed. Their brains were changing the firing of their own neurons to simulate sight and motion in their mind’s eye. This same ability can build a sense of connection, from a few seconds of poor-quality mobile phone audio, or from a rectangle of pixels in the shape of a friend’s face. Your brain fills in the gaps – the sense data that you don’t receive through these media – and can ease your body budget deficit in the moment.

In the midst of social distancing, my Zoom friend and I rediscovered the body-budgeting benefits of older means of communication, such as letter writing. The handwriting of someone we care about can have an unexpected emotional impact. A piece of paper becomes a wave of love, a flood of gratitude, a belly-aching laugh.

Wednesday, November 25, 2020

The subjective turn

Jon Stewart
Originally posted 2 Nov 20

What is the human being? Traditionally, it was thought that human nature was something fixed, given either by nature or by God, once and for all. Humans occupy a unique place in creation by virtue of a specific combination of faculties that they alone possess, and this is what makes us who we are. This view comes from the schools of ancient philosophy such as Platonism, Aristotelianism and Stoicism, as well as the Christian tradition. More recently, it has been argued that there is actually no such thing as human nature but merely a complex set of behaviours and attitudes that can be interpreted in different ways. For this view, all talk of a fixed human nature is merely a naive and convenient way of discussing the human experience, but doesn’t ultimately correspond to any external reality. This view can be found in the traditions of existentialism, deconstruction and different schools of modern philosophy of mind.

There is, however, a third approach that occupies a place between these two. This view, which might be called historicism, claims that there is a meaningful conception of human nature, but that it changes over time as human society develops. This approach is most commonly associated with the German philosopher G W F Hegel (1770-1831). He rejects the claim of the first view, that of the essentialists, since he doesn’t think that human nature is something given or created once and for all. But he also rejects the second view since he doesn’t believe that the notion of human nature is just an outdated fiction we’ve inherited from the tradition. Instead, Hegel claims that it’s meaningful and useful to talk about the reality of some kind of human nature, and that this can be understood by an analysis of human development in history. Unfortunately, Hegel wrote in a rather inaccessible fashion, which has led many people to dismiss his views as incomprehensible or confused. His theory of philosophical anthropology, which is closely connected to his theory of historical development, has thus remained the domain of specialists. It shouldn’t.

With his astonishing wealth of knowledge about history and culture, Hegel analyses the ways in which what we today call subjectivity and individuality first arose and developed through time. He holds that, at the beginning of human history, people didn’t conceive of themselves as individuals in the same way that we do today. There was no conception of a unique and special inward sphere that we value so much in our modern self-image. Instead, the ancients conceived of themselves primarily as belonging to a larger group: the family, the tribe, the state, etc. This meant that questions of individual freedom or self-determination didn’t arise in the way that we’re used to understanding them.

Thursday, September 24, 2020

Neural signatures of prosocial behaviors

Bellucci, G., Camilleri, J., and others
Neuroscience & Biobehavioral Reviews
Volume 118, November 2020, Pages 186-195


Prosocial behaviors are hypothesized to require socio-cognitive and empathic abilities—engaging brain regions attributed to the mentalizing and empathy brain networks. Here, we tested this hypothesis with a coordinate-based meta-analysis of 600 neuroimaging studies on prosociality, mentalizing and empathy (∼12,000 individuals). We showed that brain areas recruited by prosocial behaviors only partially overlap with the mentalizing (dorsal posterior cingulate cortex) and empathy networks (middle cingulate cortex). Additionally, the dorsolateral and ventromedial prefrontal cortices were preferentially activated by prosocial behaviors. Analyses on the functional connectivity profile and functional roles of the neural patterns underlying prosociality revealed that in addition to socio-cognitive and empathic processes, prosocial behaviors further involve evaluation processes and action planning, likely to select the action sequence that best satisfies another person’s needs. By characterizing the multidimensional construct of prosociality at the neural level, we provide insights that may support a better understanding of normal and abnormal social cognition (e.g., psychopathy).


• A psychological proposal posits prosociality engages brain regions of the mentalizing and empathy networks.

• Our meta-analysis provides only partial support to this proposal.

• Prosocial behaviors engage brain regions associated with socio-cognitive and empathic abilities.

• However, they also engage brain regions associated with evaluation and planning.


Taken together, we found a set of brain regions that were consistently activated by prosocial behaviors. These activation patterns partially overlapped with mentalizing and empathy brain regions, lending support to the hypothesis based on psychological research that socio-cognitive and empathic abilities are central to prosociality. However, we also found that the vmPFC and, in particular, the dlPFC were preferentially recruited by prosocial acts, suggesting that prosocial behaviors require the involvement of other important processes. Analyses on their functional connectivity profile and functional roles suggest that the vmPFC and dlPFC might be involved in valuation and planning of prosocial actions, respectively. These results clarify the role of mentalizing and empathic abilities in prosociality and provide useful insights into the neuropsychological processes underlying human social behaviors. For instance, they might help understand where and how things go awry in different neural and behavioral disorders such as psychopathy and antisocial behavior (Blair, 2007).


Sunday, May 10, 2020

Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly

Pim Haselager & Giulio Mecacci (2020)
AJOB Neuroscience, 11:2, 113-119
DOI: 10.1080/21507740.2020.1740353


The human species is combining an increased understanding of our cognitive machinery with the development of a technology that can profoundly influence our lives and our ways of living together. Our sciences enable us to see our strengths and weaknesses, and build technology accordingly. What would future historians think of our current attempts to build increasingly smart systems, the purposes for which we employ them, the almost unstoppable goldrush toward ever more commercially relevant implementations, and the risk of superintelligence? We need a more profound reflection on what our science shows us about ourselves, what our technology allows us to do with that, and what, apparently, we aim to do with those insights and applications. As the smartest species on the planet, we don’t need more intelligence. Since we appear to possess an underdeveloped capacity to act ethically and empathically, we rather require the kind of technology that enables us to act more consistently upon ethical principles. The problem is not to formulate ethical rules, it’s to put them into practice. Cognitive neuroscience and AI provide the knowledge and the tools to develop the moral crutches we so clearly require. Why aren’t we building them? We don’t need superintelligence, we need superethics.

The article is here.

Sunday, April 5, 2020

Why your brain is not a computer

Matthew Cobb
Originally posted 27 Feb 20

Here is an excerpt:

The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.

By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.

The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.

Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.

The info is here.

Friday, February 21, 2020

Friends or foes: Is empathy necessary for moral behavior?

Jean Decety and Jason M. Cowell
Perspect Psychol Sci. 2014 Sep; 9(4): 525–537.
doi: 10.1177/1745691614545130


The past decade has witnessed a flurry of empirical and theoretical research on morality and empathy, as well as increased interest and usage in the media and the public arena. At times, in both popular discourse and academia, morality and empathy are used interchangeably, and quite often the latter is considered to play a foundational role for the former. In this article, we argue that, while there is a relationship between morality and empathy, it is not as straightforward as it appears at first glance. Moreover, it is critical to distinguish between the different facets of empathy (emotional sharing, empathic concern, and perspective taking), as each uniquely influences moral cognition and predicts differential outcomes in moral behavior. Empirical evidence and theories from evolutionary biology, developmental, behavioral, and affective and social neuroscience are comprehensively integrated in support of this argument. The wealth of findings illustrates a complex and equivocal relationship between morality and empathy. The key to understanding such relations is to be more precise on the concepts being used, and perhaps abandoning the muddy concept of empathy.

From the Conclusion:

To wrap up on a provocative note, it may be advantageous for the science of morality, in the future, to refrain from using the catch-all term of empathy, which applies to a myriad of processes and phenomena, and as a result yields confusion in both understanding and predictive ability. In both academic and applied domains such as medicine, ethics, law and policy, empathy has become an enticing, but muddy notion, potentially leading to misinterpretation. If ancient Greek philosophy has taught us anything, it is that when a concept is attributed so many meanings, it is at risk of losing its function.

The article is here.

Monday, February 10, 2020

The medications that change who we are

Zaria Gorvett
Originally published 8 Jan 20

Here are two excerpts:

According to Golomb, this is typical – in her experience, most patients struggle to recognise their own behavioural changes, let alone connect them to their medication. In some instances, the realisation comes too late: the researcher was contacted by the families of a number of people, including an internationally renowned scientist and a former editor of a legal publication, who took their own lives.

We’re all familiar with the mind-bending properties of psychedelic drugs – but it turns out ordinary medications can be just as potent. From paracetamol (known as acetaminophen in the US) to antihistamines, statins, asthma medications and antidepressants, there’s emerging evidence that they can make us impulsive, angry, or restless, diminish our empathy for strangers, and even manipulate fundamental aspects of our personalities, such as how neurotic we are.


Research into these effects couldn’t come at a better time. The world is in the midst of a crisis of over-medication, with the US alone buying up 49,000 tonnes of paracetamol every year – equivalent to about 298 paracetamol tablets per person – and the average American consuming $1,200 worth of prescription medications over the same period. And as the global population ages, our drug-lust is set to spiral even further out of control; in the UK, one in 10 people over the age of 65 already takes eight medications every week.

How are all these medications affecting our brains? And should there be warnings on packets?

The info is here.

Tuesday, December 31, 2019

Our Brains Are No Match for Our Technology

Tristan Harris
The New York Times
Originally posted 5 Dec 19

Here is an excerpt:

Our Paleolithic brains also aren’t wired for truth-seeking. Information that confirms our beliefs makes us feel good; information that challenges our beliefs doesn’t. Tech giants that give us more of what we click on are intrinsically divisive. Decades after splitting the atom, technology has split society into different ideological universes.

Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges. The advertising business model built on exploiting this mismatch has created the attention economy. In return, we get the “free” downgrading of humanity.

This leaves us profoundly unsafe. With two billion humans trapped in these environments, the attention economy has turned us into a civilization maladapted for its own survival.

Here’s the good news: We are the only species self-aware enough to identify this mismatch between our brains and the technology we use. Which means we have the power to reverse these trends.

The question is whether we can rise to the challenge, whether we can look deep within ourselves and use that wisdom to create a new, radically more humane technology. “Know thyself,” the ancients exhorted. We must bring our godlike technology back into alignment with an honest understanding of our limits.

This may all sound pretty abstract, but there are concrete actions we can take.

The info is here.

Monday, November 11, 2019

Why a computer will never be truly conscious

Subhash Kak
The Conversation
Originally published October 16, 2019

Here is an excerpt:

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work.

The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then ask their memory whether that pattern could be created by alternate views of an item identified some time earlier.

Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture.
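The contrast Kak draws between adaptive connections and fixed storage can be caricatured in a few lines of code. The sketch below is purely illustrative and not from the article: it shows a Hebbian-style weight update (a standard textbook rule in which a connection strengthens when the neurons on both ends are active together), with the function name, learning rate, and activity values invented for the example.

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each connection in proportion to correlated activity
    between its presynaptic (pre) and postsynaptic (post) neuron."""
    return [w + lr * x * y for w, x, y in zip(weights, pre, post)]

# Two connections onto the same pair of downstream neurons. Only the
# first presynaptic neuron is active, so only its connection grows --
# unlike a fixed memory block, which changes only on explicit writes.
weights = [0.0, 0.0]
for _ in range(5):
    weights = hebbian_update(weights, pre=[1.0, 0.0], post=[1.0, 1.0])

print(weights)  # first connection has strengthened; the second is unchanged
```

The point of the toy example is that "memory" here lives in the weights themselves, reshaped by experience, rather than in addressed storage locations, which is the distinction the excerpt is pressing.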

The info is here.

Saturday, October 5, 2019

Brain-reading tech is coming. The law is not ready to protect us.

Sigal Samuel
Originally posted August 30, 2019

Here is an excerpt:

2. The right to mental privacy

You should have the right to seclude your brain data or to publicly share it.

Ienca emphasized that neurotechnology has huge implications for law enforcement and government surveillance. “If brain-reading devices have the ability to read the content of thoughts,” he said, “in the years to come governments will be interested in using this tech for interrogations and investigations.”

The right to remain silent and the principle against self-incrimination — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

3. The right to mental integrity

You should have the right not to be harmed physically or psychologically by neurotechnology.

BCIs equipped with a “write” function can enable new forms of brainwashing, theoretically enabling all sorts of people to exert control over our minds: religious authorities who want to indoctrinate people, political regimes that want to quash dissent, terrorist groups seeking new recruits.

What’s more, devices like those being built by Facebook and Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth signal, increasing or decreasing the voltage of the current that goes to your brain — thus making you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca said, adding, “A hack like this wouldn’t require that much technological sophistication.”

The info is here.

Wednesday, October 2, 2019

Seven Key Misconceptions about Evolutionary Psychology

Laith Al-Shawaf
Originally published August 20, 2019

Evolutionary approaches to psychology hold the promise of revolutionizing the field and unifying it with the biological sciences. But among both academics and the general public, a few key misconceptions impede their application to psychology and behavior. This essay tackles the most pervasive of these.

Misconception 1: Evolution and Learning Are Conflicting Explanations for Behavior

People often assume that if something is learned, it’s not evolved, and vice versa. This is a misleading way of conceptualizing the issue, for three key reasons.

First, many evolutionary hypotheses are about learning. For example, the claim that humans have an evolved fear of snakes and spiders does not mean that people are born with this fear. Instead, it means that humans are endowed with an evolved learning mechanism that acquires a fear of snakes more easily and readily than other fears. Classic studies in psychology show that monkeys can acquire a fear of snakes through observational learning, and they tend to acquire it more quickly than a similar fear of other objects, such as rabbits or flowers. It is also harder for monkeys to unlearn a fear of snakes than it is to unlearn other fears. As with monkeys, the hypothesis that humans have an evolved fear of snakes does not mean that we are born with this fear. Instead, it means that we learn this fear via an evolved learning mechanism that is biologically prepared to acquire some fears more easily than others.

Second, learning is made possible by evolved mechanisms instantiated in the brain. We are able to learn because we are equipped with neurocognitive mechanisms that enable learning to occur—and these neurocognitive mechanisms were built by evolution. Consider the fact that both children and puppies can learn, but if you try to teach them the same thing—French, say, or game theory—they end up learning different things. Why? Because the dog’s evolved learning mechanisms are different from those of the child. What organisms learn, and how they learn it, depends on the nature of the evolved learning mechanisms housed in their brains.

The info is here.