Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Brain Science.

Friday, March 15, 2024

The consciousness wars: can scientists ever agree on how the mind works?

Mariana Lenharo

Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything.

The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration.

Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between.

The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects.

The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room.
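To give a rough feel for the kind of quantity IIT trades in, here is a toy sketch (my own illustration, not IIT's actual phi calculation): it computes the mutual information between two binary nodes as a crude proxy for how much a system's parts constrain one another beyond what they do independently.

```python
# Toy illustration only: a mutual-information proxy for "integration" in a
# two-node binary system. This is NOT IIT's phi, just a sketch of the kind
# of quantity integrated information theory tries to formalize.
import math

def mutual_information(joint):
    """joint[(a, b)] = P(node A = a, node B = b); returns I(A;B) in bits."""
    pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
    pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two nodes that mostly copy each other (informally, an "integrated" pair).
coupled = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
# Two nodes that ignore each other entirely (no integration).
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(f"coupled pair:     {mutual_information(coupled):.3f} bits")
print(f"independent pair: {mutual_information(independent):.3f} bits")
```

The coupled pair scores above zero and the independent pair scores zero, which is, in miniature, the intuition behind treating consciousness as "integrated information"; GNWT, by contrast, is a claim about broadcasting across a workspace rather than about any single quantity.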
---------------
Here's what the article highlights:
  • Divisions abound: Researchers disagree on the very definition of consciousness, making comparisons between theories difficult. Some focus on subjective experience, while others look at the brain's functions.
  • Testing head-to-head: New research projects are directly comparing competing theories to see which one explains experimental data better. This could be a step towards finding a unifying explanation.
  • Heated debate: The recent critique of one prominent theory, Integrated Information Theory (IIT), shows the depth of the disagreements. Some question its scientific validity, while others defend it as a viable framework.
  • Hope for progress: Despite the disagreements, there's optimism. New research methods and a younger generation of researchers focused on collaboration could lead to breakthroughs in understanding this elusive phenomenon.

Friday, March 1, 2024

AI needs the constraints of the human brain

Danyal Akarca
iai.tv
Originally posted 30 Jan 24

Here is an excerpt:

So, evolution shapes systems that are capable of solving competing problems that are both internal (e.g., how to expend energy) and external (e.g., how to act to survive), but in a way that can be highly efficient, in many cases elegant, and often surprising. But how does this evolutionary story of biological intelligence contrast with the current paradigm of AI?

In some ways, quite directly. Since the 1950s, neural networks were developed as models that were inspired directly by neurons in the brain and the strength of their connections, in addition to many successful architectures of the past being directly motivated by neuroscience experimentation and theory. Yet, AI research in the modern era has occurred with a significant absence of thought of intelligent systems in nature and their guiding principles. Why is this? There are many reasons. But one is that the exponential growth of computing capabilities, enabled by increases of transistors on integrated circuits (observed since the 1950s, known as Moore’s Law), has permitted AI researchers to leverage significant improvements in performance without necessarily requiring extraordinarily elegant solutions. This is not to say that modern AI algorithms are not widely impressive – they are. It is just that the majority of the heavy lifting has come from advances in computing power rather than their engineered design. Consequently, there has been relatively little recent need or interest from AI experts to look to the brain for inspiration.

But the tide is turning. From a hardware perspective, Moore’s law will not continue ad infinitum (at 7 nanometers, transistor channel lengths are now nearing fundamental limits of atomic spacing). We will therefore not be able to leverage ever improving performance delivered by increasingly compact microprocessors. It is likely therefore that we will require entirely new computing paradigms, some of which may be inspired by the types of computations we observe in the brain (the most notable being neuromorphic computing). From a software and AI perspective, it is becoming increasingly clear that – in part due to the reliance on increases to computational power – the AI research field will need to refresh its conceptions as to what makes systems intelligent at all. For example, this will require much more sophisticated benchmarks of what it means to perform at human or super-human performance. In sum, the field will need to form a much richer view of the possible space of intelligent systems, and how artificial models can occupy different places in that space.


Key Points:
  • Evolutionary pressures: Efficient, resource-saving brains are advantageous for survival, leading to optimized solutions for learning, memory, and decision-making.
  • AI's reliance on brute force: Modern AI often achieves performance through raw computing power, neglecting principles like energy efficiency.
  • Shifting AI paradigm: Moore's Law's end and limitations in conventional AI call for exploration of new paradigms, potentially inspired by the brain.
  • Neurobiology's potential: Brain principles like network structure, local learning, and energy trade-offs can inform AI design for efficiency and novel functionality.
  • Embodied AI with constraints: Recent research incorporates space and communication limitations into AI models, leading to features resembling real brains and potentially more efficient information processing (a minimal sketch of this idea follows below).
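A minimal sketch of that last point, assuming nothing about the specific models in the article: add a wiring-cost term (connection strength weighted by the physical distance between units) to an ordinary training loss, so that learning has to trade task accuracy against communication cost, loosely mimicking a constraint real brains face.

```python
# Minimal, illustrative sketch (not the models discussed in the article):
# train a tiny linear network on a regression task while penalizing "wiring
# cost", i.e. connection strength weighted by the distance between units.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_samples = 8, 4, 200
X = rng.normal(size=(n_samples, n_in))
true_W = rng.normal(size=(n_in, n_out))
Y = X @ true_W + 0.1 * rng.normal(size=(n_samples, n_out))

# Give each unit a position in 2-D space and precompute pairwise distances.
pos_in = rng.uniform(size=(n_in, 2))
pos_out = rng.uniform(size=(n_out, 2))
dist = np.linalg.norm(pos_in[:, None, :] - pos_out[None, :, :], axis=-1)

W = np.zeros((n_in, n_out))
lr, lam = 0.01, 0.05   # learning rate and wiring-cost weight (arbitrary choices)

for step in range(2000):
    err = X @ W - Y
    grad_task = X.T @ err / n_samples      # gradient of the squared-error term (up to a constant)
    grad_wire = lam * dist * np.sign(W)    # gradient of lam * sum(dist * |W|)
    W -= lr * (grad_task + grad_wire)

mse = float(np.mean((X @ W - Y) ** 2))
wiring = float(np.sum(dist * np.abs(W)))
print(f"task error (MSE): {mse:.3f}   wiring cost: {wiring:.2f}")
```

With the penalty weight set to zero the network simply fits the task; raising it shrinks long-range connections first, which is the general flavor of the embodied-constraints results described above.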

Monday, January 1, 2024

Cyborg computer with living brain organoid aces machine learning tests

Loz Blain
New Atlas
Originally posted 12 DEC 23

Here are two excerpts:

Now, Indiana University researchers have taken a slightly different approach by growing a brain "organoid" and mounting that on a silicon chip. The difference might seem academic, but by allowing the stem cells to self-organize into a three-dimensional structure, the researchers hypothesized that the resulting organoid might be significantly smarter, that the neurons might exhibit more "complexity, connectivity, neuroplasticity and neurogenesis" if they were allowed to arrange themselves more like the way they normally do.

So they grew themselves a little brain ball organoid, less than a nanometer in diameter, and they mounted it on a high-density multi-electrode array – a chip that's able to send electrical signals into the brain organoid, as well as reading electrical signals that come out due to neural activity.

They called it "Brainoware" – which they probably meant as something adjacent to hardware and software, but which sounds far too close to "BrainAware" for my sensitive tastes, and evokes the perpetual nightmare of one of these things becoming fully sentient and understanding its fate.

(cut)

And finally, much like the team at Cortical Labs, this team really has no clear idea what to do about the ethics of creating micro-brains out of human neurons and wiring them into living cyborg computers. “As the sophistication of these organoid systems increases, it is critical for the community to examine the myriad of neuroethical issues that surround biocomputing systems incorporating human neural tissue," wrote the team. "It may be decades before general biocomputing systems can be created, but this research is likely to generate foundational insights into the mechanisms of learning, neural development and the cognitive implications of neurodegenerative diseases."


Here is my summary:

There is a new type of computer chip that uses living brain cells. The brain cells are grown from human stem cells and are organized into a ball-like structure called an organoid. The organoid is mounted on a chip that can send electrical signals to the brain cells and read the electrical signals that the brain cells produce. The researchers found that the organoid could learn to perform tasks such as speech recognition and math prediction much faster than traditional computers. They believe that this new type of computer chip could have many applications, such as in artificial intelligence and medical research. However, there are also some ethical concerns about using living brain cells in computers.

Monday, May 22, 2023

New evaluation guidelines for dementia

The Monitor on Psychology
Vol. 54, No. 3
Print Version: Page 40

Updated APA guidelines are now available to help psychologists evaluate patients with dementia and their caregivers with accuracy and sensitivity and learn about the latest developments in dementia science and practice.

APA Guidelines for the Evaluation of Dementia and Age-Related Cognitive Change (PDF, 992KB) was released in 2021 and reflects updates in the field since the last set of guidelines, released in 2011, said geropsychologist and University of Louisville professor Benjamin T. Mast, PhD, ABPP, who chaired the task force that produced the guidelines.

“These guidelines aspire to help psychologists gain not only a high level of technical expertise in understanding the latest science and procedures for evaluating dementia,” he said, “but also have a high level of sensitivity and empathy for those undergoing a life change that can be quite challenging.”

Major updates since 2011 include:

Discussion of new DSM terminology. The new guidelines discuss changes in dementia diagnosis and diagnostic criteria reflected in the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition). In particular, the DSM-5 changed the term “dementia” to “major neurocognitive disorder,” and “mild cognitive impairment” to “mild neurocognitive disorder.” As was true with earlier nomenclature, providers and others amend these terms depending on the cause or causes of the disorder, for example, “major neurocognitive disorder due to traumatic brain injury.” That said, the terms “dementia” and “mild cognitive impairment” are still widely used in medicine and mental health care.

Discussion of new research guidelines. The new guidelines also discuss research advances in the field, in particular the use of biomarkers to detect various forms of dementia. Examples are the use of amyloid imaging—PET scans with a radio tracer that selectively binds to amyloid plaques—and analysis of amyloid and tau in cerebrospinal fluid. While these techniques are still mainly used in major academic medical centers, it is important for clinicians to know about them because they may eventually be used in clinical practice, said Bonnie Sachs, PhD, ABPP, an associate professor and neuropsychologist at Wake Forest University School of Medicine. “These developments change the way we think about things like Alzheimer’s disease, because they show there is a long preclinical asymptomatic phase before people start to show memory problems,” she said.

Monday, November 7, 2022

Neural processes in antecedent anxiety modulate risk-taking behavior

Nash, K., Leota, J., & Tran, A. (2021). 
Scientific Reports, 11.

Abstract

Though real-world decisions are often made in the shadow of economic uncertainties, work problems, relationship troubles, existential angst, etc., the neural processes involved in this common experience remain poorly understood. Here, we randomly assigned participants (N = 97) to either a poignant experience of forecasted economic anxiety or a no-anxiety control condition. Using electroencephalography (EEG), we then examined how source-localized, anxiety-specific neural activation modulated risky decision making and strategic behavior in the Balloon Analogue Risk Task (BART). Previous research demonstrates opposing effects of anxiety on risk-taking, leading to contrasting predictions. On the one hand, activity in the dorsomedial PFC/anterior cingulate cortex (ACC) and anterior insula, brain regions linked with anxiety and sensitivity to risk, should mediate the effect of economic anxiety on increased risk-averse decision-making. On the other hand, activation in the ventromedial PFC, a brain region important in emotion regulation and subjective valuation in decision-making, should mediate the effect of economic anxiety on increased risky decision-making. Results revealed evidence related to both predictions. Additionally, anxiety-specific activation in the dmPFC/ACC and the anterior insula were associated with disrupted learning across the task. These results shed light on the neurobiology of antecedent anxiety and risk-taking and provide potential insight into understanding how real-world anxieties can impact decision-making processes. 
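For readers unfamiliar with the task: in the BART, each pump of a virtual balloon adds to the trial's earnings, but if the balloon bursts those earnings are lost, so the average number of pumps indexes risk-taking. The toy simulation below is a generic illustration of that trade-off; the burst rule and payoff values are assumptions, not the study's parameters.

```python
# Toy simulation of the Balloon Analogue Risk Task (BART). Illustrative only:
# the burst rule and payoffs are assumptions, not the parameters of the study.
import random

def run_trial(target_pumps, max_pumps=128, cents_per_pump=5):
    """Pump up to `target_pumps` times; the balloon bursts at a random point."""
    burst_point = random.randint(1, max_pumps)   # pump on which the balloon would burst
    if target_pumps >= burst_point:
        return 0                                 # burst: the trial's earnings are lost
    return target_pumps * cents_per_pump         # cashed out safely

def average_earnings(target_pumps, n_trials=20000):
    random.seed(1)
    return sum(run_trial(target_pumps) for _ in range(n_trials)) / n_trials

# A risk-averse strategy (few pumps) versus progressively riskier ones.
for pumps in (16, 32, 64, 96):
    print(f"{pumps:3d} pumps per balloon -> average {average_earnings(pumps):.1f} cents")
```

Cautious strategies cash out small amounts, overly bold ones burst most balloons, and earnings peak somewhere in between, which is why shifts toward fewer or more pumps can be read as risk-averse or risk-seeking behavior.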

Discussion

Rarely, in everyday life, must we make a series of decisions as anxious events flit in and out of awareness. Rather, we often face looming anxieties that spill over into the decisions we make. Here, we experimentally induced this real-world experience and examined how antecedent anxiety and the accompanying neural processes modulated decision-making in a risk-taking task. Based on past research demonstrating that anxiety can have diverging effects on risk-taking, we formulated contrasting predictions. An anxious experience should modulate dmPFC/dACC and anterior insula activity, brain regions tightly linked with anxious worry, and this anxiety-specific activation should predict more risk-averse decisions in the BART. Alternatively, anxiety should modulate activation in the vmPFC, a brain region important in emotion regulation and decision-making, and this anxiety-specific activation should then predict more risk-seeking decisions in the BART, through disrupted cognitive control or heightened sensitivity to reward.

We found evidence related to both predictions. On the one hand, right anterior insula activation specific to antecedent anxiety predicted decreased risk-taking. This finding is consistent with considerable research on the neural mechanisms of risk and the limited prior research on incidental anxiety and decision-making. For example, the threat of shock during a decision-making task increased the anterior insula’s coding of negative evaluations, and this activation predicted an increased rejection rate of risky lottery decisions. For the first time, we extend these prior results to antecedent anxiety. The experience of economic anxiety is a poignant and difficult-to-regulate event. Presumably, right anterior insula activation caused by the economic anxiety manipulation sustained a more cautious approach to negative outcomes that trickled down to risk-averse decision-making.

Friday, July 1, 2022

Tech firms are making computer chips with human cells – is it ethical?

J. Savulescu, C. Gyngell, & T. Sawai
The Conversation
Originally published 24 MAY 22

Here is an excerpt:

While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of making 15 quintillion operations per second.

This can only be matched today by massive supercomputers using vast amounts of energy. The human brain only uses about 20 watts of energy, or about the same as it takes to power a lightbulb. It would take 34 coal-powered plants generating 500 megawatts each to store the same amount of data contained in one human brain in modern data storage centres.
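Taking the article's figures at face value, a rough operations-per-watt comparison conveys the scale of the gap; the supercomputer numbers below are illustrative assumptions added for contrast, not figures from the article.

```python
# Back-of-the-envelope arithmetic using the figures quoted above. The
# supercomputer numbers are illustrative assumptions for comparison only.
brain_ops_per_s = 15e18   # "15 quintillion operations per second" (article's figure)
brain_watts = 20          # article's figure

# Assumed figures for an exascale-class supercomputer (rough, for illustration).
super_ops_per_s = 1e18
super_watts = 20e6

brain_eff = brain_ops_per_s / brain_watts
super_eff = super_ops_per_s / super_watts

print(f"brain:         {brain_eff:.1e} ops per watt")
print(f"supercomputer: {super_eff:.1e} ops per watt")
print(f"efficiency ratio: roughly {brain_eff / super_eff:,.0f}x")
```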

Companies do not need brain tissue samples from donors, but can simply grow the neurons they need in the lab from ordinary skin cells using stem cell technologies. Scientists can engineer cells from blood samples or skin biopsies into a type of stem cell that can then become any cell type in the human body.

However, this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?

People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?

If neural computers become common, we will grapple with other tissue donation issues. In Cortical Lab’s research with Dishbrain, they found human neurons were faster at learning than neurons from mice. Might there also be differences in performance depending on whose neurons are used? Might Apple and Google be able to make lightning-fast computers using neurons from our best and brightest today? Would someone be able to secure tissues from deceased geniuses like Albert Einstein to make specialised limited-edition neural computers?

Such questions are highly speculative but touch on broader themes of exploitation and compensation. Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.

Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including recently to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?

Saturday, February 26, 2022

Experts Are Ringing Alarms About Elon Musk’s Brain Implants

Noah Kirsch
Daily Beast
Posted 25 Jan 2021

Here is an excerpt:

“These are very niche products—if we’re really only talking about developing them for paralyzed individuals—the market is small, the devices are expensive,” said Dr. L. Syd Johnson, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University.

“If the ultimate goal is to use the acquired brain data for other devices, or use these devices for other things—say, to drive cars, to drive Teslas—then there might be a much, much bigger market,” she said. “But then all those human research subjects—people with genuine needs—are being exploited and used in risky research for someone else’s commercial gain.”

In interviews with The Daily Beast, a number of scientists and academics expressed cautious hope that Neuralink will responsibly deliver a new therapy for patients, though each also outlined significant moral quandaries that Musk and company have yet to fully address.

Say, for instance, a clinical trial participant changes their mind and wants out of the study, or develops undesirable complications. “What I’ve seen in the field is we’re really good at implanting [the devices],” said Dr. Laura Cabrera, who researches neuroethics at Penn State. “But if something goes wrong, we really don't have the technology to explant them” and remove them safely without inflicting damage to the brain.

There are also concerns about “the rigor of the scrutiny” from the board that will oversee Neuralink’s trials, said Dr. Kreitmair, noting that some institutional review boards “have a track record of being maybe a little mired in conflicts of interest.” She hoped that the high-profile nature of Neuralink’s work will ensure that they have “a lot of their T’s crossed.”

The academics detailed additional unanswered questions: What happens if Neuralink goes bankrupt after patients already have devices in their brains? Who gets to control users’ brain activity data? What happens to that data if the company is sold, particularly to a foreign entity? How long will the implantable devices last, and will Neuralink cover upgrades for the study participants whether or not the trials succeed?

Dr. Johnson, of SUNY Upstate, questioned whether the startup’s scientific capabilities justify its hype. “If Neuralink is claiming that they’ll be able to use their device therapeutically to help disabled persons, they’re overpromising because they’re a long way from being able to do that.”

Neuralink did not respond to a request for comment as of publication time.

Saturday, February 19, 2022

Meta-analysis of human prediction error for incentives, perception, cognition, and action

Corlett, P.R., Mollick, J.A. & Kober, H.
Neuropsychopharmacol. (2022). 
https://doi.org/10.1038/s41386-021-01264-3

Abstract

Prediction errors (PEs) are a keystone for computational neuroscience. Their association with midbrain neural firing has been confirmed across species and has inspired the construction of artificial intelligence that can outperform humans. However, there is still much to learn. Here, we leverage the wealth of human PE data acquired in the functional neuroimaging setting in service of a deeper understanding, using an MKDA (multilevel kernel density analysis) meta-analysis. Studies were identified with Google Scholar, and we included studies with healthy adult participants that reported activation coordinates corresponding to PEs published between 1999 and 2018. Across 264 PE studies that have focused on reward, punishment, action, cognition, and perception, consistent with domain-general theoretical models of prediction error, we found midbrain PE signals during cognitive and reward learning tasks, and an insula PE signal for perceptual, social, cognitive, and reward prediction errors. There was evidence for domain-specific error signals––in the visual hierarchy during visual perception, and in the dorsomedial prefrontal cortex during social inference. We assessed bias following prior neuroimaging meta-analyses and used family-wise error correction for multiple comparisons. This organization of computation by region will be invaluable in building and testing mechanistic models of cognitive function and dysfunction in machines, humans, and other animals. Limitations include small sample sizes and ROI masking in some included studies, which we addressed by weighting each study by sample size, and directly comparing whole brain vs. ROI-based results.
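The logic of an MKDA-style coordinate meta-analysis can be sketched briefly: each study contributes an indicator map marking locations near its reported peaks, the maps are combined with study weights, and the observed density is compared with what randomly placed peaks would produce. The one-dimensional toy below uses made-up coordinates, weights, and kernel radius, and is meant only to show that logic, not to reproduce the authors' pipeline.

```python
# Schematic 1-D illustration of an MKDA-style coordinate density analysis.
# Coordinates, weights, and kernel radius are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
grid = np.arange(0, 100)   # 1-D stand-in for brain voxels
radius = 5                 # "kernel": mark voxels within 5 units of a reported peak

# Reported peak locations for a handful of toy "studies", weighted by sqrt(N).
studies = {"A": ([42, 47], 20), "B": ([45], 35), "C": ([44, 80], 12), "D": ([10], 25)}

def indicator_map(peaks):
    """1 where the voxel lies within `radius` of any reported peak, else 0."""
    return np.any(np.abs(grid[None, :] - np.asarray(peaks)[:, None]) <= radius, axis=0)

weights = np.array([np.sqrt(n) for _, n in studies.values()])
maps = np.array([indicator_map(p) for p, _ in studies.values()])
density = (weights[:, None] * maps).sum(axis=0) / weights.sum()

# Null distribution: repeat with each study's peaks relocated at random.
null_max = []
for _ in range(2000):
    shuffled = [indicator_map(rng.integers(0, 100, size=len(p))) for p, _ in studies.values()]
    null_max.append(((weights[:, None] * np.array(shuffled)).sum(axis=0) / weights.sum()).max())

threshold = np.quantile(null_max, 0.95)   # family-wise threshold from the null maxima
print(f"peak density {density.max():.2f} at voxel {int(density.argmax())}")
print("voxels exceeding the family-wise threshold:", np.where(density > threshold)[0])
```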

Discussion

There appeared to be regionally compartmentalized PEs for primary and secondary rewards. Primary rewards elicited PEs in the dorsal striatum and amygdala, while secondary reward PEs were in ventral striatum. This is consistent with the representational transition that occurs with learning. We also found separable PEs for valence domains: caudal regions of the caudate-putamen are involved in the learning of safety signals and avoidance learning, more anterior striatum is selective for rewards, while more posterior is selective for losses. We found posterior midbrain aversive PE, consistent with preclinical findings that dopamine neurons––which respond to negative valence––are located more posteriorly in the midbrain and project to medial prefrontal regions. Additionally, we found both appetitive and aversive PEs in the amygdala, consistent with animal studies. The presence of both appetitive and aversive PE signals in the amygdala is consistent with its expanding role regulating learning based on surprise and uncertainty rather than fear per se. 

Perhaps conspicuous in its absence, given preclinical work, is the hippocampus, which is often held to be a nexus for reward PE, memory PE, and perceptual PE. This may be because the hippocampus is constantly and commonly engaged throughout task performance. Its PEs may not be resolved by the sluggish BOLD response, which is based on local field potentials and may represent the projections into a region (and therefore the striatal PE signals we observed may be the culmination of the processing in CA1, CA3, and subiculum). Furthermore, we have only recently been able to image subfields of the hippocampus (with higher field strengths and more rapid sequences); as higher resolution PE papers accrue we will revisit the meta-analysis of PEs.

Tuesday, January 18, 2022

MIT Researchers Just Discovered an AI Mimicking the Brain on Its Own

Eric James Beyer
Interesting Engineering
Originally posted 18 DEC 21

Here is an excerpt:

In the wake of these successes, Martin began to wonder whether or not the same principle could be applied to higher-level cognitive functions like language processing. 

“I said, let’s just look at neural networks that are successful and see if they’re anything like the brain. My bet was that it would work, at least to some extent.”

To find out, Martin and colleagues compared data from 43 artificial neural network language models against fMRI and ECoG neural recordings taken while subjects listened to or read words as part of a text. The AI models the group surveyed covered all the major classes of available neural network approaches for language-based tasks. Some of them were more basic embedding models like GloVe, which clusters semantically similar words together in groups. Others, like the models known as GPT and BERT, were far more complex. These models are trained to predict the next word in a sequence or predict a missing word within a certain context, respectively. 

“The setup itself becomes quite simple,” Martin explains. “You just show the same stimuli to the models that you show to the subjects [...]. At the end of the day, you’re left with two matrices, and you test if those matrices are similar.”
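One standard way to test whether two such matrices are similar is representational similarity analysis: condense each matrix into its pattern of pairwise distances between stimuli, then correlate the two patterns. The sketch below uses random placeholder data and shows only the shape of the comparison, not necessarily the exact metric used in the study.

```python
# Sketch of one standard way to compare a model-activation matrix with a
# neural-recording matrix: representational similarity analysis (RSA).
# Random placeholder data; not the study's actual analysis pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 50                                   # e.g. 50 sentences shown to model and subjects
model_acts = rng.normal(size=(n_stimuli, 300))   # stimuli x model features
# Fake "neural" data that partly reflects the model's representation, plus noise.
neural = model_acts @ rng.normal(size=(300, 64)) + 5 * rng.normal(size=(n_stimuli, 64))

# Condense each matrix into the pattern of pairwise distances between stimuli.
model_rdm = pdist(model_acts, metric="correlation")
neural_rdm = pdist(neural, metric="correlation")

# If the two representations are organized alike, their distance patterns correlate.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain representational similarity: rho = {rho:.2f} (p = {p:.1e})")
```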

And the results? 

“I think there are three-and-a-half major findings here,” Schrimpf says with a laugh. “I say ‘and a half’ because the last one we still don’t fully understand.”

Machine learning that mirrors the brain

The finding that sticks out to Martin most immediately is that some of the models predict neural data extremely well. In other words, regardless of how good a model was at performing a task, some of them appear to resemble the brain’s cognitive mechanics for language processing. Intriguingly, the team at MIT identified the GPT model variants as the most brain-like out of the group they looked at.

Sunday, November 14, 2021

A brain implant that zaps away negative thoughts

Nicole Karlis
Salon.com
Originally published 14 OCT 21

Here is an excerpt:

Still, the prospect of clinicians manipulating and redirecting one's thoughts, using electricity, raises potential ethical conundrums for researchers — and philosophical conundrums for patients. 

"A person implanted with a closed-loop system to target their depressive episodes could find themselves unable to experience some depressive phenomenology when it is perfectly normal to experience this outcome, such as a funeral," said Frederic Gilbert Ph.D. Senior Lecturer in Ethics at the University of Tasmania, in an email to Salon. "A system program to administer a therapeutic response when detecting a specific biomarker will not capture faithfully the appropriateness of some context; automated invasive systems implanted in the brain might constantly step up in your decision-making . . . as a result, it might compromise you as a freely thinking agent."

Gilbert added there is the potential for misuse — and that raises novel moral questions. 

"There are potential degrees of misuse of some of the neuro-data pumping out of the brain (some believe these neuro-data may be our hidden and secretive thoughts)," Gilbert said. "The possibility of biomarking neuronal activities with AI introduces the plausibility to identify a large range of future applications (e.g. predicting aggressive outburst, addictive impulse, etc). It raises questions about the moral, legal and medical obligations to prevent foreseeable and harmful behaviour."

For these reasons, Gilbert added, it's important "at all costs" to "keep human control in the loop," in both activation and control of one's own neuro-data. 

Saturday, October 23, 2021

Decision fatigue: Why it’s so hard to make up your mind these days, and how to make it easier

Stacy Colino
The Washington Post
Originally posted 22 Sept 21

Here is an excerpt:

Decision fatigue is more than just a feeling; it stems in part from changes in brain function. Research using functional magnetic resonance imaging has shown that there’s a sweet spot for brain function when it comes to making choices: When people were asked to choose from sets of six, 12 or 24 items, activity was highest in the striatum and the anterior cingulate cortex — both of which coordinate various aspects of cognition, including decision-making and impulse control — when the people faced 12 choices, which was perceived as “the right amount.”

Decision fatigue may make it harder to exercise self-control when it comes to eating, drinking, exercising or shopping. “Depleted people become more passive, which becomes bad for their decision-making,” says Roy Baumeister, a professor of psychology at the University of Queensland in Australia and author of  “Willpower: Rediscovering the Greatest Human Strength.” “They can be more impulsive. They may feel emotions more strongly. And they’re more susceptible to bias and more likely to postpone decision-making.”

In laboratory studies, researchers asked people to choose from an array of consumer goods or college course options or to simply think about the same options without making choices. They found that the choice-makers later experienced reduced self-control, including less physical stamina, greater procrastination and lower performance on tasks involving math calculations; the choice-contemplators didn’t experience these depletions.

Having insufficient information about the choices at hand may influence people’s susceptibility to decision fatigue. Experiencing high levels of stress and general fatigue can, too, Bufka says. And if you believe that the choices you make say something about who you are as a person, that can ratchet up the pressure, increasing your chances of being vulnerable to decision fatigue.

The suggestions include:

1. Sleep well
2. Make some choices automatic
3. Enlist a choice advisor
4. Give expectations a reality check
5. Pace yourself
6. Pay attention to feelings

Thursday, July 8, 2021

Free Will and Neuroscience: Decision Times and the Point of No Return

Alfred Mele
In Free Will, Causality, & Neuroscience
Chapter 4

Here are some excerpts:

Decisions to do things, as I conceive of them, are momentary actions of forming an intention to do them. For example, to decide to flex my right wrist now is to perform a (nonovert) action of forming an intention to flex it now (Mele 2003, ch. 9). I believe that Libet understands decisions in the same way. Some of our decisions and intentions are for the nonimmediate future and others are not. I have an intention today to fly to Brussels three days from now, and I have an intention now to click my “save” button now. The former intention is aimed at action three days in the future. The latter intention is about what to do now. I call intentions of these kinds, respectively, distal and proximal intentions (Mele 1992, pp. 143–44, 158, 2009, p. 10), and I make the same distinction in the sphere of decisions to act. Libet studies proximal intentions (or decisions or urges) in particular.

(cut)

Especially in the case of the study now under discussion, readers unfamiliar with Libet-style experiments may benefit from a short description of my own experience as a participant in such an experiment (see Mele 2009, pp. 34–36). I had just three things to do: watch a Libet clock with a view to keeping track of when I first became aware of something like a proximal urge, decision, or intention to flex; flex whenever I felt like it (many times over the course of the experiment); and report, after each flex, where I believed the hand was on the clock at the moment of first awareness. (I reported this belief by moving a cursor to a point on the clock. The clock was very fast; it made a complete revolution in about 2.5 seconds.) Because I did not experience any proximal urges, decisions, or intentions to flex, I hit on the strategy of saying “now!” silently to myself just before beginning to flex. This is the mental event that I tried to keep track of with the assistance of the clock. I thought of the “now!” as shorthand for the imperative “flex now!” – something that may be understood as an expression of a proximal decision to flex.

Why did I say "now!" exactly when I did? On any given trial, I had before me a string of equally good moments for a "now!"-saying, and I arbitrarily picked one of the moments. But what led me to pick the moment I picked? The answer offered by Schurger et al. is that random noise crossed a decision threshold then. And they locate the time of the crossing very close to the onset of muscle activity – about 100 ms before it (pp. E2909, E2912). They write: "The reason we do not experience the urge to move as having happened earlier than about 200 ms before movement onset [referring to Libet's participants' reported W time] is simply because, at that time, the neural decision to move (crossing the decision threshold) has not yet been made" (E2910). If they are right, this is very bad news for Libet. His claim is that, in his experiments, decisions are made well before the average reported W time: −200 ms. (In a Libet-style experiment conducted by Schurger et al., average reported W time is −150 ms [p. E2905].) As I noted, if relevant proximal decisions are not made before W, Libet's argument for the claim that they are made unconsciously fails.
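The threshold-crossing idea is easy to simulate. The sketch below runs a generic leaky, noisy accumulator (a simplified stand-in for Schurger and colleagues' stochastic accumulator model, with made-up parameters). The first threshold crossing plays the role of the "neural decision", with movement assumed to follow about 100 ms later; averaging the activity leading up to the crossing then produces a slow, readiness-potential-like ramp out of noise alone.

```python
# Schematic simulation of the threshold-crossing idea (a simplified stand-in
# for Schurger et al.'s leaky stochastic accumulator; all parameters made up).
import numpy as np

rng = np.random.default_rng(0)
dt, leak, drift, noise_sd, threshold = 0.001, 0.5, 0.1, 0.1, 0.3
motor_delay_ms = 100   # assumed lag between threshold crossing and muscle onset

def one_trial(max_s=20.0):
    """Integrate leaky, noisy evidence until it first crosses the threshold."""
    n = int(max_s / dt)
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = x[t-1] + dt * (drift - leak * x[t-1]) + noise_sd * np.sqrt(dt) * rng.normal()
        if x[t] >= threshold:
            return t * dt, x[:t+1]
    return None, x

waits, traces = [], []
for _ in range(300):
    t_cross, trace = one_trial()
    if t_cross is not None:
        waits.append(t_cross)
        traces.append(trace)

print(f"median wait before the 'decision' (threshold crossing): {np.median(waits):.2f} s")
print(f"assumed decision-to-movement lag: {motor_delay_ms} ms")

# Averaging the last second of activity, aligned to the crossing, yields a slow
# ramp: a readiness-potential-like shape that emerges from noise alone.
window = int(1.0 / dt)
aligned = np.array([tr[-window:] for tr in traces if len(tr) >= window])
print("mean activity over the final second (5 evenly spaced samples):",
      np.round(aligned.mean(axis=0)[::window // 5][:5], 3))
```

On this picture the decision sits only about a tenth of a second before movement, which is why Schurger and colleagues read the earlier part of the readiness potential as accumulated noise rather than as an unconscious decision already made.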

Thursday, February 4, 2021

Robust inference of positive selection on regulatory sequences in the human brain

J. Liu & M. Robinson-Rechavi
Science Advances  27 Nov 2020:
Vol. 6, no. 48, eabc9863

Abstract

A longstanding hypothesis is that divergence between humans and chimpanzees might have been driven more by regulatory-level adaptations than by protein sequence adaptations. This has especially been suggested for regulatory adaptations in the evolution of the human brain. We present a new method to detect positive selection on transcription factor binding sites on the basis of measuring predicted affinity change with a machine learning model of binding. Unlike other methods, this approach requires neither defining a priori neutral sites nor detecting accelerated evolution, thus removing major sources of bias. We scanned for signals of positive selection at CTCF binding sites in 29 human and 11 mouse tissues or cell types. We found that human brain–related cell types have the highest proportion of positive selection. This result is consistent with the view that adaptive evolution of gene regulation has played an important role in the evolution of the human brain.
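The quantity at the heart of the method is the predicted change in binding affinity between an ancestral and a derived sequence. In the toy sketch below, a made-up position weight matrix stands in for the paper's machine-learning model of CTCF binding, and the sequences are hypothetical; the point is only to show what "predicted affinity change" means operationally.

```python
# Toy illustration of "predicted affinity change" between an ancestral and a
# derived binding-site sequence. A made-up position weight matrix stands in
# for the paper's machine-learning model of CTCF binding.
import math

# Made-up per-position base probabilities for a 6-bp motif.
pwm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
]
background = 0.25

def affinity_score(seq):
    """Log-odds score of the sequence under the motif model versus background."""
    return sum(math.log2(pwm[i][base] / background) for i, base in enumerate(seq))

ancestral = "ACGGAT"   # hypothetical ancestral (e.g. chimpanzee-like) allele
derived = "ACGGAA"     # hypothetical human allele (T -> A at the last position)

delta = affinity_score(derived) - affinity_score(ancestral)
print(f"ancestral score: {affinity_score(ancestral):+.2f}")
print(f"derived score:   {affinity_score(derived):+.2f}")
print(f"predicted affinity change (derived - ancestral): {delta:+.2f}")
```

Across many sites, an excess of changes that consistently shift predicted affinity in one direction is the kind of signal the method treats as evidence of positive selection.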

Summary:

With only 1 percent difference, the human and chimpanzee protein-coding genomes are remarkably similar. Understanding the biological features that make us human is part of a fascinating and intensely debated line of research. Researchers have developed a new approach to pinpoint adaptive human-specific changes in the way genes are regulated in the brain.

Friday, January 31, 2020

Strength of conviction won’t help to persuade when people disagree

Press release
ucl.ac.uk
Originally posted 16 Dec 19

The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people’s opinions.

“We found that when people disagree, their brains fail to encode the quality of the other person’s opinion, giving them less reason to change their mind,” said the study’s senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).

For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner’s assessment and wagers, and finally were asked to submit a final wager.

The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.

Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people’s wagers, even if the disagreeing partner had placed a high wager.

The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person’s beliefs into one’s own. Brain activity differed depending on the strength of the partner’s wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner’s wager and brain activity in the pMFC region.

The info is here.

Tuesday, November 5, 2019

Moral Enhancement: A Realistic Approach

Greg Conan
British Medical Journal Blogs
Originally published August 29, 2019

Here is an excerpt:

If you could take a pill to make yourself a better person, would you do it? Could you justifiably make someone else do it, even if they do not want to?

When presented so simplistically, the idea might seem unrealistic or even impossible. The concepts of “taking a pill” and “becoming a better person” seem to belong to different categories. But many of the traits commonly considered to make one a “good person”—such as treating others fairly and kindly without violence—are psychological traits strongly influenced by neurobiology, and neurobiology can be changed using medicine. So when and how, if ever, should medicine be used to improve moral character?

Moral bioenhancement (MBE), the concept of improving moral character using biomedical technology, has fascinated me for years—especially once I learned that it has been hotly debated in the bioethics literature since 2008. I have greatly enjoyed diving into the literature to learn about how the concept has been analyzed and presented. Much of the debate has focused on its most abstract topics, like defining its terms and relating MBE to freedom. Although my fondness for analytic philosophy means that I cannot condemn anyone for working to examine ideas with maximum clarity and specificity, any MBE proponent who actually wants MBE to be implemented must focus on realistic methods.

The info is here.

Thursday, October 31, 2019

Scientists 'may have crossed ethical line' in growing human brains

Ian Sample
The Guardian
Originally posted October 20, 2019

Neuroscientists may have crossed an “ethical rubicon” by growing lumps of human brain in the lab, and in some cases transplanting the tissue into animals, researchers warn.

The creation of mini-brains or brain “organoids” has become one of the hottest fields in modern neuroscience. The blobs of tissue are made from stem cells and, while they are only the size of a pea, some have developed spontaneous brain waves, similar to those seen in premature babies.

Many scientists believe that organoids have the potential to transform medicine by allowing them to probe the living brain like never before. But the work is controversial because it is unclear where it may cross the line into human experimentation.

On Monday, researchers will tell the world’s largest annual meeting of neuroscientists that some scientists working on organoids are “perilously close” to crossing the ethical line, while others may already have done so by creating sentient lumps of brain in the lab.

“If there’s even a possibility of the organoid being sentient, we could be crossing that line,” said Elan Ohayon, the director of the Green Neuroscience Laboratory in San Diego, California. “We don’t want people doing research where there is potential for something to suffer.”

The info is here.

Tuesday, October 29, 2019

Elon Musk's AI Project to Replicate the Human Brain Receives $1B from Microsoft

Anthony Cuthbertson
The Independent
Originally posted July 23, 2019

Microsoft has invested $1 billion in the Elon Musk-founded artificial intelligence venture that plans to mimic the human brain using computers.

OpenAI said the investment would go towards its efforts of building artificial general intelligence (AGI) that can rival and surpass the cognitive capabilities of humans.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said OpenAI CEO Sam Altman.

“Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.”

The two firms will jointly build AI supercomputing technologies, which OpenAI plans to commercialise through Microsoft and its Azure cloud computing business.

The info is here.

Sunday, October 27, 2019

Language Is the Scaffold of the Mind

Anna Ivanova
nautil.us
Originally posted September 26, 2019

Can you imagine a mind without language? More specifically, can you imagine your mind without language? Can you think, plan, or relate to other people if you lack words to help structure your experiences?

Many great thinkers have drawn a strong connection between language and the mind. Oscar Wilde called language “the parent, and not the child, of thought”; Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world”; and Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.”

After all, language is what makes us human, what lies at the root of our awareness, our intellect, our sense of self. Without it, we cannot plan, cannot communicate, cannot think. Or can we?

Imagine growing up without words. You live in a typical industrialized household, but you are somehow unable to learn the language of your parents. That means that you do not have access to education; you cannot properly communicate with your family other than through a set of idiosyncratic gestures; you never get properly exposed to abstract ideas such as “justice” or “global warming.” All you know comes from direct experience with the world.

It might seem that this scenario is purely hypothetical. There aren’t any cases of language deprivation in modern industrialized societies, right? It turns out there are. Many deaf children born into hearing families face exactly this issue. They cannot hear and, as a result, do not have access to their linguistic environment. Unless the parents learn sign language, the child’s language access will be delayed and, in some cases, missing completely.

The info is here.


Sunday, October 13, 2019

A Successful Artificial Memory Has Been Created

Robert Martone
Scientific American
Originally posted August 27, 2019

Here is the conclusion:

There are legitimate motives underlying these efforts. Memory has been called “the scribe of the soul,” and it is the source of one’s personal history. Some people may seek to recover lost or partially lost memories. Others, such as those afflicted with post-traumatic stress disorder or chronic pain, might seek relief from traumatic memories by trying to erase them.

The methods used here to create artificial memories will not be employed in humans anytime soon: none of us are transgenic like the animals used in the experiment, nor are we likely to accept multiple implanted fiber-optic cables and viral injections. Nevertheless, as technologies and strategies evolve, the possibility of manipulating human memories becomes all the more real. And the involvement of military agencies such as DARPA invariably renders the motivations behind these efforts suspect. Are there things we all need to be afraid of or that we must or must not do? The dystopian possibilities are obvious.

Creating artificial memories brings us closer to learning how memories form and could ultimately help us understand and treat dreadful diseases such as Alzheimer’s. Memories, however, cut to the core of our humanity, and we need to be vigilant that any manipulations are approached ethically.

The info is here.

Sunday, September 29, 2019

The brain, the criminal and the courts

[Graph: mentions of neuroscience in US judicial opinions rose from 101 in 2005 to more than 400 in 2015, across capital homicides, noncapital homicides, and other felonies.]
Eryn Brown
knowablemagazine.org
Originally posted August 30, 2019

Here is an excerpt:

It remains to be seen if all this research will yield actionable results. In 2018, Hoffman, who has been a leader in neurolaw research, wrote a paper discussing potential breakthroughs and dividing them into three categories: near term, long term and “never happening.” He predicted that neuroscientists are likely to improve existing tools for chronic pain detection in the near future, and in the next 10 to 50 years he believes they’ll reliably be able to detect memories and lies, and to determine brain maturity.

But brain science will never gain a full understanding of addiction, he suggested, or lead courts to abandon notions of responsibility or free will (a prospect that gives many philosophers and legal scholars pause).

Many realize that no matter how good neuroscientists get at teasing out the links between brain biology and human behavior, applying neuroscientific evidence to the law will always be tricky. One concern is that brain studies ordered after the fact may not shed light on a defendant’s motivations and behavior at the time a crime was committed — which is what matters in court. Another concern is that studies of how an average brain works do not always provide reliable information on how a specific individual’s brain works.

“The most important question is whether the evidence is legally relevant. That is, does it help answer a precise legal question?” says Stephen J. Morse, a scholar of law and psychiatry at the University of Pennsylvania. He is in the camp who believe that neuroscience will never revolutionize the law, because “actions speak louder than images,” and that in a legal setting, “if there is a disjunct between what the neuroscience shows and what the behavior shows, you’ve got to believe the behavior.” He worries about the prospect of “neurohype,” and attorneys who overstate the scientific evidence.

The info is here.