Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Brain Science.

Sunday, September 15, 2019

To Study the Brain, a Doctor Puts Himself Under the Knife

Adam Piore
MIT Technology Review
Originally published November 9, 2015

Here are two excerpts:

Kennedy became convinced that the way to take his research to the next level was to find a volunteer who could still speak. For almost a year he searched for a volunteer with ALS who still retained some vocal abilities, hoping to take the patient offshore for surgery. “I couldn’t get one. So after much thinking and pondering I decided to do it on myself,” he says. “I tried to talk myself out of it for years.”

The surgery took place in June 2014 at a 13-bed Belize City hospital a thousand miles south of his Georgia-based neurology practice and also far from the reach of the FDA. Prior to boarding his flight, Kennedy did all he could to prepare. At his small company, Neural Signals, he fabricated the electrodes the neurosurgeon would implant into his motor cortex—even chose the spot where he wanted them buried. He put aside enough money to support himself for a few months if the surgery went wrong. He had made sure his living will was in order and that his older son knew where he was.

(cut)

To some researchers, Kennedy’s decisions could be seen as unwise, even unethical. Yet there are cases where self-experiments have paid off. In 1984, an Australian doctor named Barry Marshall drank a beaker filled with bacteria in order to prove they caused stomach ulcers. He later won the Nobel Prize. “There’s been a long tradition of medical scientists experimenting on themselves, sometimes with good results and sometimes without such good results,” says Jonathan Wolpaw, a brain-computer interface researcher at the Wadsworth Center in New York. “It’s in that tradition. That’s probably all I should say without more information.”

The info is here.


Saturday, August 31, 2019

Unraveling the Ethics of New Neurotechnologies

Nicholas Weiler
www.ucsf.edu
Originally posted July 30, 2019

Here is an excerpt:

“In unearthing these ethical issues, we try as much as possible to get out of our armchairs and actually observe how people are interacting with these new technologies. We interview everyone from patients and family members to clinicians and researchers,” Chiong said. “We also work with philosophers, lawyers, and others with experience in biomedicine, as well as anthropologists, sociologists and others who can help us understand the clinical challenges people are actually facing as well as their concerns about new technologies.”

Some of the top issues on Chiong’s mind include ensuring patients understand how the data recorded from their brains are being used by researchers; protecting the privacy of this data; and determining what kind of control patients will ultimately have over their brain data.

“As with all technology, ethical questions about neurotechnology are embedded not just in the technology or science itself, but also in the social structure in which the technology is used,” Chiong added. “These questions are not just the domain of scientists, engineers, or even professional ethicists, but are part of a larger societal conversation we’re beginning to have about the appropriate applications of technology and personal data, and when it’s important for people to be able to opt out or say no.”

The info is here.

Saturday, August 24, 2019

Decoding the neuroscience of consciousness

Emily Sohn
Nature.com
Originally published July 24, 2019

Here is an excerpt:

That disconnect might also offer insight into why current medications for anxiety do not always work as well as people hope, LeDoux says. Developed through animal studies, these medications might target circuits in the amygdala and affect a person’s behaviours, such as their level of timidity — making it easier for them to go to social events. But such drugs don’t necessarily affect the conscious experience of fear, which suggests that future treatments might need to address both unconscious and conscious processes separately. “We can take a brain-based approach that sees these different kinds of symptoms as products of different circuits, and design therapies that target the different circuits systematically,” he says. “Turning down the volume doesn’t change the song — only its level.”

Psychiatric disorders are another area of interest for consciousness researchers, Lau says, on the basis that some mental-health conditions, including schizophrenia, obsessive–compulsive disorder and depression, might be caused by problems at the unconscious level — or even by conflicts between conscious and unconscious pathways. The link is only hypothetical so far, but Seth has been probing the neural basis of hallucinations with a ‘hallucination machine’ — a virtual-reality program that uses machine learning to simulate visual hallucinatory experiences in people with healthy brains. Through experiments, he and his colleagues have shown that these hallucinations resemble the types of visions that people experience while taking psychedelic drugs, which have increasingly been used as a tool to investigate the neural underpinnings of consciousness.

If researchers can uncover the mechanisms behind hallucinations, they might be able to manipulate the relevant areas of the brain and, in turn, treat the underlying cause of psychosis — rather than just address the symptoms. By demonstrating how easy it is to manipulate people’s perceptions, Seth adds, the work suggests that our sense of reality is just another facet of how we experience the world.

The info is here.

Tuesday, August 20, 2019

Can Neuroscience Understand Free Will?

Brian Gallagher
nautil.us
Originally posted on July 19, 2019

Here is an excerpt:

Clinical neuroscientists and neurologists have identified the brain networks responsible for this sense of free will. There seem to be two: the network governing the desire to act, and the network governing the feeling of responsibility for acting. Brain-damaged patients show that these can come apart—you can have one without the other.

Lacking essentially all motivation to move or speak has a name: akinetic mutism. The researchers, led by neurologists Michael Fox, of Harvard Medical School, and Ryan Darby, of Vanderbilt University, analyzed 28 cases of this condition, not all of them involving damage in the same departments. “We found that brain lesions that disrupt volition occur in many different locations, but fall within a single brain network, defined by connectivity to the anterior cingulate,” which has links to both the “emotional” limbic system and the “cognitive” prefrontal cortex, the researchers wrote. Feeling like you’re moving under the direction of outside forces has a name, too: alien limb syndrome. The researchers analyzed 50 cases of this condition, which again involved brain damage in different spots. “Lesions that disrupt agency also occur in many different locations, but fall within a separate network, defined by connectivity to the precuneus,” which is involved, among other things, in the experience of agency.

The results may not map onto “free will” as we understand it ethically—the ability to choose between right and wrong. “It remains unknown whether the network of brain regions we identify as related to free will for movements is the same as those important for moral decision-making, as prior studies have suggested important differences,” the researchers wrote. For instance, in a 2017 study, Fox and Darby analyzed many cases of brain lesions in various regions predisposing people to criminal behavior, and found that “these lesions all fall within a unique functionally connected brain network involved in moral decision making.”

The info is here.

Friday, August 9, 2019

The Human Brain Project Hasn’t Lived Up to Its Promise

Ed Yong
www.theatlantic.com
Originally published July 22, 2019

Here is an excerpt:

Markram explained that, contra his TED Talk, he had never intended for the simulation to do much of anything. He wasn’t out to make an artificial intelligence, or beat a Turing test. Instead, he pitched it as an experimental test bed—a way for scientists to test their hypotheses without having to prod an animal’s head. “That would be incredibly valuable,” Lindsay says, but it’s based on circular logic. A simulation might well allow researchers to test ideas about the brain, but those ideas would already have to be very advanced to pull off the simulation in the first place. “Once neuroscience is ‘finished,’ we should be able to do it, but to have it as an intermediate step along the way seems difficult.”

“It’s not obvious to me what the very large-scale nature of the simulation would accomplish,” adds Anne Churchland from Cold Spring Harbor Laboratory. Her team, for example, simulates networks of neurons to study how brains combine visual and auditory information. “I could implement that with hundreds of thousands of neurons, and it’s not clear what it would buy me if I had 70 billion.”

In a recent paper titled “The Scientific Case for Brain Simulations,” several HBP scientists argued that big simulations “will likely be indispensable for bridging the scales between the neuron and system levels in the brain.” In other words: Scientists can look at the nuts and bolts of how neurons work, and they can study the behavior of entire organisms, but they need simulations to show how the former create the latter. The paper’s authors drew a comparison to weather forecasts, in which an understanding of physics and chemistry at the scale of neighborhoods allows us to accurately predict temperature, rainfall, and wind across the whole globe.
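
To make “simulating networks of neurons” concrete at the smallest scale, here is a minimal leaky integrate-and-fire network. It is a toy sketch, not Human Brain Project code, and every parameter in it is an illustrative assumption:

```python
# Toy leaky integrate-and-fire (LIF) network: an illustration of what
# "simulating networks of neurons" means, not HBP code. All parameter
# values are assumptions chosen for this example.
import numpy as np

rng = np.random.default_rng(0)

N, T = 200, 1000                     # neurons; 1-ms time steps
tau = 20.0                           # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0

# Sparse random connectivity; the last 20% of neurons are inhibitory
W = rng.normal(0.0, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)
W[:, int(0.8 * N):] *= -4.0

v = np.full(N, v_rest)
spikes = np.zeros((T, N))            # spike raster, stored as 0/1

for t in range(T):
    i_ext = rng.normal(1.5, 1.0, N)                      # noisy external drive
    i_syn = W @ spikes[t - 1] if t > 0 else np.zeros(N)  # recurrent input
    v += (-(v - v_rest) + 20.0 * i_ext + 15.0 * i_syn) / tau
    fired = v >= v_thresh
    spikes[t] = fired
    v[fired] = v_reset               # reset the neurons that spiked

print(f"mean firing rate: {spikes.mean() * 1000:.1f} Hz")
```

Scaling a loop like this from 200 neurons to 70 billion is exactly the leap whose scientific payoff Lindsay and Churchland question above.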

The info is here.

Wednesday, June 26, 2019

The computational and neural substrates of moral strategies in social decision-making

Jeroen M. van Baar, Luke J. Chang & Alan G. Sanfey
Nature Communications, Volume 10, Article number: 1483 (2019)

Abstract

Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide a valuable view into understanding how different individuals may utilize different moral principles.
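
To make the two strategies concrete, here is a hedged sketch of guilt aversion and inequity aversion written as utility rules in a simplified trust-game payback decision. The functional forms are standard textbook versions (expectation-based guilt; Fehr-Schmidt-style inequity), not the authors’ exact computational model, and the pot size and theta weights are illustrative assumptions:

```python
# Hedged sketch of the two moral strategies in a simplified trust-game
# payback decision. These are textbook utility forms, not the paper's
# exact model; theta and the pot size are illustrative.

def guilt_aversion_utility(keep, pot, expected_return, theta=1.5):
    """Own payoff minus guilt from returning less than the partner expects."""
    returned = pot - keep
    return keep - theta * max(0.0, expected_return - returned)

def inequity_aversion_utility(keep, pot, theta=1.5):
    """Own payoff minus discomfort from an unequal final split."""
    returned = pot - keep
    return keep - theta * abs(keep - returned)

def best_keep(utility, pot, **params):
    """Amount to keep (0..pot) that maximizes the given moral rule."""
    return max(range(pot + 1), key=lambda k: utility(k, pot, **params))

pot = 20  # the multiplied investment to be divided back

# When the partner expects an equal split, both rules prescribe the same
# choice -- the condition under which the paper still finds distinct
# neural substrates.
print(best_keep(guilt_aversion_utility, pot, expected_return=10))  # -> 10
print(best_keep(inequity_aversion_utility, pot))                   # -> 10

# When expectations are low, the rules diverge; a 'moral opportunist'
# would switch to whichever rule pays more on that trial (here, guilt).
print(best_keep(guilt_aversion_utility, pot, expected_return=4))   # -> 16
print(best_keep(inequity_aversion_utility, pot))                   # -> 10
```

In this toy version, the opportunist’s switch shows up purely in behavior; the paper’s contribution is showing a corresponding switch in neural activation patterns.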

(cut)

From the Discussion

We also report a new strategy observed in participants, moral opportunism. This group did not consistently apply one moral rule to their decisions, but rather appeared to make a motivational trade-off depending on the particular trial structure. This opportunistic decision strategy entailed switching between the behavioral patterns of guilt aversion (GA) and inequity aversion (IA), and allowed participants to maximize their financial payoff while still always following a moral rule. Although it could have been the case that these opportunists merely resembled GA and IA in terms of decision outcome, and not in the underlying psychological process, a confirmatory analysis showed that the moral opportunists did in fact switch between the neural representations of guilt and inequity aversion, and thus flexibly employed the respective psychological processes underlying these two, quite different, social preferences. This further supports our interpretation that the activity patterns directly reflect guilt aversion and inequity aversion computations, and not a theoretically peripheral “third factor” shared between GA and IA participants. Additionally, we found activity patterns specifically linked to moral opportunism in the superior parietal cortex and dACC, which are strongly associated with cognitive control and working memory.

The research is here.

Saturday, June 1, 2019

Does It Matter Whether You or Your Brain Did It?

Uri Maoz, K. R. Sita, J. J. A. van Boxtel, and L. Mudrik
Front. Psychol., 30 April 2019
https://doi.org/10.3389/fpsyg.2019.00950

Abstract

Despite progress in cognitive neuroscience, we are still far from understanding the relations between the brain and the conscious self. We previously suggested that some neuroscientific texts that attempt to clarify these relations may in fact make them more difficult to understand. Such texts—ranging from popular science to high-impact scientific publications—position the brain and the conscious self as two independent, interacting subjects, capable of possessing opposite psychological states. We termed such writing ‘Double Subject Fallacy’ (DSF). We further suggested that such DSF language, besides being conceptually confusing and reflecting dualistic intuitions, might affect people’s conceptions of moral responsibility, lessening the perception of guilt over actions. Here, we empirically investigated this proposition with a series of three experiments (pilot and two preregistered replications). Subjects were presented with moral scenarios where the defendant was either (1) clearly guilty, (2) ambiguous, or (3) clearly innocent, while the accompanying neuroscientific evidence about the defendant was presented using DSF or non-DSF language. Subjects were instructed to rate the defendant’s guilt in all experiments. Subjects rated the defendant in the clearly guilty scenario as guiltier than in the two other scenarios and the defendant in the ambiguously described scenario as guiltier than in the innocent scenario, as expected. In Experiment 1 (N = 609), an effect was further found for DSF language in the expected direction: subjects rated the defendant less guilty when the neuroscientific evidence was described using DSF language, across all levels of culpability. However, this effect did not replicate in Experiment 2 (N = 1794), which focused on a different moral scenario, nor in Experiment 3 (N = 1810), which was an exact replication of Experiment 1. Bayesian analyses yielded strong evidence against the existence of an effect of DSF language on the perception of guilt. Our results thus challenge the claim that DSF language affects subjects’ moral judgments. They further demonstrate the importance of good scientific practice, including preregistration and—most critically—replication, to avoid reaching erroneous conclusions based on false-positive results.

Sunday, May 26, 2019

Brain science should be making prisons better, not trying to prove innocence

Arielle Baskin-Sommers
theconversation.com
Originally posted November 1, 2017

Here is an excerpt:

Unfortunately, when neuroscientific assessments are presented to the court, they can sway juries, regardless of their relevance. Using these techniques to produce expert evidence doesn’t bring the court any closer to truth or justice. And with a single brain scan costing thousands of dollars, plus expert interpretation and testimony, it’s an expensive tool out of reach for many defendants. Rather than helping untangle legal responsibility, neuroscience here causes an even deeper divide between the rich and the poor, based on pseudoscience.

While I remain skeptical about the use of neuroscience in the judicial process, there are a number of places where its findings could help corrections systems develop policies and practices based on evidence.

Solitary confinement harms more than helps

Take, for instance, the use within prisons of solitary confinement as a punishment for disciplinary infractions. In 2015, the Bureau of Justice Statistics reported that nearly 20 percent of federal and state prisoners and 18 percent of local jail inmates spent time in solitary.

Research consistently demonstrates that time spent in solitary increases the chances of persistent emotional trauma and distress. Solitary can lead to hallucinations, fantasies and paranoia; it can increase anxiety, depression and apathy as well as difficulties in thinking, concentrating, remembering, paying attention and controlling impulses. People placed in solitary are more likely to engage in self-mutilation as well as exhibit chronic rage, anger and irritability. The term “isolation syndrome” has even been coined to capture the severe and long-lasting effects of solitary.

The info is here.

Monday, May 6, 2019

How do we make moral decisions?

Dartmouth College
Press Release
Originally released April 18, 2019

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

The info is here.

The research is here.

Monday, April 22, 2019

Moral identity relates to the neural processing of third-party moral behavior

Carolina Pletti, Jean Decety, & Markus Paulus
Social Cognitive and Affective Neuroscience
https://doi.org/10.1093/scan/nsz016

Abstract

Moral identity, or moral self, is the degree to which being moral is important to a person’s self-concept. It is hypothesized to be the “missing link” between moral judgment and moral action. However, its cognitive and psychophysiological mechanisms are still subject to debate. In this study, we used Event-Related Potentials (ERPs) to examine whether the moral self concept is related to how people process prosocial and antisocial actions. To this end, participants’ implicit and explicit moral self-concept was assessed. We examined whether individual differences in moral identity relate to differences in early, automatic processes (i.e. EPN, N2) or late, cognitively controlled processes (i.e. LPP) while observing prosocial and antisocial situations. Results show that a higher implicit moral self was related to a lower EPN amplitude for prosocial scenarios. In addition, an enhanced explicit moral self was related to a lower N2 amplitude for prosocial scenarios. The findings demonstrate that the moral self affects the neural processing of morally relevant stimuli during third-party evaluations. They support theoretical considerations that the moral self already affects (early) processing of moral information.
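
As a rough illustration of the analysis logic only (not the authors’ pipeline), the sketch below extracts a mean ERP amplitude in an assumed EPN time window and correlates it with an individual-difference moral-self score. All array shapes, the channel, and the window bounds are hypothetical:

```python
# Illustrative sketch of relating an ERP component to an individual-
# difference score. Not the authors' pipeline; shapes, the channel, and
# the EPN window are assumptions, and the data are random placeholders
# (so expect r near 0).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

n_subj, n_trials, n_times = 30, 80, 500    # 500 samples = 1 s at 500 Hz
times = np.linspace(-0.2, 0.8, n_times)    # seconds relative to stimulus

# Fake epoched data for one posterior channel: (subjects, trials, time)
prosocial = rng.normal(0, 1, (n_subj, n_trials, n_times))
moral_self = rng.normal(0, 1, n_subj)      # implicit moral-self score

# 1. Average over trials to get each subject's ERP for the condition
erp = prosocial.mean(axis=1)               # (subjects, time)

# 2. Mean amplitude in an assumed EPN window (~200-300 ms post-stimulus)
window = (times >= 0.2) & (times <= 0.3)
epn_amplitude = erp[:, window].mean(axis=1)

# 3. Correlate component amplitude with the moral-self score
r, p = pearsonr(moral_self, epn_amplitude)
print(f"r = {r:.2f}, p = {p:.3f}")
```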

Here is the conclusion:

Taken together, notwithstanding some limitations, this study provides novel insights into the nature of the moral self. Importantly, the results suggest that the moral self concept influences the early processing of morally relevant contexts. Moreover, the implicit and the explicit moral self concepts have different neural correlates, influencing respectively early and intermediate processing stages. Overall, the findings inform theoretical approaches on how the moral self informs social information processing (Lapsley & Narvaez, 2004).

Monday, April 1, 2019

Neuroscience Readies for a Showdown Over Consciousness Ideas

Philip Ball
Quanta Magazine
Originally published March 6, 2019

Here is an excerpt:

Philosophers have debated the nature of consciousness and whether it can inhere in things other than humans for thousands of years, but in the modern era, pressing practical and moral implications make the need for answers more urgent. As artificial intelligence (AI) grows increasingly sophisticated, it might become impossible to tell whether one is dealing with a machine or a human merely by interacting with it — the classic Turing test. But would that mean AI deserves moral consideration?

Understanding consciousness also impinges on animal rights and welfare, and on a wide range of medical and legal questions about mental impairments. A group of more than 50 leading neuroscientists, psychologists, cognitive scientists and others recently called for greater recognition of the importance of research on this difficult subject. “Theories of consciousness need to be tested rigorously and revised repeatedly amid the long process of accumulation of empirical evidence,” the authors said, adding that “myths and speculative conjectures also need to be identified as such.”

You can hardly do experiments on consciousness without having first defined it. But that’s already difficult because we use the word in several ways. Humans are conscious beings, but we can lose consciousness, for example under anesthesia. We can say we are conscious of something — a strange noise coming out of our laptop, say. But in general, the quality of consciousness refers to a capacity to experience one’s existence rather than just recording it or responding to stimuli like an automaton. Philosophers of mind often refer to this as the principle that one can meaningfully speak about what it is “like” to be a conscious being — even if we can never actually have that experience beyond ourselves.

The info is here.

Monday, January 28, 2019

Artificial intelligence turns brain activity into speech

Kelly Servick
ScienceMag.org
Originally published January 2, 2019

Here is an excerpt:

Finally, neurosurgeon Edward Chang and his team at the University of California, San Francisco, reconstructed entire sentences from brain activity captured from speech and motor areas while three epilepsy patients read aloud. In an online test, 166 people heard one of the sentences and had to select it from among 10 written choices. Some sentences were correctly identified more than 80% of the time. The researchers also pushed the model further: They used it to re-create sentences from data recorded while people silently mouthed words. That's an important result, Herff says—"one step closer to the speech prosthesis that we all have in mind."
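
As a toy illustration of the decoding idea only: Chang’s team used recurrent neural networks with an intermediate articulatory representation, but a plain linear decoder shows the basic setup of mapping recorded neural features to audio features. Every shape and name below is hypothetical, and the data are synthetic:

```python
# Toy sketch of speech decoding: map per-frame neural features to
# spectrogram frames with a linear model. Chang's team used recurrent
# networks and an articulatory intermediate stage; this stand-in only
# illustrates the setup. All shapes and data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

n_frames, n_electrodes, n_spec_bins = 5000, 128, 40

# Fake paired data: high-gamma features per electrode, and mel-spectrogram
# frames of simultaneously spoken audio.
X = rng.normal(size=(n_frames, n_electrodes))
true_map = rng.normal(scale=0.1, size=(n_electrodes, n_spec_bins))
Y = X @ true_map + rng.normal(scale=0.5, size=(n_frames, n_spec_bins))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

decoder = Ridge(alpha=10.0).fit(X_tr, Y_tr)
Y_hat = decoder.predict(X_te)

# Per-bin correlation between predicted and actual spectrograms, a common
# figure of merit in this literature.
corrs = [np.corrcoef(Y_hat[:, i], Y_te[:, i])[0, 1] for i in range(n_spec_bins)]
print(f"mean spectrogram correlation: {np.mean(corrs):.2f}")
```

Real systems replace the linear map with deep networks trained on far richer features, but the evaluation logic, comparing reconstructed audio features against the truth, is similar in spirit.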

However, "What we're really waiting for is how [these methods] are going to do when the patients can't speak," says Stephanie Riès, a neuroscientist at San Diego State University in California who studies language production. The brain signals when a person silently "speaks" or "hears" their voice in their head aren't identical to signals of speech or hearing. Without external sound to match to brain activity, it may be hard for a computer even to sort out where inner speech starts and ends.

Decoding imagined speech will require "a huge jump," says Gerwin Schalk, a neuroengineer at the National Center for Adaptive Neurotechnologies at the New York State Department of Health in Albany. "It's really unclear how to do that at all."

One approach, Herff says, might be to give feedback to the user of the brain-computer interface: If they can hear the computer's speech interpretation in real time, they may be able to adjust their thoughts to get the result they want. With enough training of both users and neural networks, brain and computer might meet in the middle.

The info is here.

Friday, January 25, 2019

Decision-Making and Self-Governing Systems

Adina L. Roskies
Neuroethics
October 2018, Volume 11, Issue 3, pp 245–257

Abstract

Neuroscience has illuminated the neural basis of decision-making, providing evidence that supports specific models of decision-processes. These models typically are quite mechanical, the realization of abstract mathematical “diffusion to bound” models. While effective decision-making seems to be essential for sophisticated behavior, central to an account of freedom, and a necessary characteristic of self-governing systems, it is not clear how the simple models neuroscience inspires can underlie the notion of self-governance. Drawing from both philosophy and neuroscience, I explore ways in which the proposed decision-making architectures can play a role in systems that can reasonably be thought of as “self-governing”.
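
For readers unfamiliar with “diffusion to bound” models, the following minimal simulation shows the mechanical character Roskies describes: noisy evidence accumulates until it crosses a decision bound. The parameter values are illustrative, not drawn from the paper:

```python
# Minimal drift-diffusion ("diffusion to bound") simulation of a
# two-alternative decision. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def ddm_trial(drift=0.2, bound=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Accumulate noisy evidence until it hits +bound or -bound.
    Returns (choice, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= bound else 0), t

results = [ddm_trial() for _ in range(2000)]
choices, rts = zip(*results)
print(f"P(upper bound) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f} s")
```

The simplicity of this mechanism is precisely what motivates Roskies’ question: how could such a process underwrite anything we would call self-governance?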

Here is an excerpt:

The importance of prospection for self-governance cannot be overstated. One example in which it promises to play an important role is in the exercise of and failures of self-control. Philosophers have long been puzzled by the apparent possibility of akrasia or weakness of will: choosing to act in ways that one judges not to be in one’s best interest. Weakness of will is thought to be an example of irrational choice. If one’s theory of choice is that one always decides to pursue the option that has the highest value, and that it is rational to choose what one most values, it is hard to explain irrational choices. Apparent cases of weakness of will would really be cases of mistaken valuation: overvaluing an option that is in fact not the most valuable option. And indeed, if one cannot rationally criticize the strength of desires (see Hume’s famous observation that “it is not against reason that I should prefer the destruction of half the world to the pricking of my little finger”), we cannot explain irrational choice.

The article is here.

Friday, November 23, 2018

The Moral Law Within: The Scientific Case For Self-Governance

Carsten Tams
Forbes.com
Originally posted September 26, 2018

Here is an excerpt:

The behavioral ethics literature, and its reception in the ethics and compliance field, is following a similar trend. Behavioral ethics is often defined as the discipline that helps to explain why good people do bad things. It frequently focuses on how various biases, cognitive heuristics, blind spots, ethical fading, bounded ethicality, or rationalizations compromise people’s ethical intentions.

To avoid misunderstandings, I am a fan and avid consumer of behavioral science literature. Understanding unethical biases is fascinating and raising awareness about them is useful. But it is only half the story. There is more to behavioral science than biases and fallacies. A lopsided focus on biases may lead us to view people’s morality as hopelessly flawed. Standing amidst a forest crowded by biases and fallacies, we may forget that people often judge and act morally.

Such an anthropological bias has programmatic consequences. If we frame organizational ethics simply as a problem of people’s ethical biases, we will focus on keeping these negative biases in check. This framing, however, does not provide a rationale for supporting people’s capacity for self-governed ethical behavior. For such a rationale, we would need evidence that such a capacity exists. The human capacity for morality has been a subject of rigorous inquiry across diverse behavioral disciplines. In the following, this article will highlight a selection of major contributions to this inquiry.

The info is here.

Monday, September 24, 2018

Distinct Brain Areas involved in Anger versus Punishment during Social Interactions

Olga M. Klimecki, David Sander & Patrik Vuilleumier
Scientific Reports volume 8, Article number: 10556 (2018)

Abstract

Although anger and aggression can have wide-ranging consequences for social interactions, there is sparse knowledge as to which brain activations underlie the feelings of anger and the regulation of related punishment behaviors. To address these issues, we studied brain activity while participants played an economic interaction paradigm called Inequality Game (IG). The current study confirms that the IG elicits anger through the competitive behavior of an unfair (versus fair) other and promotes punishment behavior. Critically, when participants see the face of the unfair other, self-reported anger is parametrically related to activations in temporal areas and amygdala – regions typically associated with mentalizing and emotion processing, respectively. During anger provocation, activations in the dorsolateral prefrontal cortex, an area important for regulating emotions, predicted the inhibition of later punishment behavior. When participants subsequently engaged in behavioral decisions for the unfair versus fair other, increased activations were observed in regions involved in behavioral adjustment and social cognition, comprising posterior cingulate cortex, temporal cortex, and precuneus. These data point to a distinction of brain activations related to angry feelings and the control of subsequent behavioral choices. Furthermore, they show a contribution of prefrontal control mechanisms during anger provocation to the inhibition of later punishment.
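
The phrase “parametrically related” refers to a standard fMRI analysis in which trial-by-trial ratings modulate a regressor in a general linear model. The sketch below shows that logic with a generic gamma-shaped HRF and entirely hypothetical numbers; it is not the authors’ pipeline:

```python
# Sketch of a parametric-modulation GLM: test whether trial-by-trial
# anger ratings scale with BOLD activity. Generic HRF, hypothetical
# timings and values; not the authors' analysis code.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)

tr, n_scans = 2.0, 300                       # 2-s TR, 10-minute run

# Trial onsets (in scans) and the anger rating reported on each trial
onsets = np.arange(10, 290, 14)              # hypothetical trial spacing
anger = rng.uniform(1, 7, len(onsets))       # 1-7 rating scale

# Stick functions: one for the events, one weighted by mean-centered anger
events = np.zeros(n_scans)
modulator = np.zeros(n_scans)
events[onsets] = 1.0
modulator[onsets] = anger - anger.mean()     # parametric modulator

# Convolve both with a simple gamma-shaped HRF sampled at the TR
hrf = gamma.pdf(np.arange(0, 30, tr), 6)
X1 = np.convolve(events, hrf)[:n_scans]
X2 = np.convolve(modulator, hrf)[:n_scans]

# Fake voxel whose response scales with anger, plus noise
y = 0.5 * X1 + 0.8 * X2 + rng.normal(0, 1, n_scans)

# OLS fit: the weight on X2 is the parametric anger effect
X = np.column_stack([np.ones(n_scans), X1, X2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"parametric anger beta: {beta[2]:.2f}")
```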

The research is here.

Sunday, August 5, 2018

How Do Expectations Shape Perception?

Floris P. de Lange, Micha Heilbron, & Peter Kok
Trends in Cognitive Sciences
Available online 29 June 2018

Abstract

Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.

Highlights

  • Expectations play a strong role in determining the way we perceive the world.
  • Prior expectations can originate from multiple sources of information, and correspondingly have different neural sources, depending on where in the brain the relevant prior knowledge is stored.
  • Recent findings from both human neuroimaging and animal electrophysiology have revealed that prior expectations can modulate sensory processing at both early and late stages, and both before and after stimulus onset. The response modulation can take the form of either dampening the sensory representation or enhancing it via a process of sharpening.
  • Theoretical computational frameworks of neural sensory processing aim to explain how the probabilistic integration of prior expectations and sensory inputs results in perception.
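
The probabilistic integration the authors discuss has a textbook closed form when both prior and sensory likelihood are Gaussian: the posterior mean is a precision-weighted average of the two. A minimal sketch, with purely illustrative numbers:

```python
# Textbook Gaussian prior-likelihood integration: the posterior mean is a
# precision-weighted average of prior expectation and sensory evidence.
# Numbers are illustrative only.

def integrate_gaussian(prior_mean, prior_var, sense_mean, sense_var):
    """Posterior of two Gaussians: precision-weighted mean, combined variance."""
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / sense_var)
    post_mean = w_prior * prior_mean + (1 - w_prior) * sense_mean
    post_var = 1 / (1 / prior_var + 1 / sense_var)
    return post_mean, post_var

# A strong expectation (low prior variance) pulls the percept toward the
# prior; reliable input (low sensory variance) pulls it toward the data.
print(integrate_gaussian(prior_mean=0.0, prior_var=1.0,
                         sense_mean=10.0, sense_var=4.0))  # -> (2.0, 0.8)
```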

Wednesday, August 1, 2018

Why our brains see the world as ‘us’ versus ‘them’

Leslie Henderson
The Conversation
Originally posted June 2018

Here is an excerpt:

As opposed to fear, distrust and anxiety, circuits of neurons in brain regions called the mesolimbic system are critical mediators of our sense of “reward.” These neurons control the release of the transmitter dopamine, which is associated with an enhanced sense of pleasure. The addictive nature of some drugs, as well as pathological gaming and gambling, are correlated with increased dopamine in mesolimbic circuits.

In addition to dopamine itself, neurochemicals such as oxytocin can significantly alter the sense of reward and pleasure, especially in relationship to social interactions, by modulating these mesolimbic circuits.

Methodological variations indicate further study is needed to fully understand the roles of these signaling pathways in people. That caveat acknowledged, there is much we can learn from the complex social interactions of other mammals.

The neural circuits that govern social behavior and reward arose early in vertebrate evolution and are present in birds, reptiles, bony fishes and amphibians, as well as mammals. So while there is not a lot of information on reward pathway activity in people during in-group versus out-group social situations, there are some tantalizing results from studies on other mammals.

The article is here.

Sunday, May 27, 2018

The Ethics of Neuroscience - A Different Lens



New technologies are allowing us to have control over the human brain like never before. As we push the possibilities, we must ask ourselves: what is neuroscience today, and how far is too far?

The world’s best neurosurgeons can now provide treatments for things that were previously untreatable, such as Parkinson’s and clinical depression. Many patients are cured, while others develop side effects such as erratic behaviour and changes in their personality. 

Not only do we have greater understanding of clinical psychology, forensic psychology and criminal psychology, we also have more control. Professional athletes and gamers are now using this technology – some of it untested – to improve performance. However, with these amazing possibilities come great ethical concerns.

This manipulation of the brain has far-reaching effects, impacting the law, marketing, health industries and beyond. We need to investigate the capabilities of neuroscience and ask the ethical questions that will determine how far we can push the science of mind and behaviour.

Friday, May 25, 2018

The $3-Million Research Breakdown

Jodi Cohen
www.propublica.org
Originally published April 26, 2018

Here is an excerpt:

In December, the university quietly paid a severe penalty for Pavuluri’s misconduct and its own lax oversight, after the National Institute of Mental Health demanded weeks earlier that the public institution — which has struggled with declining state funding — repay all $3.1 million it had received for Pavuluri’s study.

In issuing the rare rebuke, federal officials concluded that Pavuluri’s “serious and continuing noncompliance” with rules to protect human subjects violated the terms of the grant. NIMH said she had “increased risk to the study subjects” and made any outcomes scientifically meaningless, according to documents obtained by ProPublica Illinois.

Pavuluri’s research is also under investigation by two offices in the U.S. Department of Health and Human Services: the inspector general’s office, which examines waste, fraud and abuse in government programs, according to subpoenas obtained by ProPublica Illinois, and the Office of Research Integrity, according to university officials.

The article is here.

Wednesday, May 23, 2018

Growing brains in labs: why it's time for an ethical debate

Ian Sample
The Guardian
Originally published April 24, 2018

Here is an excerpt:

The call for debate has been prompted by a raft of studies in which scientists have made “brain organoids”, or lumps of human brain from stem cells; grown bits of human brain in rodents; and kept slivers of human brain alive for weeks after surgeons have removed the tissue from patients. Though it does not indicate consciousness, in one case, scientists recorded a surge of electrical activity from a ball of brain and retinal cells when they shined a light on it.

The research is driven by a need to understand how the brain works and how it fails in neurological disorders and mental illness. Brain organoids have already been used to study autism spectrum disorders, schizophrenia and the unusually small brain size seen in some babies infected with Zika virus in the womb.

“This research is essential to alleviate human suffering. It would be unethical to halt the work,” said Nita Farahany, professor of law and philosophy at Duke University in North Carolina. “What we want is a discussion about how to enable responsible progress in the field.”

The article is here.