Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Brain Science. Show all posts

Friday, April 27, 2018

The Mind-Expanding Ideas of Andy Clark

Larissa MacFarquhar
The New Yorker
Originally published April 2, 2018

Here is an excerpt:

Cognitive science addresses philosophical questions—What is a mind? What is the mind’s relationship to the body? How do we perceive and make sense of the outside world?—but through empirical research rather than through reasoning alone. Clark was drawn to it because he’s not the sort of philosopher who just stays in his office and contemplates; he likes to visit labs and think about experiments. He doesn’t conduct experiments himself; he sees his role as gathering ideas from different places and coming up with a larger theoretical framework in which they all fit together. In physics, there are both experimental and theoretical physicists, but there are fewer theoretical neuroscientists or psychologists—you have to do experiments, for the most part, or you can’t get a job. So in cognitive science this is a role that philosophers can play.

Most people, he realizes, tend to identify their selves with their conscious minds. That’s reasonable enough; after all, that is the self they know about. But there is so much more to cognition than that: the vast, silent cavern of underground mental machinery, with its tubes and synapses and electric impulses, so many unconscious systems and connections and tricks and deeply grooved pathways that form the pulsing substrate of the self. It is those primal mechanisms, the wiring and plumbing of cognition, that he has spent most of his career investigating. When you think about all that fundamental stuff—some ancient and shared with other mammals and distant ancestors, some idiosyncratic and new—consciousness can seem like a merely surface phenomenon, a user interface that obscures the real works below.

The article and audio file are here.

Tuesday, March 27, 2018

"My Brain Made Me Do It" Is Becoming a More Common Criminal Defense

Dina Fine Maron
Scientific American
Originally published March 5, 2018

Here is an excerpt:

But experts looking back at the 2007 case now say Hodges was part of a burgeoning trend: Criminal defense strategies are increasingly relying on neurological evidence—psychological evaluations, behavioral tests or brain scans—to potentially mitigate punishment. Defendants may cite earlier head traumas or brain disorders as underlying reasons for their behavior, hoping this will be factored into a court’s decisions. Such defenses have been employed for decades, mostly in death penalty cases. But as science has evolved in recent years, the practice has become more common in criminal cases ranging from drug offenses to robberies.

“The number of cases in which people try to introduce neurotechnological evidence in the trial or sentencing phase has gone up by leaps and bounds,” says Joshua Sanes, director of the Center for Brain Science at Harvard University. But such attempts may be outpacing the scientific evidence behind the technology, he adds.

“In 2012 alone over 250 judicial opinions—more than double the number in 2007—cited defendants arguing in some form or another that their ‘brains made them do it,’” according to an analysis by Nita Farahany, a law professor and director of Duke University’s Initiative for Science and Society. More recently, she says, that number has climbed to around 420 each year.

The article is here.

Tuesday, March 13, 2018

Cognitive Ability and Vulnerability to Fake News

David Z. Hambrick and Madeline Marquardt
Scientific American
Originally posted on February 6, 2018

“Fake news” is Donald Trump’s favorite catchphrase. Since the election, it has appeared in some 180 tweets by the President, decrying everything from accusations of sexual assault against him to the Russian collusion investigation to reports that he watches up to eight hours of television a day. Trump may just use “fake news” as a rhetorical device to discredit stories he doesn’t like, but there is evidence that real fake news is a serious problem. As one alarming example, an analysis by the internet media company Buzzfeed revealed that during the final three months of the 2016 U.S. presidential campaign, the 20 most popular false election stories generated around 1.3 million more Facebook engagements—shares, reactions, and comments—than did the 20 most popular legitimate stories. The most popular fake story was “Pope Francis Shocks World, Endorses Donald Trump for President.”

Fake news can distort people’s beliefs even after being debunked. For example, repeated over and over, a story such as the one about the Pope endorsing Trump can create a glow around a political candidate that persists long after the story is exposed as fake. A study recently published in the journal Intelligence suggests that some people may have an especially difficult time rejecting misinformation.

The article is here.

Sunday, March 4, 2018

Increasing honesty in humans with noninvasive brain stimulation

Michel André Maréchal, Alain Cohn, Giuseppe Ugazio and Christian C. Ruff
Proceedings of the National Academy of Sciences (PNAS)
April 2017, 114(17), 4360-4364

Abstract

Honesty plays a key role in social and economic interactions and is crucial for societal functioning. However, breaches of honesty are pervasive and cause significant societal and economic problems that can affect entire nations. Despite its importance, remarkably little is known about the neurobiological mechanisms supporting honest behavior. We demonstrate that honesty can be increased in humans with transcranial direct current stimulation (tDCS) over the right dorsolateral prefrontal cortex. Participants (n = 145) completed a die-rolling task where they could misreport their outcomes to increase their earnings, thereby pitting honest behavior against personal financial gain. Cheating was substantial in a control condition but decreased dramatically when neural excitability was enhanced with tDCS. This increase in honesty could not be explained by changes in material self-interest or moral beliefs and was dissociated from participants’ impulsivity, willingness to take risks, and mood. A follow-up experiment (n = 156) showed that tDCS only reduced cheating when dishonest behavior benefited the participants themselves rather than another person, suggesting that the stimulated neural process specifically resolves conflicts between honesty and material self-interest. Our results demonstrate that honesty can be strengthened by noninvasive interventions and concur with theories proposing that the human brain has evolved mechanisms dedicated to control complex social behaviors.
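The die-rolling paradigm measures cheating at the group level: honest reporting implies a known chance distribution over outcomes, so an excess of payoff-maximizing reports reveals aggregate dishonesty even though no individual can be caught. A minimal sketch of that inference (the 50% chance level and the counts are illustrative assumptions, not the paper's data):

```python
import math

def excess_high_reports(n_reports, n_high, p_honest=0.5):
    """Compare the fraction of payoff-maximizing reports to the
    chance level expected under full honesty, using a one-sided
    z-test on proportions. The 50% chance baseline is an
    assumption for illustration."""
    p_hat = n_high / n_reports
    se = math.sqrt(p_honest * (1 - p_honest) / n_reports)
    z = (p_hat - p_honest) / se
    # one-sided p-value via the complementary error function
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return p_hat - p_honest, z, p_value

# e.g. 870 high reports out of 1450 rolls: 10 points above chance
excess, z, p = excess_high_reports(n_reports=1450, n_high=870)
```

An intervention such as tDCS would then be evaluated by comparing this excess between stimulation and control conditions.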

The article is here.

Tuesday, February 27, 2018

Artificial neurons compute faster than the human brain

Sara Reardon
Nature
Originally published January 26, 2018

Superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain. That achievement, described in Science Advances on 26 January, is a key benchmark in the development of advanced computing devices designed to mimic biological systems. And it could open the door to more natural machine-learning software, although many hurdles remain before it could be used commercially.

Artificial intelligence software has increasingly begun to imitate the brain. Algorithms such as Google’s automatic image-classification and language-learning programs use networks of artificial neurons to perform complex tasks. But because conventional computer hardware was not designed to run brain-like algorithms, these machine-learning tasks require orders of magnitude more computing power than the human brain does.

“There must be a better way to do this, because nature has figured out a better way to do this,” says Michael Schneider, a physicist at the US National Institute of Standards and Technology (NIST) in Boulder, Colorado, and a co-author of the study.

The article is here.

Sunday, February 18, 2018

Responsibility and Consciousness

Matt King and Peter Carruthers

1. Introduction

Intuitively, consciousness matters for responsibility. A lack of awareness generally provides the
basis for an excuse, or at least for blameworthiness to be mitigated. If you are aware that what
you are doing will unjustifiably harm someone, it seems you are more blameworthy for doing so
than if you harm them without awareness. There is thus a strong presumption that consciousness
is important for responsibility. The position we stake out below, however, is that consciousness,
while relevant to moral responsibility, isn’t necessary.

The background for our discussion is an emerging consensus in the cognitive sciences
that a significant portion, perhaps even a substantial majority, of our mental lives takes place
unconsciously. For example, routine and habitual actions are generally guided by the so-called
“dorsal stream” of the visual system, whose outputs are inaccessible to consciousness (Milner &
Goodale 1995; Goodale 2014). And there has been extensive investigation of the processes that
accompany conscious as opposed to unconscious forms of experience (Dehaene 2014). While
there is room for disagreement at the margins, there is little doubt that our actions are much more
influenced by unconscious factors than might intuitively seem to be the case. At a minimum,
therefore, theories of responsibility that ignore the role of unconscious factors supported by the
empirical data proceed at their own peril (King & Carruthers 2012). The crucial area of inquiry
for those interested in the relationship between consciousness and responsibility concerns the
relative strength of that relationship and the extent to which it should be impacted by findings in
the empirical sciences.

The paper is here.

Wednesday, January 31, 2018

The Fear Factor

Matthieu Ricard
Medium.com
Originally published January 5, 2018

Here is an excerpt:

Research by Abigail Marsh and other neuroscientists reveals that psychopaths’ brains are marked by a dysfunction in a structure called the amygdala, which is responsible for essential social and emotional functions. In psychopaths, the amygdala is not only under-responsive to images of people experiencing fear, but is also up to 20% smaller than average.

Marsh also wondered about people who are at the other end of the spectrum, extreme altruists: people filled with compassion, people who volunteer, for example, to donate one of their kidneys to a stranger. The answer is remarkable: extreme altruists surpass everyone in detecting expressions of fear in others and, while they do experience fear themselves, that does not stop them from acting in ways that are considered very courageous.

Since her initial discovery, several studies have confirmed that the ability to label other people’s fear predicts altruism better than gender, mood or how compassionate people claim to be. In addition, Abigail Marsh found that, among extreme altruists, the amygdala is physically larger than average by about 8%. The significance of this fact held up even after finding something rather unexpected: the altruists’ brains are in general larger than those of the average person.

The information is here.

Monday, January 29, 2018

Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others

Leonardo Christov-Moore, Paul Conway, and Marco Iacoboni
Front. Integr. Neurosci., 12 December 2017

The dual process model of moral decision-making suggests that decisions to reject causing harm on moral dilemmas (where causing harm saves lives) reflect concern for others. Recently, some theorists have suggested such decisions actually reflect self-focused concern about causing harm, rather than witnessing others suffering. We examined brain activity while participants witnessed needles pierce another person’s hand, versus similar non-painful stimuli. More than a month later, participants completed moral dilemmas where causing harm either did or did not maximize outcomes. We employed process dissociation to independently assess harm-rejection (deontological) and outcome-maximization (utilitarian) response tendencies. Activity in the posterior inferior frontal cortex (pIFC) while participants witnessed others in pain predicted deontological, but not utilitarian, response tendencies. Previous brain stimulation studies have shown that the pIFC seems crucial for sensorimotor representations of observed harm. Hence, these findings suggest that deontological response tendencies reflect genuine other-oriented concern grounded in sensorimotor representations of harm.
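Process dissociation derives independent harm-rejection and outcome-maximization parameters from judgments on "congruent" dilemmas (where harm fails to maximize outcomes, so both tendencies reject it) and "incongruent" dilemmas (where harm maximizes outcomes, so the tendencies conflict). A toy sketch of the standard arithmetic, in the style of Conway-type process dissociation (the exact model in the paper may differ):

```python
def process_dissociation(p_unacc_congruent, p_unacc_incongruent):
    """Estimate utilitarian (U) and deontological (D) parameters from
    the proportion of 'harm is unacceptable' judgments per dilemma type.
    Congruent dilemmas: both tendencies reject harm,
        P(unacceptable) = U + (1 - U) * D.
    Incongruent dilemmas: only deontology rejects harm,
        P(unacceptable) = (1 - U) * D.
    Subtracting isolates U; D then follows by division."""
    U = p_unacc_congruent - p_unacc_incongruent
    D = p_unacc_incongruent / (1 - U)
    return U, D

# e.g. 90% rejection on congruent, 50% on incongruent dilemmas
U, D = process_dissociation(0.9, 0.5)
```

The point of the technique is that D and U vary independently, which is what lets the study relate pIFC activity to deontological but not utilitarian tendencies.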

The article is here.

Saturday, January 27, 2018

Evolving Morality

Joshua Greene
Aspen Ideas Festival
2017

Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.

The one-hour talk on SoundCloud is here.

Friday, January 19, 2018

AI is Fueling Smarter Prosthetics Than Ever Before

Andrea Powell
www.wired.com
Originally posted December 22, 2017

The distance between prosthetic and real is shrinking. Thanks to advances in batteries, brain-controlled robotics, and AI, today’s mechanical limbs can do everything from twist and point to grab and lift. And this isn’t just good news for amputees. “For something like bomb disposal, why not use a robotic arm?” says Justin Sanchez, manager of Darpa’s Revolutionizing Prosthetics program. Well, that would certainly be handy.

The article and pictures are here.

Tuesday, January 16, 2018

3D Printed Biomimetic Blood Brain Barrier Eliminates Need for Animal Testing

Hannah Rose Mendoza
3Dprint.com
Originally published December 21, 2017

The blood-brain barrier (BBB) may sound like a rating system for avoiding horror movies, but in reality it is a semi-permeable membrane responsible for restricting and regulating the entry of neurotoxic compounds, diseases, and circulating blood into the brain. It exists as a defense mechanism to protect the brain from direct contact with damaging entities carried in the body. Normally, this is something that is important to maintain as a strong defense; however, there are times when medical treatments require the ability to trespass beyond this biological barrier without damaging it. This is especially true now in the era of nanomedicine, when therapeutic treatments have been developed to combat brain cancer, neurodegenerative diseases, and even the effects of trauma-based brain damage.

In order to advance medical research in these important areas, it has been important to operate in an environment that accurately represents the BBB. As such, researchers have turned to animal subjects, something which comes with significant ethical and moral questions.

The story is here.

Monday, January 15, 2018

Lesion network localization of criminal behavior

R. Ryan Darby, Andreas Horn, Fiery Cushman, and Michael D. Fox
The Proceedings of the National Academy of Sciences

Abstract

Following brain lesions, previously normal patients sometimes exhibit criminal behavior. Although rare, these cases can lend unique insight into the neurobiological substrate of criminality. Here we present a systematic mapping of lesions with known temporal association to criminal behavior, identifying 17 lesion cases. The lesion sites were spatially heterogeneous, including the medial prefrontal cortex, orbitofrontal cortex, and different locations within the bilateral temporal lobes. No single brain region was damaged in all cases. Because lesion-induced symptoms can come from sites connected to the lesion location and not just the lesion location itself, we also identified brain regions functionally connected to each lesion location. This technique, termed lesion network mapping, has recently identified regions involved in symptom generation across a variety of lesion-induced disorders. All lesions were functionally connected to the same network of brain regions. This criminality-associated connectivity pattern was unique compared with lesions causing four other neuropsychiatric syndromes. This network includes regions involved in morality, value-based decision making, and theory of mind, but not regions involved in cognitive control or empathy. Finally, we replicated our results in a separate cohort of 23 cases in which a temporal relationship between brain lesions and criminal behavior was implied but not definitive. Our results suggest that lesions in criminals occur in different brain locations but localize to a unique resting state network, providing insight into the neurobiology of criminal behavior.
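The final step of lesion network mapping, as the abstract describes it, reduces to asking which regions are functionally connected to every lesion site. A deliberately simplified sketch of that overlap computation (the region indexing and threshold are illustrative assumptions; the real method thresholds whole-brain connectivity maps derived from a normative connectome):

```python
def lesion_network_overlap(connectivity_maps, threshold=0.0):
    """Given one lesion-seeded connectivity map per case (a list of
    connectivity values, one per brain region), return the indices of
    regions connected above threshold in EVERY case. These shared
    regions define the candidate 'lesion network'."""
    n_cases = len(connectivity_maps)
    n_regions = len(connectivity_maps[0])
    shared = []
    for r in range(n_regions):
        hits = sum(1 for m in connectivity_maps if m[r] > threshold)
        if hits == n_cases:
            shared.append(r)
    return shared
```

Specificity is then assessed by running the same computation on lesions causing other syndromes and comparing the resulting networks.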

Significance

Cases like that of Charles Whitman, who murdered 16 people after growth of a brain tumor, have sparked debate about why some brain lesions, but not others, might lead to criminal behavior. Here we systematically characterize such lesions and compare them with lesions that cause other symptoms. We find that lesions in multiple different brain areas are associated with criminal behavior. However, these lesions all fall within a unique functionally connected brain network involved in moral decision making. Furthermore, connectivity to competing brain networks predicts the abnormal moral decisions observed in these patients. These results provide insight into why some brain lesions, but not others, might predispose to criminal behavior, with potential neuroscience, medical, and legal implications.

The article is here.

Thursday, November 23, 2017

Tiny human brain organoids implanted into rodents, triggering ethical concerns

Sharon Begley
STAT News
Originally posted November 6, 2017

Here is an excerpt:

He and his colleagues discussed the ethics of implanting human brain organoids into rats, including whether the animals might become too human. “Some of what people warn about is still science fiction,” he said. “Right now, the organoids are so crude we probably decrease” the rats’ brain function.

Ethicists argue that “not a problem now” doesn’t mean “never a problem.” One concern raised by the human brain organoid implants “is that functional integration [of the organoids] into the central nervous system of animals can in principle alter an animal’s behavior or needs,” said bioethicist Jonathan Kimmelman of McGill University in Montreal. “The task, then, is to carefully monitor if such alterations occur.” If the human implant gives an animal “increased sentience or mental capacities,” he added, it might suffer more.

Would it feel like a human trapped in a rodent’s body? Because both the Salk and Penn experiments used adult rodents, their brains were no longer developing, unlike the case if implants had been done with fetal rodent brains. “It’s hard to imagine how human-like cognitive capacities, like consciousness, could emerge under such circumstances,” Kimmelman said, referring to implants into an adult rodent brain. Chen agreed: He said his experiment “carries much less risk of creating animals with greater ‘brain power’ than normal” because the human organoid goes into “a specific region of already developed brain.”

The belief that consciousness is off the table is in fact the subject of debate. An organoid would need to be much more advanced than today’s to experience consciousness, said the Allen Institute’s Koch, including having dense neural connections, distinct layers, and other neuro-architecture. But if those and other advances occur, he said, “then the question is very germane: Does this piece of cortex feel something?” Asked whether brain organoids can achieve consciousness without sensory organs and other means of perceiving the world, Koch said it would experience something different than what people and other animals do: “It raises the question, what is it conscious of?”

The article is here.

Saturday, November 18, 2017

Differential inter-subject correlation of brain activity when kinship is a variable in moral dilemma

Mareike Bacha-Trams, Enrico Glerean, Robin Dunbar, Juha M. Lahnakoski, and others
Scientific Reports 7, Article number: 14244

Abstract

Previous behavioural studies have shown that humans act more altruistically towards kin. Whether and how knowledge of genetic relatedness translates into differential neurocognitive evaluation of observed social interactions has remained an open question. Here, we investigated how the human brain is engaged when viewing a moral dilemma between genetic vs. non-genetic sisters. During functional magnetic resonance imaging, a movie was shown, depicting refusal of organ donation between two sisters, with subjects guided to believe the sisters were related either genetically or by adoption. Although 90% of the subjects self-reported that genetic relationship was not relevant, their brain activity told a different story. Comparing correlations of brain activity across all subject pairs between the two viewing conditions, we found significantly stronger inter-subject correlations in insula, cingulate, medial and lateral prefrontal, superior temporal, and superior parietal cortices, when the subjects believed that the sisters were genetically related. Cognitive functions previously associated with these areas include moral and emotional conflict regulation, decision making, and mentalizing, suggesting more similar engagement of such functions when observing refusal of altruism from a genetic sister. Our results show that mere knowledge of a genetic relationship between interacting persons robustly modulates social cognition of the perceiver.
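Inter-subject correlation analysis, the method used here, correlates a region's activity time course between every pair of subjects and averages the result; a higher mean ISC means viewers' brains tracked the film more similarly. A self-contained sketch with a pure-Python Pearson correlation (real analyses run this per voxel or region on fMRI time series and compare conditions with permutation statistics):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def mean_isc(timecourses):
    """Mean correlation over all subject pairs for one region."""
    pairs = [(i, j) for i in range(len(timecourses))
             for j in range(i + 1, len(timecourses))]
    return sum(pearson(timecourses[i], timecourses[j])
               for i, j in pairs) / len(pairs)
```

In the study, this pairwise correlation was computed separately for the "genetic sisters" and "adopted sisters" viewing conditions and the two distributions compared region by region.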

The article is here.

Monday, November 13, 2017

Medical Evidence Debated

Ralph Bartholdt
Coeur d’Alene Press 
Originally posted October 27, 2017

Here is an excerpt:

“The point of this is not that he had a choice,” he said. “But what’s been loaded into his system, what he’s making the choices with.”

Thursday’s expert witness, psychologist Richard Adler, further developed the argument that Renfro suffered from a brain disorder evidenced by a series of photograph-like images of Renfro’s brain that showed points of trauma. He pointed out degeneration of white matter responsible for transmitting information from the front to the back of the brain, and shrunken portions on one side of the brain that were not symmetrical with their mirror images on the other side.

Physical evidence consistent with the findings includes Renfro’s choppy speech patterns and mannerisms, as well as an inability to make cognitive connections and a lack of social skills, Adler said.

Defense attorney Jay Logsdon asked if the images were obtained through a discredited method, one that has “been attacked as junk science?”

The method, called QEEG (quantitative electroencephalogram), uses patterns of electrical activity inside the brain’s cortex to determine impairment; it was attacked in a 1997 article. The article’s criticism still stands today, Adler said.

Throughout the morning and into the afternoon, Adler reiterated findings, linking them to the defendant’s actions, and dovetailing them into other test results, psychological and cognitive, that have been conducted while Renfro has been incarcerated in the Kootenai County Jail.

The article is here.

Saturday, October 28, 2017

Post-conventional moral reasoning is associated with increased ventral striatal activity at rest and during task

Zhuo Fang, Wi Hoon Jung, Marc Korczykowski, Lijuan Luo, and others
Scientific Reports 7, Article number: 7105 (2017)

Abstract

People vary considerably in moral reasoning. According to Kohlberg’s theory, individuals who reach the highest level of post-conventional moral reasoning judge moral issues based on deeper principles and shared ideals rather than self-interest or adherence to laws and rules. Recent research has suggested the involvement of the brain’s frontostriatal reward system in moral judgments and prosocial behaviors. However, it remains unknown whether moral reasoning level is associated with differences in reward system function. Here, we combined arterial spin labeling perfusion and blood oxygen level-dependent functional magnetic resonance imaging and measured frontostriatal reward system activity both at rest and during a sequential risky decision making task in a sample of 64 participants at different levels of moral reasoning. Compared to individuals at the pre-conventional and conventional level of moral reasoning, post-conventional individuals showed increased resting cerebral blood flow in the ventral striatum and ventromedial prefrontal cortex. Cerebral blood flow in these brain regions correlated with the degree of post-conventional thinking across groups. Post-conventional individuals also showed greater task-induced activation in the ventral striatum during risky decision making. These findings suggest that high-level post-conventional moral reasoning is associated with increased activity in the brain’s frontostriatal system, regardless of task-dependent or task-independent states.

The article is here.

Thursday, August 24, 2017

Brain Augmentation: How Scientists are Working to Create Cyborg Humans with Super Intelligence

Hannah Osborne
Newsweek
Originally published June 14, 2017

For most people, the idea of brain augmentation remains in the realms of science fiction. However, for scientists across the globe, it is fast becoming reality—with the possibility of humans with “super-intelligence” edging ever closer.

In laboratory experiments on rats, researchers have already been able to transfer memories from one brain to another. Future projects include the development of telepathic communication and the creation of “cyborgs,” where humans have advanced abilities thanks to technological interventions.

Scientists Mikhail Lebedev, Ioan Opris and Manuel Casanova have now published a comprehensive collection of research into brain augmentation, and their efforts have won a major European science research prize—the Frontiers Spotlight Award. This $100,000 prize is for the winners to set up a conference that highlights emerging research in their field.

Project leader Lebedev, a senior researcher at Duke University, North Carolina, said the reality of brain augmentation—where intelligence is enhanced by brain implants—will be part of everyday life by 2030, and that “people will have to deal with the reality of this new paradigm.”

Their collection, Augmentation of brain function: facts, fiction and controversy, was published by Frontiers and includes almost 150 research articles by more than 600 contributing authors. It focuses on current brain augmentation, future proposals and the ethical and legal implications the topic raises.

The article is here.

Friday, August 11, 2017

The real problem (of consciousness)

Anil K Seth
Aeon.com
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.
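The update loop Seth describes (predictions flow down, only errors flow up, and predictions are revised to minimise the error) can be caricatured in a few lines. This is a one-level toy illustration, not the hierarchical, precision-weighted model of real predictive-processing accounts:

```python
def predictive_update(prediction, sensory_input, learning_rate=0.1, steps=50):
    """One-level predictive coding sketch: the internal prediction is
    repeatedly nudged by the prediction error until it settles near
    the sensory signal. The learning rate stands in for the
    'precision' weighting of real models."""
    for _ in range(steps):
        error = sensory_input - prediction   # only the error flows 'up'
        prediction += learning_rate * error  # top-down prediction revised
    return prediction
```

On this picture, what the brain "perceives" is the converged prediction, not the raw input, which is why Seth calls perception a controlled hallucination.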

The article is here.

Saturday, July 15, 2017

How do self-interest and other-need interact in the brain to determine altruistic behavior?

Jie Hu, Yue Li, Yunlu Yin, Philip R. Blue, Hongbo Yu, Xiaolin Zhou
NeuroImage
Volume 157, 15 August 2017, Pages 598–611

Abstract

Altruistic behavior, i.e., promoting the welfare of others at a cost to oneself, is subserved by the integration of various social, affective, and economic factors represented in extensive brain regions. However, it is unclear how different regions interact to process/integrate information regarding the helper's interest and recipient's need when deciding whether to behave altruistically. Here we combined an interactive game with functional Magnetic Resonance Imaging (fMRI) and transcranial direct current stimulation (tDCS) to characterize the neural network underlying the processing/integration of self-interest and other-need. At the behavioral level, high self-risk decreased helping behavior and high other-need increased helping behavior. At the neural level, activity in medial prefrontal cortex (MPFC) and right dorsolateral prefrontal cortex (rDLPFC) was positively associated with self-risk levels, and activity in right inferior parietal lobe (rIPL) and rDLPFC was negatively associated with other-need levels. Dynamic causal modeling further suggested that both MPFC and rIPL were extrinsically connected to rDLPFC; high self-risk enhanced the effective connectivity from MPFC to rDLPFC, and the modulatory effect of other-need on the connectivity from rIPL to rDLPFC positively correlated with the modulatory effect of other-need on individuals’ helping rate. Two tDCS experiments provided causal evidence that rDLPFC affects both self-interest and other-need concerns, and rIPL selectively affects the other-need concerns. These findings suggest a crucial role of the MPFC-IPL-DLPFC network during altruistic decision-making, with rDLPFC as a central node for integrating and modulating motives regarding self-interest and other-need.

The article is here.

Thursday, July 13, 2017

Professors lead call for ethical framework for new 'mind control' technologies

Medical Xpress
Originally published July 6, 2017

Here is an excerpt:

As advances in molecular biology and chemical engineering are increasing the precision of pharmaceuticals, even more spatially-targeted technologies are emerging. New noninvasive treatments send electrical currents or magnetic waves through the scalp, altering the ability of neurons in a targeted region to fire. Surgical interventions are even more precise; they include implanted electrodes that are designed to quell seizures before they spread, or stimulate the recall of memories after a traumatic brain injury.

Research into the brain's "wiring"—how neurons are physically connected in networks that span disparate parts of the brain—and how this wiring relates to changing mental states has enabled principles from control theory to be applied to neuroscience. For example, a recent study by Bassett and colleagues shows how changes in brain wiring from childhood through adolescence leads to greater executive function, or the ability to consciously control one's thoughts and attention.

While insights from network science and control theory may support new treatments for conditions like obsessive compulsive disorder and traumatic brain injury, the researchers argue that clinicians and bioethicists must be involved in the earliest stages of their development. As the positive effects of treatments become more profound, so do their potential side effects.

"New methods of controlling mental states will provide greater precision in treatments," Sinnott-Armstrong said, "and we thus need to think hard about the ensuing ethical issues regarding autonomy, privacy, equality and enhancement."

The article is here.