Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Brain.

Friday, April 21, 2017

Facebook plans ethics board to monitor its brain-computer interface work

Josh Constine
TechCrunch
Originally posted April 19, 2017

Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface it previewed today at its F8 conference. Facebook’s R&D department Building 8’s head Regina Dugan tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”

Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.

The article is here.

Friday, March 31, 2017

Dishonesty gets easier on the brain the more you do it

Neil Garrett
Aeon
Originally published March 7, 2017

Here are two excerpts:

These two ideas – the role of arousal on our willingness to cheat, and neural adaptation – are connected because the brain does not just adapt to things such as sounds and smells. The brain also adapts to emotions. For example, when presented with aversive pictures (eg, threatening faces) or receiving something unpleasant (eg, an electric shock), the brain will initially generate strong responses in regions associated with emotional processing. But when these experiences are repeated over time, the emotional responses diminish.

(cut)

There have also been a number of behavioural interventions proposed to curb unethical behaviour. These include using cues that emphasise morality and encouraging self-engagement. We don’t currently know the underlying neural mechanisms that can account for the positive behavioural changes these interventions drive. But an intriguing possibility is that they operate in part by shifting up our emotional reaction to situations in which dishonesty is an option, in turn helping us to resist the temptation to which we have become less resistant over time.

The article is here.

Wednesday, March 15, 2017

Will the 'hard problem' of consciousness ever be solved?

David Papineau
The Question
Originally published February 21, 2017

Here is an excerpt:

The problem, if there is one, is that we find the reduction of consciousness to brain processes very hard to believe. The flaw lies in us, not in the neuroscientific account of consciousness. Despite all the scientific evidence, we can’t free ourselves of the old-fashioned dualist idea that conscious states inhabit some extra dualist realm outside the physical brain.

Just consider how the hard problem is normally posed. Why do brain states give rise to conscious feelings? That is already dualist talk. If one thing gives rise to another, they must be separate. Fire gives rise to smoke, but H2O doesn’t give rise to water. So the very terminology presupposes that the conscious mind is different from the physical brain—which of course then makes us wonder why the brain generates this mysterious extra thing. On the other hand, if only we could properly accept that the mind just is the brain, then we would be no more inclined to ask why ‘they’ go together than we ask why H2O is water.

The article is here.

The same page also includes a five-minute video by Massimo Pigliucci arguing that the hard problem is a category mistake.

Wednesday, March 8, 2017

A Computer to Rival the Brain

Kelly Clancy  
The New Yorker
February 15, 2017

Here is an excerpt:

Computers are often likened to brains, but they work in a manner foreign to biology. The computing architecture still in use today was first described by the mathematician John von Neumann and his colleagues in 1945. A modern laptop is conceptually identical to the punch-card behemoths of the past, although engineers have traded paper for a purely electric stream of on-off signals. In a von Neumann machine, all data-crunching happens in the central processing unit (C.P.U.). Program instructions, then data, flow from the computer’s memory to its C.P.U. in an orderly series of zeroes and ones, much like a stack of punch cards shuffling through. Although multicore computers allow some processing to occur in parallel, their efficacy is limited: software engineers must painstakingly choreograph these streams of information to avoid catastrophic system errors. In the brain, by contrast, data run simultaneously through billions of parallel processors—that is, our neurons. Like computers, they communicate in a binary language of electrical spikes. The difference is that each neuron is pre-programmed, whether through genetic patterning or learned associations, to share its computations directly with the proper targets. Processing unfolds organically, without the need for a C.P.U.
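The contrast the excerpt draws can be sketched in a few lines of toy Python (the three-instruction program and the neuron wiring are invented for illustration): a von Neumann machine streams instructions one at a time through a single processor, while each "neuron" computes its output directly from its pre-wired inputs, with no central unit in the loop.

```python
# Toy von Neumann machine: instructions flow one at a time through a single CPU.
memory = [("LOAD", 2), ("ADD", 3), ("STORE", 0)]
accumulator = 0
for op, arg in memory:               # an orderly, serial stream
    if op == "LOAD":
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "STORE":
        result = accumulator

# Toy "neural" computation: every unit updates at once, each pre-wired
# to send its output straight to its targets -- no CPU in the loop.
weights = {("a", "c"): 1.0, ("b", "c"): -0.5}    # hypothetical wiring
activations = {"a": 1.0, "b": 1.0, "c": 0.0}
new = {n: sum(w * activations[src]
              for (src, dst), w in weights.items() if dst == n)
       for n in activations}          # all units computed together
```

The serial loop must visit each instruction in turn, while the dictionary comprehension stands in for billions of neurons all firing at the same moment.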

The article is here.

Note: Consciousness is a product of evolution. Artificial intelligence is a product of evolved brains.

Tuesday, December 27, 2016

Is Addiction a Brain Disease?

Kent C. Berridge
Neuroethics (2016), pp. 1-5.
doi:10.1007/s12152-016-9286-3

Abstract

Where does normal brain or psychological function end, and pathology begin? The line can be hard to discern, making disease sometimes a tricky word. In addiction, normal ‘wanting’ processes become distorted and excessive, according to the incentive-sensitization theory. Excessive ‘wanting’ results from drug-induced neural sensitization changes in underlying brain mesolimbic systems of incentive. ‘Brain disease’ was never used by the theory, but neural sensitization changes are arguably extreme enough and problematic enough to be called pathological. This implies that ‘brain disease’ can be a legitimate description of addiction, though caveats are needed to acknowledge roles for choice and active agency by the addict. Finally, arguments over ‘brain disease’ should be put behind us. Our real challenge is to understand addiction and devise better ways to help. Arguments over descriptive words only distract from that challenge.

The article is here.

Sunday, July 31, 2016

Neural mechanisms underlying the impact of daylong cognitive work on economic decisions

Bastien Blain, Guillaume Hollard, and Mathias Pessiglione
PNAS 2016 113 (25) 6967-6972

Abstract

The ability to exert self-control is key to social insertion and professional success. An influential literature in psychology has developed the theory that self-control relies on a limited common resource, so that fatigue effects might carry over from one task to the next. However, the biological nature of the putative limited resource and the existence of carry-over effects have been matters of considerable controversy. Here, we targeted the activity of the lateral prefrontal cortex (LPFC) as a common substrate for cognitive control, and we prolonged the time scale of fatigue induction by an order of magnitude. Participants performed executive control tasks known to recruit the LPFC (working memory and task-switching) over more than 6 h (an approximate workday). Fatigue effects were probed regularly by measuring impulsivity in intertemporal choices, i.e., the propensity to favor immediate rewards, which has been found to increase under LPFC inhibition. Behavioral data showed that choice impulsivity increased in a group of participants who performed hard versions of executive tasks but not in control groups who performed easy versions or enjoyed some leisure time. Functional MRI data acquired at the start, middle, and end of the day confirmed that enhancement of choice impulsivity was related to a specific decrease in the activity of an LPFC region (in the left middle frontal gyrus) that was recruited by both executive and choice tasks. Our findings demonstrate a concept of focused neural fatigue that might be naturally induced in real-life situations and have important repercussions on economic decisions.

Significance

In evolved species, resisting the temptation of immediate rewards is a critical ability for the achievement of long-term goals. This self-control ability was found to rely on the lateral prefrontal cortex (LPFC), which also is involved in executive control processes such as working memory or task switching. Here we show that self-control capacity can be altered in healthy humans at the time scale of a workday, by performing difficult executive control tasks. This fatigue effect manifested in choice impulsivity was linked to reduced excitability of the LPFC following its intensive utilization over the day. Our findings might have implications for designing management strategies that would prevent daylong cognitive work from biasing economic decisions.

The research is here.


Wednesday, June 22, 2016

A New Theory Explains How Consciousness Evolved

Michael Graziano
The Atlantic
Originally posted June 6, 2016

Here is an excerpt:

The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.

Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
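The "election" Graziano describes is, roughly, a winner-take-all computation. This toy sketch (the signal values and inhibition constant are made up, and it is not any published neural model) shows the idea: mutual inhibition pulls every signal down, and only the strongest few rise above the noise.

```python
def selective_enhancement(signals, k=2, inhibition=0.3):
    """Crude winner-take-all: each neuron's signal is suppressed in
    proportion to the total activity of its competitors; only the
    top-k survivors keep a nonzero signal."""
    total = sum(signals.values())
    inhibited = {name: s - inhibition * (total - s)
                 for name, s in signals.items()}
    winners = sorted(inhibited, key=inhibited.get, reverse=True)[:k]
    return {name: (inhibited[name] if name in winners else 0.0)
            for name in inhibited}

# Five competing "candidate" neurons shouting at once (made-up values):
raw = {"n1": 0.9, "n2": 0.2, "n3": 0.85, "n4": 0.1, "n5": 0.3}
enhanced = selective_enhancement(raw)
# Only the two loudest signals survive; the rest are silenced.
```

Only n1 and n3 come out of the competition with a signal at all, which is the point: without some such suppression step, every input would reach behavior at once.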

The article is here.

Saturday, June 4, 2016

Scientists show how we start stereotyping the moment we see a face

Sarah Kaplan
The Independent
Originally posted May 2, 2016

Scientists have known for a while that stereotypes warp our perceptions of things. Implicit biases — those unconscious assumptions that worm their way into our brains, without our full awareness and sometimes against our better judgment — can influence grading choices from teachers, split-second decisions by police officers and outcomes in online dating.

We can't even see the world without filtering it through the lens of our assumptions, scientists say. In a study published Monday in the journal Nature Neuroscience, psychologists report that the neurons that respond to things such as sex, race and emotion are linked by stereotypes, distorting the way we perceive people's faces before that visual information even reaches our conscious brains.

The article is here.

Sunday, May 8, 2016

Neuroscience is changing the debate over what role age should play in the courts

By Tim Requarth
Newsweek
Originally posted April 18, 2016

Here is an excerpt:

The Supreme Court has increasingly called upon new findings in neuroscience and psychology in a series of rulings over the past decade (Roper v. Simmons, Graham v. Florida, Miller v. Alabama and Montgomery v. Louisiana) that prohibited harsh punishments—such as the death penalty and mandatory life without parole—for offenders under 18. Due to their immaturity, the argument goes, they are less culpable and so deserve less punishment than those 18 or older. In addition, because their wrongdoing is often the product of immaturity, younger criminals may have a greater potential for reform. Now people are questioning whether the age of 18 has any scientific meaning.

“People are not magically different on their 18th birthday,” says Elizabeth Scott, a professor of law at Columbia University whose work was cited in the seminal Roper case. “Their brains are still maturing, and the criminal justice system should find a way to take that into account.”

The article is here.

Friday, March 25, 2016

Probing the relationship between brain activity and moral judgments of children

ScienceCodex News
Originally published March 9, 2016

Here is an excerpt:

To determine whether the early automatic or later controlled neural activity predicted actual moral behavior, the researchers then assessed the children's generosity based on how many stickers they were willing to share with an anonymous child. They then correlated the children's generosity with individual differences in brain activity generated during helping versus harming scenes. Only differences in brain signals associated with deliberate neural processing predicted the children's sharing behavior, suggesting that moral behavior in children depends more on controlled reflection than on an immediate emotional response.

The article is here.

Friday, January 15, 2016

Hive consciousness

By Peter Watts
Aeon
Originally published May 27, 2015

Here is an excerpt:

What are the implications of a technology that seems to be converging on the sharing of consciousness?

It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. The neuroscientist Giulio Tononi at the University of Wisconsin-Madison claims that consciousness reflects the integration of distributed brain functions. A model developed by Ezequiel Morsella, of San Francisco State University, describes it as a mediator between conflicting motor commands. The panpsychics regard it as a basic property of matter – like charge, or mass – and believe that our brains don’t generate the stuff so much as filter it from the ether like some kind of organic spirit-catchers. Neuroscience superstar V S Ramachandran (University of California in San Diego) blames everything on mirror neurons; Princeton’s Michael Graziano – right here in Aeon – describes it as an experiential map.

I think they’re all running a game on us. Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention).

But why would any of that be self-aware?

If physics is right – if everything ultimately comes down to matter, energy and numbers – then any sufficiently accurate copy of a thing will manifest the characteristics of that thing. Sapience should therefore emerge from any physical structure that replicates the relevant properties of the brain.

The article is here.

Tuesday, August 25, 2015

The Lion, the Myth, and the Morality Tale

By Brandon Ferdig
The American Thinker
Originally posted August 8, 2015

Here is an excerpt:

There’s nothing inherently wrong with myth and symbolism. They are emotional-mental tools used to categorize our world, to seek its improvement, to add meaning, to sink our emotional teeth into life and cultivate richness around our experience. Epic is awesome.

It was awesome for those who cried when seeing Barack Obama elected because of the interpreted representative step forward and victory of our nation. It’s awesome to feel moved by the sight of an animal that represents and elicits majesty. And it’s awesome to find other like-minded folks and bond in celebration or fight for a better world.

But there’s a risk.

To the degree that we subscribe to a particular ideology is the potential for us to color the events of our world with its tint. Suddenly we have something invested into these events -- our world view, our ego -- and exaggerated responses result. We’ll fight to defend our ideology, details and facts be damned. Get with like-minded folks, and you can create a mob.

The entire article is here.

Tuesday, August 18, 2015

What Emotions Are (and Aren’t)

By Lisa Feldman Barrett
The New York Times
Originally published July 31, 2015

Here is an excerpt:

Brain regions like the amygdala are certainly important to emotion, but they are neither necessary nor sufficient for it. In general, the workings of the brain are not one-to-one, whereby a given region has a distinct psychological purpose. Instead, a single brain area like the amygdala participates in many different mental events, and many different brain areas are capable of producing the same outcome. Emotions like fear and anger, my lab has found, are constructed by multipurpose brain networks that work together.

If emotions are not distinct neural entities, perhaps they have a distinct bodily pattern — heart rate, respiration, perspiration, temperature and so on?

Again, the answer is no.

The entire article is here.

Sunday, August 2, 2015

Is Consciousness an Engineering Problem?

We could build an artificial brain that believes itself to be conscious. Does that mean we have solved the hard problem?

By Michael Graziano
Aeon Magazine
Originally published July 10, 2015

Here is an excerpt:

As long as scholars think of consciousness as a magic essence floating inside the brain, it won’t be very interesting to engineers. But if it’s a crucial set of information, a kind of map that allows the brain to function correctly, then engineers may want to know about it. And that brings us back to artificial intelligence. Gone are the days of waiting for computers to get so complicated that they spontaneously become conscious. And gone are the days of dismissing consciousness as an airy-fairy essence that would bring no obvious practical benefit to a computer anyway. Suddenly it becomes an incredibly useful tool for the machine.

The entire article is here.

Sunday, June 21, 2015

How the brain makes decisions

Science Simplified
Originally published on May 25, 2015

Here are two excerpts:

The results of the study drew three major conclusions. First, that human decision-making can perform just as well as current sophisticated computer models under non-Markovian conditions, such as the presence of a switch-state. This is a significant finding in our current efforts to model the human brain and develop artificial intelligence systems.

Secondly, that delayed feedback significantly impairs human decision-making and learning, even though it does not impact the performance of computer models, which have perfect memory. In the second experiment, it took human participants ten times more attempts to correctly recall and assign arrows to icons. Feedback is a crucial element of decision-making and learning. We set a goal, make a decision about how to achieve it, act accordingly, and then find out whether or not our goal was met. In some cases, e.g. learning to ride a bike, feedback on every decision we make for balancing, pedaling, braking etc. is instant: either we stay up and going, or we fall down. But in many other cases, such as playing backgammon, feedback is significantly delayed; it can take a while to find out if each move has led us to victory or not.
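The credit-assignment problem behind that finding can be made concrete with a toy calculation (invented for illustration, not taken from the study): suppose a learner naively credits each outcome to its most recent choice. With immediate feedback that rule is always right; once feedback lags, the credit lands on the wrong action about half the time.

```python
import random

def misattribution_rate(delay, trials=1000, seed=1):
    """Fraction of outcomes credited to the wrong action when a naive
    learner credits each outcome to its most recent choice.
    (A toy illustration, not the study's actual model.)"""
    rng = random.Random(seed)
    actions = [rng.choice("AB") for _ in range(trials)]
    wrong = 0
    for t in range(delay, trials):
        earned_by = actions[t - delay]   # the action that produced this outcome
        credited = actions[t]            # the action a naive rule would credit
        wrong += earned_by != credited
    return wrong / (trials - delay)

instant = misattribution_rate(delay=0)   # feedback arrives immediately
lagged = misattribution_rate(delay=10)   # feedback arrives ten choices late
```

With instant feedback the misattribution rate is zero; with a ten-step lag it hovers near one half, which is one simple reason delayed feedback makes learning so much harder for a memory-limited learner.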

The entire article is here.

Source Material:

Clarke AM, Friedrich J, Tartaglia EM, Marchesotti S, Senn W, Herzog MH. Human and Machine Learning in Non-Markovian Decision Making. PLOS ONE, April 21, 2015.

Wednesday, April 15, 2015

The disremembered

Dementia undermines all of our philosophical assumptions about the coherence of the self. But that might be a good thing

By Charles Leadbeater
Aeon
Originally published March 26, 2015

Here are two excerpts:

The memory-based account of identity is powerful, deeply rooted and dangerously partial. It will direct us to potential memory cures – a mixture of implants and drugs – that will almost certainly disappoint as much as they excite. Memory is not created in a little box in the brain, but by diffuse and dispersed circuits of neurons firing in concert. Someone with dementia would need more than an implant: they would need their brain refreshed and rewired. And still the nagging question would remain: are they the same person?

(cut)

The notion of an embedded identity takes us into much more fertile territory when it comes to considering meaningful care for dementia sufferers. It implies that the main challenge is to work imaginatively and empathetically to find common ground, creating conversational topics and cues that help make connections with people, despite their failing memory. As the British psychologist Oliver James explains in Contented Dementia (2008), this requires more skill and persistence than most conversations demand, precisely because its pre-suppositions cannot be taken for granted. My 85-year-old mother-in-law, for example, cannot always remember that she has a preserving pan, but that does not stop her enjoying making (and, even more, talking about making) marmalade.

The entire article is here.

Sunday, March 1, 2015

Online processing of moral transgressions: ERP evidence for spontaneous evaluation

Hartmut Leuthold, Angelika Kunkel, Ian G. Mackenzie and Ruth Filik
Soc Cogn Affect Neurosci (2015)
doi: 10.1093/scan/nsu151

Abstract

Experimental studies using fictional moral dilemmas indicate that both automatic emotional processes and controlled cognitive processes contribute to moral judgments. However, not much is known about how people process socio-normative violations that are more common to their everyday life nor the time-course of these processes. Thus, we recorded participants’ electrical brain activity while they were reading vignettes that either contained morally acceptable vs unacceptable information or text materials that contained information which was either consistent or inconsistent with their general world knowledge. A first event-related brain potential (ERP) positivity peaking at ∼200 ms after critical word onset (P200) was larger when this word involved a socio-normative or knowledge-based violation. Subsequently, knowledge-inconsistent words triggered a larger centroparietal ERP negativity at ∼320 ms (N400), indicating an influence on meaning construction. In contrast, a larger ERP positivity (larger late positivity), which also started at ∼320 ms after critical word onset, was elicited by morally unacceptable compared with acceptable words. We take this ERP positivity to reflect an implicit evaluative (good–bad) categorization process that is engaged during the online processing of moral transgressions.

The article is here.

Wednesday, January 14, 2015

Are we living in the age of the brain?

Understanding the brain won’t be done simply by mapping it down to the last synapse

By Philip Ball
Prospect Magazine
Originally published December 22 2014

Here is an excerpt:

Resolution of conflicting mental signals is certainly not ignored by cognitive scientists or psychologists, but there seems often to be a disjuncture between the neuroscientific model of the brain as a problem-solving network and the actual experience of the brain as a medley, even a bedlam, of imperatives and impulses. Sigmund Freud may have been wrong in seeking to present his psychoanalytic theory as a kind of science, but he was surely right to present the mind in terms of conflict rather than unity. One thing we do know about the brain is that it is not just a very large network of neurons, but is both very diverse (there are many different types of neuron, as well as non-neuronal cells called glia) and highly modular (different parts perform different, specialized roles). Mapping this architecture is an important goal, and there are some deeply impressive techniques for doing that. But the risk is that this is like trying to understand human culture using Google Earth—or rather, cultures, for there is just a single geography but plenty of conflicts, compromises and confusion going on within it.

None of this would be disputed by neuroscientists. But it perhaps highlights the distinctions between an understanding of the brain and an understanding of the mind. The implication seems to be that it is hard to develop one while you’re working on the other.

The entire article is here.

Thursday, January 1, 2015

10 Ways That Brain Myths Are Harming Us

By Christian Jarrett
Wired
Originally posted December 12, 2014

Here are two excerpts:

1). Many school teachers around the world believe neuromyths, such as the idea that children are left-brained or right-brained, or that we use just 10 per cent of our brains. This is worrying. For example, if a teacher decides a child is “left-brained” and therefore not inclined to creativity, they will likely divert that child away from beneficial creative activities.

(cut)

6). Brain training companies frequently make unfounded claims about the benefits of their products. One myth here is that playing their games can revolutionize your brain health, more than say socializing or reading. In October, dozens of neuroscientists wrote an open letter warning that the “exaggerated and misleading claims [of the brain training industry] exploit the anxiety of older adults about impending cognitive decline.”

The entire article is here.

Friday, October 24, 2014

Can Our Brains Handle the Information Age?

An Interview with Daniel Levitin
By Bret S. Stetka
Medscape
Originally posted September 24, 2014

In his new book, The Organized Mind, best-selling author and neuroscientist Daniel Levitin, PhD, discusses our brain's ability—or lack thereof—to process the dizzying flow of information brought on us by the digital age. Dr Levitin also suggests numerous ways of organizing mass information to make it more manageable. Medscape recently spoke with Dr Levitin about the neuroscience of information processing as well as approaches potentially useful to overworked clinicians.

The Fear of Information

Medscape: Your new book discusses how throughout history humans have been suspicious of increased access to information, from the printing press back to the first Sumerian writings. But I think most would agree that these were positive advancements. Do you think the current digital age weariness expressed by many is more of the same and that today's rapid technological progression will end up being a positive development for humanity? Or has the volume of data out there just gotten too big for the human brain to handle?

Dr Levitin: I have two minds about this. On one hand, there is this "same as it ever was" kind of complaint cycle. Seneca complained at the time of the ancient Greeks about the invention of writing—that it was going to weaken men's minds because they would no longer engage in thoughtful conversation. You couldn't interrogate the person who was telling you something, meaning that lies could be promulgated more easily and passed from generation to generation.

(cut)

If we look back at our evolutionary history, the amount of information that existed in the world just a few thousand years ago was really just a small percentage of what exists now. By some estimates, the amount of scientific and medical information produced in the last 25 years is equal to all of the information in all of human history up to that point.

The human brain can really only attend to a few things at once, so I think we are reaching a point where we have to figure out how to filter information so that we can use it more intelligently and not be distracted by irrelevant information. Studies show that people who are given more information in certain situations tend to make poorer decisions because they become distracted or overwhelmed by the irrelevant information.

The entire interview is here.