Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Brains.

Monday, March 29, 2021

The problem with prediction

Joseph Fridman
aeon.com
Originally published January 25, 2021

Here is an excerpt:

Today, many neuroscientists exploring the predictive brain deploy contemporary economics as a similar sort of explanatory heuristic. Scientists have come a long way in understanding how ‘spending metabolic money to build complex brains pays dividends in the search for adaptive success’, remarks the philosopher Andy Clark, in a notable review of the predictive brain. The idea of the predictive brain makes sense because it is profitable, metabolically speaking. Similarly, the psychologist Lisa Feldman Barrett describes the primary role of the predictive brain as managing a ‘body budget’. In this view, she says, ‘your brain is kind of like the financial sector of a company’, predictively allocating resources, spending energy, speculating, and seeking returns on its investments. For Barrett and her colleagues, stress is like a ‘deficit’ or ‘withdrawal’ from the body budget, while depression is bankruptcy. In Blackmore’s day, the brain was made up of sentries and soldiers, whose collective melancholy became the sadness of the human being they inhabited. Today, instead of soldiers, we imagine the brain as composed of predictive statisticians, whose errors become our neuroses. As the neuroscientist Karl Friston said: ‘[I]f the brain is an inference machine, an organ of statistics, then when it goes wrong, it’ll make the same sorts of mistakes a statistician will make.’

The strength of this association between predictive economics and brain sciences matters, because – if we aren’t careful – it can encourage us to reduce our fellow humans to mere pieces of machinery. Our brains were never computer processors, as useful as it might have been to imagine them that way every now and then. Nor are they literally prediction engines now and, should it come to pass, they will not be quantum computers. Our bodies aren’t empires that shuttle around sentrymen, nor are they corporations that need to make good on their investments. We aren’t fundamentally consumers to be tricked, enemies to be tracked, or subjects to be predicted and controlled. Whether the arena be scientific research or corporate intelligence, it becomes all too easy for us to slip into adversarial and exploitative framings of the human; as Galison wrote, ‘the associations of cybernetics (and the cyborg) with weapons, oppositional tactics, and the black-box conception of human nature do not so simply melt away.’

Friday, November 20, 2020

When Did We Become Fully Human? What Fossils and DNA Tell Us About the Evolution of Modern Intelligence

Nick Longrich
singularityhub.com
Originally posted October 18, 2020

Here are two excerpts:

Because the fossil record is so patchy, fossils provide only minimum dates. Human DNA suggests even earlier origins for modernity. Comparing genetic differences between DNA in modern people and ancient Africans, it’s estimated that our ancestors lived 260,000 to 350,000 years ago. All living humans descend from those people, suggesting that we inherited the fundamental commonalities of our species, our humanity, from them.

All their descendants—Bantu, Berber, Aztec, Aboriginal, Tamil, San, Han, Maori, Inuit, Irish—share certain peculiar behaviors absent in other great apes. All human cultures form long-term pair bonds between men and women to care for children. We sing and dance. We make art. We preen our hair, adorn our bodies with ornaments, tattoos and makeup.

We craft shelters. We wield fire and complex tools. We form large, multigenerational social groups with dozens to thousands of people. We cooperate to wage war and help each other. We teach, tell stories, trade. We have morals, laws. We contemplate the stars, our place in the cosmos, life’s meaning, what follows death.

(cut)

First, we journeyed out of Africa, occupying more of the planet. There were then simply more humans to invent, increasing the odds of a prehistoric Steve Jobs or Leonardo da Vinci. We also faced new environments in the Middle East, the Arctic, India, Indonesia, with unique climates, foods and dangers, including other human species. Survival demanded innovation.

Many of these new lands were far more habitable than the Kalahari or the Congo. Climates were milder, but Homo sapiens also left behind African diseases and parasites. That let tribes grow larger, and larger tribes meant more heads to innovate and remember ideas, more manpower, and better ability to specialize. Population drove innovation.

Thursday, November 1, 2018

How much control do you really have over your actions?

Michael Price
Sciencemag.org
Originally posted October 1, 2018

Here is an excerpt:

Philosophers have wrestled with questions of free will—that is, whether we are active drivers or passive observers of our decisions—for millennia. Neuroscientists tap-dance around it, asking instead why most of us feel like we have free will. They do this by looking at rare cases in which people seem to have lost it.

Patients with both alien limb syndrome and akinetic mutism have lesions in their brains, but there doesn’t seem to be a consistent pattern. So Darby and his colleagues turned to a relatively new technique known as lesion network mapping.

They combed the literature for brain imaging studies of both types of patients and mapped out all of their reported brain lesions. Then they plotted those lesions onto maps of brain regions that reliably activate together at the same time, better known as brain networks. Although the individual lesions in patients with the rare movement disorders appeared to occur without rhyme or reason, the team found, those seemingly arbitrary locations fell within distinct brain networks.

The researchers compared their results with those from people who lost some voluntary movement after receiving temporary brain stimulation, which uses low-voltage electrodes or targeted magnetic fields to temporarily “knock offline” brain regions.

The networks that caused loss of voluntary movement or agency in those studies matched Darby and colleagues’ new lesion networks. This suggests these networks are involved in voluntary movement and the perception that we’re in control of, and responsible for, our actions, the researchers report today in the Proceedings of the National Academy of Sciences.
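The overlap step described in this excerpt can be illustrated with a minimal sketch. The Python fragment below is an illustration only, not the researchers' actual pipeline: it assumes lesions and networks have already been coregistered as binary NumPy masks in a shared reference space, and every function and variable name here is hypothetical.

# Minimal sketch of the overlap step in lesion network mapping.
# Assumes lesions and networks are binary 3D masks in the same
# reference space; all names are hypothetical.
import numpy as np

def networks_hit_by_lesion(lesion_mask, network_masks, min_overlap=0.5):
    """Return the names of networks containing at least `min_overlap`
    of the lesion's voxels."""
    lesion_voxels = lesion_mask.sum()
    hits = []
    for name, net_mask in network_masks.items():
        overlap = np.logical_and(lesion_mask, net_mask).sum()
        if lesion_voxels and overlap / lesion_voxels >= min_overlap:
            hits.append(name)
    return hits

def tally_networks(lesion_masks, network_masks):
    """Count, across all reported lesions, how often each network is hit."""
    counts = {name: 0 for name in network_masks}
    for lesion in lesion_masks:
        for name in networks_hit_by_lesion(lesion, network_masks):
            counts[name] += 1
    return counts

If seemingly scattered lesions all produce high counts for the same few networks, that is the kind of convergence the excerpt describes, with the caveat that the published method involves normative functional connectivity data rather than the simple mask overlap shown here.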

The info is here.

Tuesday, March 27, 2018

Neuroblame?

Stephen Rainey
Practical Ethics
Originally posted February 15, 2018

Here is an excerpt:

Rather than bio-mimetic prostheses, replacement limbs and so on, we can predict that technologies superior to the human body will be developed. Controlled by the brains of users, these enhancements will amount to extensions of the human body, and allow greater projection of human will and intentions in the world. We might imagine a cohort of brain controlled robots carrying out mundane tasks around the home, or buying groceries and so forth, all while the user gets on with something altogether more edifying (or does nothing at all but trigger and control their bots). Maybe a highly skilled, and well-practised, user could control legions of such bots, each carrying out separate tasks.

Before getting too carried away with this line of thought, it’s probably worth getting to the point. The issue worth looking at concerns what happens when things go wrong. It’s one thing to imagine someone sending out a neuro-controlled assassin-bot to kill a rival. Regardless of the unusual route taken, this would be a pretty simple case of causing harm. It would be akin to someone simply assassinating their rival with their own hands. However, it’s another thing to consider how sloppily framing the goal for a bot, such that it ends up causing harm, ought to be parsed.

The blog post is here.