Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Wednesday, January 31, 2018

The Fear Factor

Matthieu Ricard
Medium.com
Originally published January 5, 2018

Here is an excerpt:

Research by Abigail Marsh and other neuroscientists reveals that psychopaths’ brains are marked by dysfunction in a structure called the amygdala, which is responsible for essential social and emotional functions. In psychopaths, the amygdala is not only under-responsive to images of people experiencing fear, but is also up to 20% smaller than average.

Marsh also wondered about people who are at the other end of the spectrum, extreme altruists: people filled with compassion, people who volunteer, for example, to donate one of their kidneys to a stranger. The answer is remarkable: extreme altruists surpass everyone in detecting expressions of fear in others and, while they do experience fear themselves, that does not stop them from acting in ways that are considered very courageous.

Since her initial discovery, several studies have confirmed that the ability to label other people’s fear predicts altruism better than gender, mood, or how compassionate people claim to be. In addition, Abigail Marsh found that, among extreme altruists, the amygdala is physically larger than the average by about 8%. The significance of this finding held up even after accounting for something rather unexpected: the altruists’ brains are in general larger than those of the average person.

The information is here.

I Believe In Intelligent Design....For Robots

Matt Simon
Wired Magazine
Originally published January 3, 2018

Here is an excerpt:

Roboticists are honing their robots by essentially mimicking natural selection. Keep what works, throw out what doesn’t, to optimally adapt a robot to a particular job. “If we want to scrap something totally, we can do that,” says Nick Gravish, who studies the intersection of robotics and biology at UC San Diego. “Or we can take the best pieces from some design and put them in a new design and get rid of the things we don't need.” Think of it, then, like intelligent design—that follows the principles of natural selection.

The caveat being, biology is rather more inflexible than what roboticists are doing. After all, you can give your biped robot two extra limbs and turn it into a quadruped fairly quickly, while animals change their features—cave-dwelling species might lose their eyes, for instance—over thousands of years. “Evolution is as much a trap as a means to advance,” says Gerald Loeb, CEO and co-founder of SynTouch, which is giving robots the power to feel. “Because you get locked into a lot of hardware that worked well in previous iterations and now can't be changed because you've built your whole embryology on it.”

Evolution can still be rather explosive, though. Some 550 million years ago, the Cambrian Explosion kicked off, giving birth to an incredible array of complex organisms. Before that, life was relatively squishier, relatively calmer. But then, boom: predators aplenty, scrapping like hell to gain an edge.
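For readers who want a concrete picture of the "keep what works, recombine the best pieces" loop Gravish describes above, here is a minimal sketch in Python. The design encoding, fitness function, and parameter values are illustrative assumptions, not anything from the article; a real robotics pipeline would score candidate designs in a physics simulator or on hardware.

```python
import random

# A "design" is just a list of numeric parameters (think limb lengths or gait timings).
# evaluate() stands in for simulating or field-testing the robot; here it is a toy
# objective that rewards parameters near 0.5.
def evaluate(design):
    return -sum((x - 0.5) ** 2 for x in design)

def evolve(pop_size=20, n_params=6, generations=50, keep=5):
    population = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep what works, throw out what doesn't.
        population.sort(key=evaluate, reverse=True)
        parents = population[:keep]
        children = []
        while len(children) < pop_size - keep:
            a, b = random.sample(parents, 2)
            # Take the best pieces from two designs and combine them into a new one...
            child = [random.choice(pair) for pair in zip(a, b)]
            # ...then nudge it slightly so the search can explore nearby designs.
            child = [min(1.0, max(0.0, x + random.gauss(0, 0.05))) for x in child]
            children.append(child)
        population = parents + children
    return max(population, key=evaluate)

print(evolve())
```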

The article is here.

Tuesday, January 30, 2018

Utilitarianism’s Missing Dimensions

Erik Parens
Quillette
Originally published on January 3, 2018

Here is an excerpt:

Missing the “Impartial Beneficence” Dimension of Utilitarianism

In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”

According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.

The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in.
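As a rough illustration of how a two-dimensional instrument like the Oxford Utilitarianism Scale gets scored, here is a small Python sketch. The ratings, the 1-7 response format, and the number of items per subscale are hypothetical stand-ins; the actual items and scoring rules are in the paper the article discusses.

```python
# Hypothetical 1-7 agreement ratings, keyed to the scale's two subscales.
# Which statements belong to which subscale (and how many there are) is
# assumed here purely for illustration.
responses = {
    "impartial_beneficence": [6, 5, 7, 4, 6],
    "instrumental_harm": [2, 3, 4, 3],
}

def subscale_scores(responses):
    """Average the ratings within each subscale."""
    return {name: sum(items) / len(items) for name, items in responses.items()}

print(subscale_scores(responses))
# e.g. {'impartial_beneficence': 5.6, 'instrumental_harm': 3.0}
```

A high score on the first subscale and a low score on the second would describe someone strong on self-sacrificing impartiality but reluctant to endorse instrumental harm, which is exactly the kind of profile a single "how utilitarian are you?" number would obscure.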

The article is here.

Your Brain Creates Your Emotions

Lisa Feldman Barrett
TED Talk
Published December 2017

Can you look at someone's face and know what they're feeling? Does everyone experience happiness, sadness and anxiety the same way? What are emotions anyway? For the past 25 years, psychology professor Lisa Feldman Barrett has mapped facial expressions, scanned brains and analyzed hundreds of physiology studies to understand what emotions really are. She shares the results of her exhaustive research -- and explains how we may have more control over our emotions than we think.

Monday, January 29, 2018

Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others

Leonardo Christov-Moore, Paul Conway, and Marco Iacoboni
Front. Integr. Neurosci., 12 December 2017

The dual process model of moral decision-making suggests that decisions to reject causing harm on moral dilemmas (where causing harm saves lives) reflect concern for others. Recently, some theorists have suggested such decisions actually reflect self-focused concern about causing harm, rather than witnessing others suffering. We examined brain activity while participants witnessed needles pierce another person’s hand, versus similar non-painful stimuli. More than a month later, participants completed moral dilemmas where causing harm either did or did not maximize outcomes. We employed process dissociation to independently assess harm-rejection (deontological) and outcome-maximization (utilitarian) response tendencies. Activity in the posterior inferior frontal cortex (pIFC) while participants witnessed others in pain predicted deontological, but not utilitarian, response tendencies. Previous brain stimulation studies have shown that the pIFC seems crucial for sensorimotor representations of observed harm. Hence, these findings suggest that deontological response tendencies reflect genuine other-oriented concern grounded in sensorimotor representations of harm.
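The process-dissociation step in this abstract comes down to simple arithmetic. Assuming the standard Conway-and-Gawronski formulation (congruent dilemmas where harm does not maximize outcomes, incongruent dilemmas where it does), the two parameters can be recovered from a participant's rejection rates as in the sketch below; whether this study used exactly these equations is an assumption on my part.

```python
def process_dissociation(p_reject_congruent, p_reject_incongruent):
    """
    Recover utilitarian (U) and deontological (D) parameters from the
    proportion of dilemmas on which a participant rejects causing harm.

    Congruent dilemmas: harm does NOT maximize outcomes (both inclinations reject it).
    Incongruent dilemmas: harm DOES maximize outcomes (only deontology rejects it).
      P(reject | congruent)   = U + (1 - U) * D
      P(reject | incongruent) = (1 - U) * D
    """
    U = p_reject_congruent - p_reject_incongruent
    D = p_reject_incongruent / (1 - U) if U < 1 else float("nan")
    return U, D

# A participant who rejects harm on 90% of congruent dilemmas but only
# 40% of incongruent ones:
U, D = process_dissociation(0.90, 0.40)
print(f"U = {U:.2f}, D = {D:.2f}")  # U = 0.50, D = 0.80
```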

The article is here.

Go Fund Yourself

Stephen Marche
Mother Jones
Originally published January/February 2018

Here is an excerpt:

Health care in America is the wedge of inequality: It’s the luxury everyone has to have and millions can’t afford. Sites like YouCaring have stepped in to fill the gap. The total amount in donations generated by crowdfunding sites has increased elevenfold since the appearance of Obamacare. In 2011, sites like GoFundMe and YouCaring were generating a total of $837 million. Three years later, that number had climbed to $9.5 billion. Under the Trump administration, YouCaring expects donations to jump even higher, and the company has already seen an estimated 25 percent spike since the election, which company representatives believe is partly a response to the administration’s threats to Obamacare.

Crowdfunding companies say they’re using technology to help people helping people, the miracle of interconnectedness leading to globalized compassion. But an emerging consensus is starting to suggest a darker, more fraught reality—sites like YouCaring and GoFundMe may in fact be fueling the inequities of the American health care system, not fighting them. And they are potentially exacerbating racial, economic, and educational divides. “Crowdfunding websites have helped a lot of people,” medical researcher Jeremy Snyder wrote in a 2016 article for the Hastings Center Report, a journal focused on medical ethics. But, echoing other scholars, he warned that they’re “ultimately not a solution to injustices in the health system. Indeed, they may themselves be a cause of injustices.” Crowdfunding is yet another example of tech’s best intentions generating unseen and unfortunate outcomes.

Sunday, January 28, 2018

Republicans redefine morality as whatever Trump does

Dana Milbank
The Washington Post
Posted on January 26, 2018

Someday, likely three years from now, perhaps sooner, perhaps — gulp — later, President Trump will depart the stage.

But what will be left of us?

New evidence suggests that the damage he is doing to the culture is bigger than the man. A Quinnipiac University poll released Thursday found that two-thirds of Americans say Trump is not a good role model for children. Every component of society feels that way — men and women, old and young, black and white, highly educated or not — except for one: Republicans. By 72 to 22 percent, they say Trump is a good role model.

In marked contrast to the rest of the country, Republicans also say that Trump shares their values (82 percent) and that — get this — he “provides the United States with moral leadership” (80 percent).

And what moral leadership this role model has been providing!

The article is here.

Saturday, January 27, 2018

Evolving Morality

Joshua Greene
Aspen Ideas Festival
2017

Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.

The one-hour talk on SoundCloud is here.

Friday, January 26, 2018

Should Potential Risk of Chronic Traumatic Encephalopathy Be Discussed with Young Athletes?

Kimberly Hornbeck, Kevin Walter, and Matthew Myrvik
AMA Journal of Ethics. July 2017, Volume 19, Number 7: 686-692.

Abstract

As participation in youth sports has risen over the past two decades, so has the incidence of youth sports injuries. A common topic of concern is concussion, or mild traumatic brain injury, in young athletes and whether concussions sustained at a young age could lead to lifelong impairment such as chronic traumatic encephalopathy (CTE). While the pathway from a concussed young athlete to an adult with CTE remains unknown, current research is attempting to provide more clarity. This article discusses how health care professionals can help foster an informed, balanced decision-making process regarding participation in contact sports that involves the parents as well as the children.

The information is here.