Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, August 19, 2019

The Case Against A.I. Controlling Our Moral Compass

Brian Gallagher
ethicalsystems.org
Originally published June 25, 2019


Here is an excerpt:

Morality, the researchers found, isn’t like any other decision space. People were averse to machines having the power to choose what to do in life-and-death situations—specifically in driving, legal, medical, and military contexts. This hinged on their perception of machine minds as incomplete, or lacking in agency (the capacity to reason, plan, and communicate effectively) and subjective experience (the possession of a human-like consciousness, with the ability to empathize and to feel pain and other emotions).

For example, when the researchers presented subjects with hypothetical medical and military situations—where a human or machine would decide on a surgery as well as a missile strike, and the surgery and strike succeeded—subjects still found the machine’s decision less permissible, due to its lack of agency and subjective experience relative to the human. Not having the appropriate sort of mind, it seems, disqualifies machines, in the judgment of these subjects, from making moral decisions even if they are the same decisions that a human made. Having a machine sound human, with an emotional and expressive voice, and claim to experience emotion, doesn’t help—people found a compassionate-sounding machine just as unqualified for moral choice as one that spoke robotically.

Only in certain circumstances would a machine’s moral choice trump a human’s. People preferred an expert machine’s decision over an average doctor’s, for instance, but just barely. Bigman and Gray also found that some people are willing to have machines support human moral decision-making as advisors. A substantial portion of subjects, 32 percent, were even against that, though, “demonstrating the tenacious aversion to machine moral decision-making,” the researchers wrote. The results “suggest that reducing the aversion to machine moral decision-making is not easy, and depends upon making very salient the expertise of machines and the overriding authority of humans—and even then, it still lingers.”

The info is here.

The evolution of moral cognition

Leda Cosmides, Ricardo Guzmán, and John Tooby
The Routledge Handbook of Moral Epistemology - Chapter 9

1. Introduction

Moral concepts, judgments, sentiments, and emotions pervade human social life. We consider certain actions obligatory, permitted, or forbidden, recognize when someone is entitled to a resource, and evaluate character using morally tinged concepts such as cheater, free rider, cooperative, and trustworthy. Attitudes, actions, laws, and institutions can strike us as fair, unjust, praiseworthy, or punishable: moral judgments. Morally relevant sentiments color our experiences—empathy for another’s pain, sympathy for their loss, disgust at their transgressions—and our decisions are influenced by feelings of loyalty, altruism, warmth, and compassion. Full-blown moral emotions organize our reactions—anger toward displays of disrespect, guilt over harming those we care about, gratitude for those who sacrifice on our behalf, outrage at those who harm others with impunity. A newly reinvigorated field, moral psychology, is investigating the genesis and content of these concepts, judgments, sentiments, and emotions.

This handbook reflects the field’s intellectual diversity: Moral psychology has attracted psychologists (cognitive, social, developmental), philosophers, neuroscientists, evolutionary biologists, primatologists, economists, sociologists, anthropologists, and political scientists.

The chapter can be found here.

Sunday, August 18, 2019

Social physics

Despite the vagaries of free will and circumstance, human behaviour in bulk is far more predictable than we like to imagine

Ian Stewart
www.aeon.co
Originally posted July 9, 2019

Here is an excerpt:

Polling organisations use a variety of methods to try to minimise these sources of error. Many of these methods are mathematical, but psychological and other factors also come into consideration. Most of us know of stories where polls have confidently indicated the wrong result, and it seems to be happening more often. Special factors are sometimes invoked to ‘explain’ why, such as a sudden late swing in opinion, or people deliberately lying to make the opposition think it’s going to win and become complacent. Nevertheless, when performed competently, polling has a fairly good track record overall. It provides a useful tool for reducing uncertainty. Exit polls, where people are asked whom they voted for soon after they cast their vote, are often very accurate, giving the correct result long before the official vote count reveals it, and can’t influence the result.

Today, the term ‘social physics’ has acquired a less metaphorical meaning. Rapid progress in information technology has led to the ‘big data’ revolution, in which gigantic quantities of information can be obtained and processed. Patterns of human behaviour can be extracted from records of credit-card purchases, telephone calls and emails. Words suddenly becoming more common on social media, such as ‘demagogue’ during the 2016 US presidential election, can be clues to hot political issues.

The mathematical challenge is to find effective ways to extract meaningful patterns from masses of unstructured information, and many new methods are being developed to do so.

The info is here.

Saturday, August 17, 2019

DC Types Have Been Flocking to Shrinks Ever Since Trump Won.

And a Lot of the Therapists Are Miserable.

Britt Peterson
www.washingtonian.com
Originally published July 14, 2019

Here are two excerpts:

In Washington, the malaise appears especially pronounced. I spent the last several months talking to nearly two dozen local therapists who described skyrocketing levels of interest in their services. They told me about cases of ordinary stress blossoming into clinical conditions, patients who can’t get through a session without invoking the President’s name, couples and families falling apart over politics—a broad category of concerns that one practitioner, Beth Sperber Richie, says she and her colleagues have come to categorize as “Trump trauma.”

In one sense, that’s been good news for the people who help keep us sane: Their calendars are full. But Trump trauma has also created particular clinical challenges for therapists like Guttman and her students. It’s one thing to listen to a client discuss a horrible personal incident. It’s another when you’re experiencing the same collective trauma.

“I’ve been a therapist for a long time,” says Delishia Pittman, an assistant professor at George Washington University who has been in private practice for 14 years. “And this has been the most taxing two years of my entire career.”

(cut)

For many, in other words, Trump-related anxieties originate from something more serious than mere differences about policy. The therapists I spoke to are equally upset—living through one unnerving news cycle after another, personally experiencing the same issues as their patients in real time while being expected to offer solace and guidance. As Bindeman told her clients the day after Trump’s election, “I’m processing it just as you are, so I’m not sure I can give you the distance that might be useful.”

This is a unique situation in therapy, where you’re normally discussing events in the client’s private life. How do you counsel a sexual-assault victim agitated by the Access Hollywood tape, for example, when the tape has also disturbed you—and when talking about it all day only upsets you further? How about a client who echoes your own fears about climate change or the treatment of minorities or the government shutdown, which had a financial impact on therapists just as it did everyone else?

Again and again, practitioners described different versions of this problem.

The info is here.

Friday, August 16, 2019

Physicians struggle with their own self-care, survey finds

Jeff Lagasse
Healthcare Finance
Originally published July 26, 2019

Despite believing that self-care is a vitally important part of health and overall well-being, many physicians overlook their own self-care, according to a new survey conducted by The Harris Poll on behalf of Samueli Integrative Health Programs. Lack of time, job demands, family demands, being too tired and burnout are the most common reasons for not practicing their desired amount of self-care.

The authors said that while most doctors acknowledge the physical, mental and social importance of self-care, many are falling short, perhaps contributing to the epidemic of physician burnout currently permeating the nation's healthcare system.

What's The Impact

The survey -- involving more than 300 family medicine and internal medicine physicians as well as more than 1,000 U.S. adults ages 18 and older -- found that although 80 percent of physicians say practicing self-care is "very important" to them personally, only 57 percent practice it "often" and about one-third (36%) do so only "sometimes."

Lack of time is the primary reason physicians say they aren't able to practice their desired amount of self-care (72%). Other barriers include mounting job demands (59%) and burnout (25%). Additionally, almost half of physicians (45%) say family demands interfere with their ability to practice self-care, and 20 percent say they feel guilty taking time for themselves.

The info is here.

Federal Watchdog Reports EPA Ignored Ethics Rules

Alyssa Danigelis
www.environmentalleader.com
Originally published July 17, 2019

The Environmental Protection Agency failed to comply with federal ethics rules for appointing advisory committee members, the Government Accountability Office concluded this week. President Trump’s EPA skipped disclosure requirements for new committee members last year, according to the federal watchdog.

Led by Andrew Wheeler, the EPA currently manages 22 committees that advise the agency on a wide range of issues, including developing regulations and managing research programs.

However, in fiscal year 2018, the agency didn’t follow a key step in its process for appointing 20 committee members to the Science Advisory Board (SAB) and Clean Air Scientific Advisory Committee (CASAC), the report says.

“SAB is the agency’s largest committee and CASAC is responsible for, among other things, reviewing national ambient air-quality standards,” the report noted. “In addition, when reviewing the step in EPA’s appointment process related specifically to financial disclosure reporting, we found that EPA did not consistently ensure that [special government employees] appointed to advisory committees met federal financial disclosure requirements.”

The GAO also pointed out that the number of committee members affiliated with academic institutions shrank.

The info is here.

Thursday, August 15, 2019

World’s first ever human-monkey hybrid grown in lab in China

Henry Holloway
www.dailystar.co.uk
Originally posted August 1, 2019

Here is an excerpt:

Scientists have successfully formed a hybrid human-monkey embryo – with the experiment taking place in China to avoid “legal issues”.

Researchers led by scientist Juan Carlos Izpisúa spliced together the genes to grow a monkey with human cells.

It is said the creature could have grown and been born, but scientists aborted the process.

The team, made up of members of the Salk Institute in the United States and the Murcia Catholic University, genetically modified the monkey embryos.

Researchers deactivated the genes which form organs, and replaced them with human stem cells.

And it is hoped that one day these hybrid-grown organs will be able to be transplanted into humans.

The info is here.

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them

Elizabeth Lopatto
www.theverge.com
Originally published July 16, 2019

Here is an excerpt:

“It’s not going to be suddenly Neuralink will have this neural lace and start taking over people’s brains,” Musk said. “Ultimately,” he wants “to achieve a symbiosis with artificial intelligence.” He added that even in a “benign scenario,” humans would be “left behind”—hence his wish to create technology that allows a “merging with AI.” He later added, “we are a brain in a vat, and that vat is our skull,” and so the goal is to read neural spikes from that brain.

The first paralyzed person to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle, who had a spinal cord injury, played Pong using only his mind; the basic movement required took him only four days to master, he told The New York Times. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

“Neuralink didn’t come out of nowhere, there’s a long history of academic research here,” Hodak said at the presentation on Tuesday. “We’re, in the greatest sense, building on the shoulders of giants.” However, none of the existing technologies fit Neuralink’s goal of directly reading neural spikes in a minimally invasive way.

The system presented today, if it’s functional, may be a substantial advance over older technology. BrainGate relied on the Utah Array, a series of stiff needles that allows for up to 128 electrode channels. Not only is that fewer channels than Neuralink is promising — meaning less data from the brain is being picked up — it’s also stiffer than Neuralink’s threads. That’s a problem for long-term functionality: the brain shifts in the skull but the needles of the array don’t, leading to damage. The thin polymers Neuralink is using may solve that problem.

The info is here.

Wednesday, August 14, 2019

Getting AI ethics wrong could 'annihilate technical progress'

Richard Gray
TechXplore
Originally published July 30, 2019

Here is an excerpt:

Biases

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.

"Transparency of these algorithms is also a problem," said Prof. Stahl. "These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened." This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.

The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and if a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente, in the Netherlands.

"Most people today don't understand the technology because it is very complex, opaque and fast moving," he said. "For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind."

The info is here.