Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, March 18, 2017

Budgets are moral documents, and Trump’s is a moral failure

Dylan Matthews
vox.com
Originally published March 16, 2017

The budget is a moral document.

It’s not clear where that phrase originates, but it’s become a staple of fiscal policy debates in DC, and for very good reason. Budgets lay out how a fifth of the national economy is going to be allocated. They make trade-offs between cancer treatment and jet fighters, scientific research and tax cuts, national parks and border fences. These are all decisions with profound moral implications. Budgets, when implemented, can lift millions out of poverty, or consign millions more to it. They can provide universal health insurance or take coverage away from those who have it. They can fuel wars or support peacekeeping.

What President Donald Trump released on Thursday is not a full budget. It doesn’t touch on taxes, or on entitlement programs like Social Security, Medicare, Medicaid, or food stamps. It concerns itself exclusively with the third of the budget that’s allocated through the annual appropriations process.

But it’s a moral document nonetheless. And the moral consequences of its implementation would be profound, and negative. The fact that it will not be implemented in full — that Congress is almost certain not to go along with many of its recommendations — in no way detracts from what it tells us about the administration’s priorities, and its ethics.

Let’s start with poverty.

The article is here.

Friday, March 17, 2017

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

Bec Crew
Science Alert
Originally published February 13, 2017

Here is an excerpt:

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," one of the team, Joel Z Leibo, told Matt Burgess at Wired.

"Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

The DeepMind agents were then tasked with playing a second game, called Wolfpack. This time, there were three AI agents: two played as wolves, and one as the prey.

The article is here.

Professional Liability for Forensic Activities: Liability Without a Treatment Relationship

Donna Vanderpool
Innov Clin Neurosci. 2016 Jul-Aug; 13(7-8): 41–44.

This ongoing column is dedicated to providing information to our readers on managing legal risks associated with medical practice. We invite questions from our readers. The answers are provided by PRMS, Inc. (www.prms.com), a manager of medical professional liability insurance programs with services that include risk management consultation, education and onsite risk management audits, and other resources to healthcare providers to help improve patient outcomes and reduce professional liability risk. The answers published in this column represent those of only one risk management consulting company. Other risk management consulting companies or insurance carriers may provide different advice, and readers should take this into consideration. The information in this column does not constitute legal advice. For legal advice, contact your personal attorney. Note: The information and recommendations in this article are applicable to physicians and other healthcare professionals so “clinician” is used to indicate all treatment team members.

Question:

In my mental health practice, I am doing more and more forensic activities, such as independent medical examinations (IMEs) and expert testimony. Since I am not treating the evaluees, there should be no professional liability risk, right?

The answer and column are here.

Thursday, March 16, 2017

Mercedes-Benz’s Self-Driving Cars Would Choose Passenger Lives Over Bystanders

David Z. Morris
Fortune
Originally published Oct 15, 2016

In comments published last week by Car and Driver, Mercedes-Benz executive Christoph von Hugo said that the carmaker’s future autonomous cars will save the car’s driver and passengers, even if that means sacrificing the lives of pedestrians, in a situation where those are the only two options.

“If you know you can save at least one person, at least save that one,” von Hugo said at the Paris Motor Show. “Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.”

This doesn't mean Mercedes' robotic cars will neglect the safety of bystanders. Von Hugo, who is the carmaker’s manager of driver assistance and safety systems, is addressing the so-called “Trolley Problem”—an ethical thought experiment that applies to human drivers just as much as artificial intelligences.
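
Von Hugo's reasoning amounts to a priority rule: act on the life you can save with certainty. Here is a minimal sketch of that ordering; the option names, probabilities, and interface are entirely hypothetical and do not come from Mercedes' actual software.

```python
def choose_maneuver(options):
    """Toy encoding of the stated priority rule: pick the option with
    the best chance of saving at least one life for sure, breaking
    ties in favor of the car's occupants ('save the one in the car')."""
    return max(options,
               key=lambda o: (max(o["p_occupants"], o["p_bystanders"]),
                              o["p_occupants"]))["name"]

# The article's scenario: protecting the occupants is a near-certain
# save, while outcomes for bystanders are uncertain either way.
print(choose_maneuver([
    {"name": "protect_occupants",     "p_occupants": 0.99, "p_bystanders": 0.1},
    {"name": "swerve_for_bystanders", "p_occupants": 0.40, "p_bystanders": 0.5},
]))  # -> protect_occupants
```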

The article is here.

The big moral dilemma facing self-driving cars

Steven Overly
The Washington Post
Originally published February 27, 2017

How many people could self-driving cars kill before we would no longer tolerate them?

This once-hypothetical question is now taking on greater urgency, particularly among policymakers in Washington. The promise of autonomous vehicles is that they will make our roads safer and more efficient, but no technology is without its shortcomings and unintended consequences — in this instance, potentially fatal consequences.

“What if we can build a car that’s 10 times as safe, which means 3,500 people die on the roads each year. Would we accept that?” asks John Hanson, a spokesman for the Toyota Research Institute, which is developing the automaker’s self-driving technology.

“A lot of people say, ‘If I could save one life, it would be worth it.’ But in a practical manner, we don’t think that would be acceptable,” Hanson added.
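
For context on the arithmetic: roughly 35,000 people die on U.S. roads each year, so a car ten times as safe would still leave about 35,000 / 10 = 3,500 annual deaths, the figure Hanson cites.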

The article is here.

Wednesday, March 15, 2017

Researchers Are Divided as FDA Moves to Regulate Gene Editing

Paul Basken
The Chronicle of Higher Education
Originally published February 22, 2017

As U.S. regulators threaten broad new limits on the use of gene-editing technology, a Utah State University researcher now engineering goats to produce spider silk in their milk isn’t particularly worried.

"They’re just trying to modernize" rules to keep up with technology, the Utah professor, Randolph V. Lewis, said of the changes proposed by the U.S. Food and Drug Administration.

But over in Minnesota, a researcher working to create cows without horns — as a way of keeping the animals safe from one another — has a far different take.

"It’s a huge overreach" by the FDA that could stifle innovation, said Scott C. Fahrenkrug, an adjunct professor of functional genomics at the University of Minnesota at Twin Cities.

The FDA is responsible for ensuring the safety of food and drugs sold to Americans, and for years it has defined that oversight to require its approval when genes are added to animals whose products might be consumed. The change it proposed last month would expand that authority to cover new technologies, such as CRISPR, that allow gene-specific editing and could produce changes not found in any known species.

To supporters, the FDA is simply trying to keep up with the science. To detractors, it’s a reach for authority so broad as to go beyond any reasonable definition of the FDA’s mandate.

The article is here.

Will the 'hard problem' of consciousness ever be solved?

David Papineau
The Question
Originally published February 21, 2017

Here is an excerpt:

The problem, if there is one, is that we find the reduction of consciousness to brain processes very hard to believe. The flaw lies in us, not in the neuroscientific account of consciousness. Despite all the scientific evidence, we can’t free ourselves of the old-fashioned dualist idea that conscious states inhabit some extra realm outside the physical brain.

Just consider how the hard problem is normally posed. Why do brain states give rise to conscious feelings? That is already dualist talk. If one thing gives rise to another, they must be separate. Fire gives rise to smoke, but H2O doesn’t give rise to water. So the very terminology presupposes that the conscious mind is different from the physical brain—which of course then makes us wonder why the brain generates this mysterious extra thing. On the other hand, if only we could properly accept that the mind just is the brain, then we would be no more inclined to ask why ‘they’ go together than we ask why H2O is water.

The article is here.

There is also a five-minute video on this page by Massimo Pigliucci on how the hard problem is a category mistake.

Tuesday, March 14, 2017

AI will make life meaningless, Elon Musk warns

Zoe Nauman
The Sun
Originally published February 17, 2017

Here is an excerpt:

“I think some kind of universal income will be necessary.”

“The harder challenge is how do people then have meaning – because a lot of people derive their meaning from their employment.”

“If you are not needed, if there is not a need for your labor, what’s the meaning?”

“Do you have meaning? Are you useless? That is a much harder problem to deal with.”

The article is here.

“I placed too much faith in underpowered studies:” Nobel Prize winner admits mistakes

Retraction Watch
Originally posted February 21, 2017

Although it’s the right thing to do, it’s never easy to admit error — particularly when you’re an extremely high-profile scientist whose work is being dissected publicly. So while it’s not a retraction, we thought this was worth noting: A Nobel Prize-winning researcher has admitted on a blog that he relied on weak studies in a chapter of his bestselling book.

The blog — by Ulrich Schimmack, Moritz Heene, and Kamini Kesavan — critiqued the citations included in a book by Daniel Kahneman, a psychologist whose research has illuminated our understanding of how humans form judgments and make decisions and earned him half of the 2002 Nobel Prize in Economics.
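
To see what “underpowered” means in practice, here is an illustrative power calculation. The effect size and sample size below are hypothetical, not taken from Kahneman’s book or the blog post, but they are typical of the small studies at issue: such a study detects a real effect less than a quarter of the time.

```python
import math
from scipy import stats

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test for a
    standardized effect size d with n observations per group."""
    df = 2 * n_per_group - 2
    nc = d * math.sqrt(n_per_group / 2)       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    # Power = P(|T| > t_crit) when T follows the noncentral t.
    return (1 - stats.nct.cdf(t_crit, df, nc)
            + stats.nct.cdf(-t_crit, df, nc))

# A modest true effect (d = 0.4) studied with 20 subjects per group is
# detected only about 23% of the time, far below the usual 0.8 target.
print(round(two_sample_power(0.4, 20), 2))
```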

The article is here.