Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Tuesday, March 21, 2017

Why can 12-year-olds still get married in the United States?

Fraidy Reiss
The Washington Post
Originally published February 10, 2017

Here is an excerpt:

Unchained At Last, a nonprofit I founded to help women resist or escape forced marriage in the United States, spent the past year collecting marriage license data from 2000 to 2010, the most recent year for which most states were able to provide information. We learned that in 38 states, more than 167,000 children — almost all of them girls, some as young as 12 — were married during that period, mostly to men 18 or older. Twelve states and the District of Columbia were unable to provide information on how many children had married there in that decade. Based on the correlation we identified between state population and child marriage, we estimated that the total number of children wed in America between 2000 and 2010 was nearly 248,000.

Despite these alarming numbers, and despite the documented consequences of early marriages, including negative effects on health and education and an increased likelihood of domestic violence, some state lawmakers have resisted passing legislation to end child marriage — because they wrongly fear that such measures might unlawfully stifle religious freedom or because they cling to the notion that marriage is the best solution for a teen pregnancy.

The article is here.

Ethical concerns for telemental health therapy amidst governmental surveillance.

Samuel D. Lustgarten and Alexander J. Colbow
American Psychologist, Vol 72(2), Feb-Mar 2017, 159-170.

Abstract

Technology, infrastructure, governmental support, and interest in mental health accessibility have led to a burgeoning field of telemental health therapy (TMHT). Psychologists can now provide therapy via computers at great distances and little cost for parties involved. Growth of TMHT within the U.S. Department of Veterans Affairs and among psychologists surveyed by the American Psychological Association (APA) suggests optimism in this provision of services (Godleski, Darkins, & Peters, 2012; Jacobsen & Kohout, 2010). Despite these advances, psychologists using technology must keep abreast of potential limitations to privacy and confidentiality. However, no scholarly articles have appraised the ramifications of recent government surveillance disclosures (e.g., “The NSA Files”; Greenwald, 2013) and how they might affect TMHT usage within the field of psychology. This article reviews the current state of TMHT in psychology, APA’s guidelines, current governmental threats to client privacy, and other ethical ramifications that might result. Best practices for the field of psychology are proposed.

The article is here.

Monday, March 20, 2017

When Evidence Says No, But Doctors Say Yes

David Epstein
ProPublica
Originally published February 22, 2017

Here is an excerpt:

When you visit a doctor, you probably assume the treatment you receive is backed by evidence from medical research. Surely, the drug you’re prescribed or the surgery you’ll undergo wouldn’t be so common if it didn’t work, right?

For all the truly wondrous developments of modern medicine — imaging technologies that enable precision surgery, routine organ transplants, care that transforms premature infants into perfectly healthy kids, and remarkable chemotherapy treatments, to name a few — it is distressingly ordinary for patients to get treatments that research has shown are ineffective or even dangerous. Sometimes doctors simply haven’t kept up with the science. Other times doctors know the state of play perfectly well but continue to deliver these treatments because it’s profitable — or even because they’re popular and patients demand them. Some procedures are implemented based on studies that did not prove whether they really worked in the first place. Others were initially supported by evidence but then were contradicted by better evidence, and yet these procedures have remained the standards of care for years, or decades.

The article is here.

The Enforcement of Moral Boundaries Promotes Cooperation and Prosocial Behavior in Groups

Brent Simpson, Robb Willer & Ashley Harrell
Scientific Reports 7, Article number: 42844 (2017)

Abstract

The threat of free-riding makes the marshalling of cooperation from group members a fundamental challenge of social life. Where classical social science theory saw the enforcement of moral boundaries as a critical way by which group members regulate one another’s self-interest and build cooperation, moral judgments have most often been studied as processes internal to individuals. Here we investigate how the interpersonal expression of positive and negative moral judgments encourages cooperation in groups and prosocial behavior between group members. In a laboratory experiment, groups whose members could make moral judgments achieved greater cooperation than groups with no capacity to sanction, levels comparable to those of groups featuring costly material sanctions. In addition, members of moral judgment groups subsequently showed more interpersonal trust, trustworthiness, and generosity than all other groups. These findings extend prior work on peer enforcement, highlighting how the enforcement of moral boundaries offers an efficient solution to cooperation problems and promotes prosocial behavior between group members.

The article is here.

Sunday, March 19, 2017

Revamping the US Federal Common Rule: Modernizing Human Participant Research Regulations

James G. Hodge Jr. and Lawrence O. Gostin
JAMA. Published online February 22, 2017

On January 19, 2017, the Office for Human Research Protections (OHRP), Department of Health and Human Services, and 15 federal agencies published a final rule to modernize the Federal Policy for the Protection of Human Subjects (known as the “Common Rule”).1 Initially introduced more than a quarter century ago, the Common Rule predated modern scientific methods and findings, notably human genome research.

Research enterprises now encompass vast multicenter trials in both academia and the private sector. The volume, types, and availability of public/private data and biospecimens have increased exponentially. Federal agencies demanded more accountability, research investigators sought more flexibility, and human participants desired more control over research. Most rule changes become effective in 2018, giving institutions time for implementation.

The article is here.

Saturday, March 18, 2017

Budgets are moral documents, and Trump’s is a moral failure

Dylan Matthews
vox.com
Originally published March 16, 2017

The budget is a moral document.

It’s not clear where that phrase originates, but it’s become a staple of fiscal policy debates in DC, and for very good reason. Budgets lay out how a fifth of the national economy is going to be allocated. They make trade-offs between cancer treatment and jet fighters, scientific research and tax cuts, national parks and border fences. These are all decisions with profound moral implications. Budgets, when implemented, can lift millions out of poverty, or consign millions more to it. They can provide universal health insurance or take coverage away from those who have it. They can fuel wars or support peacekeeping.

What President Donald Trump released on Thursday is not a full budget. It doesn’t touch on taxes, or on entitlement programs like Social Security, Medicare, Medicaid, or food stamps. It concerns itself exclusively with the third of the budget that’s allocated through the annual appropriations process.

But it’s a moral document nonetheless. And the moral consequences of its implementation would be profound, and negative. The fact that it will not be implemented in full — that Congress is almost certain not to go along with many of its recommendations — in no way detracts from what it tells us about the administration’s priorities, and its ethics.

Let’s start with poverty.

The article is here.

Friday, March 17, 2017

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

Bec Crew
Science Alert
Originally published February 13, 2017

Here is an excerpt:

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," one of the team, Joel Z Leibo, told Matt Burgess at Wired.

"Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents: two of them played as wolves, and one as the prey.

The article is here.

Professional Liability for Forensic Activities: Liability Without a Treatment Relationship

Donna Vanderpool
Innov Clin Neurosci. 2016 Jul-Aug; 13(7-8): 41–44.

This ongoing column is dedicated to providing information to our readers on managing legal risks associated with medical practice. We invite questions from our readers. The answers are provided by PRMS, Inc. (www.prms.com), a manager of medical professional liability insurance programs with services that include risk management consultation, education and onsite risk management audits, and other resources to healthcare providers to help improve patient outcomes and reduce professional liability risk. The answers published in this column represent those of only one risk management consulting company. Other risk management consulting companies or insurance carriers may provide different advice, and readers should take this into consideration. The information in this column does not constitute legal advice. For legal advice, contact your personal attorney. Note: The information and recommendations in this article are applicable to physicians and other healthcare professionals so “clinician” is used to indicate all treatment team members.

Question:

In my mental health practice, I am doing more and more forensic activities, such as IMEs and expert testimony. Since I am not treating the evaluees, there should be no professional liability risk, right?

The answer and column are here.

Thursday, March 16, 2017

Mercedes-Benz’s Self-Driving Cars Would Choose Passenger Lives Over Bystanders

David Z. Morris
Fortune
Originally published Oct 15, 2016

In comments published last week by Car and Driver, Mercedes-Benz executive Christoph von Hugo said that the carmaker’s future autonomous cars will save the car’s driver and passengers, even if that means sacrificing the lives of pedestrians, in a situation where those are the only two options.

“If you know you can save at least one person, at least save that one,” von Hugo said at the Paris Motor Show. “Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.”

This doesn't mean Mercedes' robotic cars will neglect the safety of bystanders. Von Hugo, who is the carmaker’s manager of driver assistance and safety systems, is addressing the so-called “Trolley Problem”—an ethical thought experiment that applies to human drivers just as much as artificial intelligences.

The article is here.