Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, May 13, 2017

Justices Blast One-Stop-Shop Experts in Alabama

Tim Ryan
Courthouse News
Originally posted April 24, 2017

The Supreme Court’s liberal justices shredded an argument by Alabama’s solicitor general Monday that criminal defendants are not entitled to a mental health expert separate from the ones tapped by prosecutors.

McWilliams v. Dunn, the case the Supreme Court heard this morning, turns on the court’s 1985 decision in Ake v. Oklahoma, which held that poor criminal defendants raising an insanity defense are entitled to an expert to help support their claim.

A split has emerged in the three decades since the decision, with some states deciding one expert helping both the prosecution and defense satisfies the requirement, and others choosing to assign an expert for the defendant to use exclusively.

The article is here.

Friday, May 12, 2017

US Suicide Rates Display Growing Geographic Disparity

JAMA. 2017;317(16):1616. doi:10.1001/jama.2017.4076

As the overall US suicide rate increases, a CDC study showed that the trend toward higher rates in less populated parts of the country and lower rates in large urban areas has become more pronounced.

Using data from the National Vital Statistics System and the US Census Bureau, the researchers reported that from 1999 to 2015, the annual suicide rate increased by 14%, from 12.6 to 14.4 per 100,000 US residents aged 10 years or older.
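As a quick check, the reported 14% is simply the relative change between those two rates:

\[
\frac{14.4 - 12.6}{12.6} \approx 0.143 \approx 14\%
\]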

(cut)

Higher suicide rates in less urban areas could be linked with limited access to mental health care, the opioid overdose epidemic, and social isolation, the investigators suggested. The 2007-2009 economic recession may have caused the sharp upswing, they added, because rural areas and small towns were hardest hit.

The article is here.

Physicians, Not Conscripts — Conscientious Objection in Health Care

Ronit Y. Stahl and Ezekiel J. Emanuel
N Engl J Med 2017; 376:1380-1385

“Conscience clause” legislation has proliferated in recent years, extending the legal rights of health care professionals to cite their personal religious or moral beliefs as a reason to opt out of performing specific procedures or caring for particular patients. Physicians can refuse to perform abortions or in vitro fertilization. Nurses can refuse to aid in end-of-life care. Pharmacists can refuse to fill prescriptions for contraception. More recently, state legislation has enabled counselors and therapists to refuse to treat lesbian, gay, bisexual, and transgender (LGBT) patients, and in December, a federal judge issued a nationwide injunction against Section 1557 of the Affordable Care Act, which forbids discrimination on the basis of gender identity or termination of a pregnancy.

Here is an excerpt:

Objection to providing patients interventions that are at the core of medical practice – interventions that the profession deems to be effective, ethical, and standard treatments – is unjustifiable (AMA Code of Medical Ethics [Opinion 11.2.2]).

Making the patient paramount means offering and providing accepted medical interventions in accordance with patients’ reasoned decisions. Thus, a health care professional cannot deny patients access to medications for mental health conditions, sexual dysfunction, or contraception on the basis of their conscience, since these drugs are professionally accepted as appropriate medical interventions.

The article is here, and you need a subscription.

Thursday, May 11, 2017

The Implications of Libertarianism for Compulsory Vaccination

Justin Bernstein
BMJ Blogs
Originally posted April 24, 2017

Here is an excerpt:

Some libertarians, however, attempt to avoid the controversial conclusion that libertarianism is incompatible with compulsory vaccination. In my recent paper, “The Case Against Libertarian Arguments for Compulsory Vaccination,” I argue that such attempts are unsuccessful, and so libertarians must either develop new arguments or join Senator Paul in opposing compulsory vaccination.

How might a libertarian try to defend compulsory vaccination? One argument is that going unvaccinated exposes others to risk, and this violates their rights. Since the state is permitted to use coercive measures to protect rights, the state may require parents to vaccinate their children. But for libertarians, this argument has two shortcomings. First, there are other, far riskier activities that the libertarian prohibits the government from regulating. For instance, owning and using automobiles or firearms imposes far more significant risk than going unvaccinated, but libertarians defend our rights to own and use automobiles and firearms. Second, one individual going unvaccinated poses very little risk; the risk eventuates only if many collectively go unvaccinated, thereby endangering herd immunity. Imposing such an independently small risk hardly seems to be a rights violation.

The entire blog post is here.

Is There a Duty to Use Moral Neurointerventions?

Michelle Ciurria
Topoi (2017).
doi:10.1007/s11245-017-9486-4

Abstract

Do we have a duty to use moral neurointerventions to correct deficits in our moral psychology? On their surface, these technologies appear to pose worrisome risks to valuable dimensions of the self, and these risks could conceivably weigh against any prima facie moral duty we have to use these technologies. Focquaert and Schermer (Neuroethics 8(2):139–151, 2015) argue that neurointerventions pose special risks to the self because they operate passively on the subject’s brain, without her active participation, unlike ‘active’ interventions. Some neurointerventions, however, appear to be relatively unproblematic, and some appear to preserve the agent’s sense of self precisely because they operate passively. In this paper, I propose three conditions that need to be met for a medical intervention to be considered low-risk, and I say that these conditions cut across the active/passive divide. A low-risk intervention must: (i) pass pre-clinical and clinical trials, (ii) fare well in post-clinical studies, and (iii) be subject to regulations protecting informed consent. If an intervention passes these tests, its risks do not provide strong countervailing reasons against our prima facie duty to undergo the intervention.

The article is here.

Wednesday, May 10, 2017

How do you punish a criminal robot?

Christopher Markou
The Independent
Originally posted on April 20, 2017

Here is an excerpt:

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around, whether it’s a military drone with a full payload, a law enforcement robot exploding to kill a dangerous suspect, or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs; and Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was in autopilot mode.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright brothers took their flyer for a joyride at Kitty Hawk. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: law evolves.

The article is here.

Who Decides When a Patient Can’t? Statutes on Alternate Decision Makers

Erin S. DeMartino and others
The New England Journal of Medicine
DOI: 10.1056/NEJMms1611497

Many patients cannot make their own medical decisions, having lost what is called decisional capacity. The estimated prevalence of decisional incapacity approaches 40% among adult medical inpatients and residential hospice patients and exceeds 90% among adults in some intensive care units. Patients who lack capacity may guide decisions regarding their own care through an advance directive, a legal document that records treatment preferences or designates a durable power of attorney for health care, or both. Unfortunately, the rate of completion of advance directives in the general U.S. population hovers around 20 to 29%, creating uncertainty about who will fill the alternate decision-maker role for many patients.

There is broad ethical consensus that other persons may make life-and-death decisions on behalf of patients who lack decisional capacity. Over the past few decades, many states have enacted legislation designed to delineate decision-making authority for patients who lack advance directives. Yet the 50 U.S. states and the District of Columbia vary in their procedures for appointing and challenging default surrogates, the attributes they require of such persons, their priority ranking of possible decision makers, and dispute resolution. These differences have important implications for clinicians, patients, and public health.

The article is here.

Tuesday, May 9, 2017

Ethics experts question Kushner relatives pushing White House connections in China

Allan Smith
Business Insider
Originally published May 8, 2017

Ethics experts criticized White House senior adviser Jared Kushner's relatives for using White House connections to enhance a presentation to Chinese investors last weekend.

Members of Kushner's family gave multiple presentations in China detailing an opportunity to "invest $500,000 and immigrate to the United States" through a controversial visa program and promoting ties to Kushner and President Donald Trump, according to media reports.

Richard Painter, who was President George W. Bush's top ethics lawyer from 2005 to 2007 and is now a professor at the University of Minnesota, told Business Insider the presentation was "obviously completely inappropriate."

He added that the Kushner family "ought to be disqualified" from the EB-5 visa program they were promoting. The visa is awarded to foreign investors who invest at least $500,000 in US projects that create at least 10 full-time jobs.

The article is here.

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Cade Metz
Wired Magazine
Originally published February 1, 2017

Here is an excerpt:

Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many. Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world’s biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google’s DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there’s a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.

Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn’t—playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. “We give the AI a description of the game. We don’t tell it how to play,” says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. “It develops a strategy completely independently from human play, and it can be very different from the way humans play the game.”
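The Wired piece stays high-level, so here is a minimal, hypothetical sketch of regret matching, the update rule at the heart of counterfactual regret minimization, applied to rock-paper-scissors rather than poker. This is not Libratus’s actual code, and the names and structure are illustrative assumptions: two self-play agents begin by playing at random and converge toward the game’s mixed equilibrium of one third per action.

```python
# Toy regret matching, the core update inside counterfactual regret
# minimization (CFR). An illustrative sketch, not Libratus's code: two
# self-play agents on rock-paper-scissors converge toward the
# equal-thirds Nash equilibrium.

import random

ACTIONS = ["rock", "paper", "scissors"]

# PAYOFF[a][b]: utility of playing a against an opponent playing b.
PAYOFF = {
    "rock":     {"rock": 0, "paper": -1, "scissors": 1},
    "paper":    {"rock": 1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": 1, "scissors": 0},
}

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    positive = {a: max(r, 0.0) for a, r in regrets.items()}
    total = sum(positive.values())
    if total == 0:  # no positive regret yet: play uniformly at random
        return {a: 1.0 / len(ACTIONS) for a in ACTIONS}
    return {a: p / total for a, p in positive.items()}

def train(iterations=100_000):
    my_regrets = {a: 0.0 for a in ACTIONS}
    opp_regrets = {a: 0.0 for a in ACTIONS}
    strategy_sum = {a: 0.0 for a in ACTIONS}  # for the average strategy
    for _ in range(iterations):
        my_strat = strategy_from_regrets(my_regrets)
        opp_strat = strategy_from_regrets(opp_regrets)
        my_act = random.choices(ACTIONS, [my_strat[a] for a in ACTIONS])[0]
        opp_act = random.choices(ACTIONS, [opp_strat[a] for a in ACTIONS])[0]
        # Regret of action a: what a would have earned minus what we earned.
        for a in ACTIONS:
            my_regrets[a] += PAYOFF[a][opp_act] - PAYOFF[my_act][opp_act]
            opp_regrets[a] += PAYOFF[a][my_act] - PAYOFF[opp_act][my_act]
            strategy_sum[a] += my_strat[a]
    total = sum(strategy_sum.values())
    return {a: round(s / total, 3) for a, s in strategy_sum.items()}

if __name__ == "__main__":
    print(train())  # each probability should approach 1/3
```

Poker is vastly harder, with hidden cards and sequential betting, which is where the “counterfactual” weighting over game states comes in; the sketch above shows only the regret bookkeeping that drives the learning.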

The article is here.