Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, November 16, 2016

The Interrogation Decision-Making Model: A General Theoretical Framework for Confessions.

Yang, Yueran; Guyll, Max; Madon, Stephanie
Law and Human Behavior, Oct 20, 2016.

This article presents a new model of confessions referred to as the interrogation decision-making model. This model provides a theoretical umbrella with which to understand and analyze suspects’ decisions to deny or confess guilt in the context of a custodial interrogation. The model draws upon expected utility theory to propose a mathematical account of the psychological mechanisms that underlie not only suspects’ decisions to deny or confess guilt at any specific point during an interrogation, but also how those confession decisions can change over time. Findings from the extant literature pertaining to confessions are considered to demonstrate how the model offers a comprehensive and integrative framework for organizing a range of effects within a limited set of model parameters.
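
The abstract does not reproduce the model's equations, so as a rough illustration only of the expected-utility framing it invokes (the probabilities, outcomes, and utility function below are placeholders, not the paper's parameters), a suspect's choice at a given point in an interrogation can be sketched as a comparison of two expected utilities:

\[
EU_t(\text{confess}) = \sum_i p_{t,i}\, u(o_i), \qquad
EU_t(\text{deny}) = \sum_j q_{t,j}\, u(o'_j),
\]
\[
\text{confess at time } t \iff EU_t(\text{confess}) > EU_t(\text{deny}).
\]

On this reading, interrogation tactics and fatigue shift the probabilities and utilities over time, which is how a suspect who begins by denying can later cross the threshold into confessing.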

The article is here.

Supervising AI Growth

by Tucker Davey
The Future of Life
Originally posted October 26, 2016

Here is an excerpt:

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, “it’s effectively just one machine evaluating another machine’s behavior.”

Ideally, “each time you build a more powerful machine, it effectively models human values and does what humans would like,” says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn’t like.

In order to address these control issues, Christiano is working on an “end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant.” His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.
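
The excerpt describes the training setup only in outline. A toy sketch of the core idea, that an already-trained evaluator scores a newer system's behavior while humans are consulted on a shrinking fraction of cases, might look like the following (all function names and values are hypothetical, and no particular machine learning library is assumed):

    # Toy sketch: a previously trained evaluator scores a newer model's behavior;
    # a human is consulted only on a shrinking fraction of cases.
    import random

    def human_evaluate(behavior):
        # Stand-in for a human judgment about whether the behavior is desirable.
        return 1.0 if "helpful" in behavior else 0.0

    def machine_evaluate(evaluator, behavior):
        # Stand-in for a learned model scoring another model's behavior.
        return evaluator(behavior)

    def evaluate_generation(behaviors, evaluator, human_fraction):
        scores = []
        for behavior in behaviors:
            if random.random() < human_fraction:
                scores.append(human_evaluate(behavior))      # human still in the loop
            else:
                scores.append(machine_evaluate(evaluator, behavior))
        return scores

    behaviors = ["helpful answer", "evasive answer", "helpful plan"]
    previous_model = lambda b: 0.9 if "helpful" in b else 0.1  # prior-generation evaluator
    for human_fraction in (1.0, 0.5, 0.0):                     # trainers fulfill a smaller role
        print(human_fraction, evaluate_generation(behaviors, previous_model, human_fraction))

The worry raised in the excerpt concerns the last step of such a progression, when the human fraction approaches zero and it is effectively just one machine evaluating another machine's behavior.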

The article is here.

Tuesday, November 15, 2016

The Inevitable Evolution of Bad Science

Ed Yong
The Atlantic
Originally published September 21, 2016

Here is an excerpt:

In the model, as in real academia, positive results are easier to publish than negative ones, and labs that publish more get more prestige, funding, and students. They also pass their practices on. With every generation, one of the oldest labs dies off, while one of the most productive ones reproduces, creating an offspring that mimics the research style of the parent. That’s the equivalent of a student from a successful team starting a lab of their own.

Over time, and across many simulations, the virtual labs inexorably slid towards less effort, poorer methods, and almost entirely unreliable results. And here’s the important thing: Unlike the hypothetical researcher I conjured up earlier, none of these simulated scientists are actively trying to cheat. They used no strategy, and they behaved with integrity. And yet, the community naturally slid towards poorer methods. What the model shows is that a world that rewards scientists for publications above all else—a world not unlike this one—naturally selects for weak science.
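
The excerpt gives only the broad rules of the simulation. A simplified sketch of that selection dynamic, with illustrative parameter values rather than those of the original model, could look like this:

    # Simplified sketch of the lab-evolution dynamic described above: lower effort
    # yields more publishable positive results, the most productive lab reproduces,
    # the oldest lab dies off, and average effort drifts downward over generations.
    import random

    def publications(effort):
        # Less effort -> more (but less reliable) positive results to publish.
        return random.gauss(10.0 / effort, 1.0)

    labs = [{"effort": 1.0, "age": 0} for _ in range(20)]
    for generation in range(200):
        for lab in labs:
            lab["age"] += 1
        parent = max(labs, key=lambda lab: publications(lab["effort"]))  # most productive reproduces
        oldest = max(labs, key=lambda lab: lab["age"])
        labs.remove(oldest)                                              # oldest lab dies off
        child_effort = max(0.1, parent["effort"] + random.gauss(0.0, 0.05))
        labs.append({"effort": child_effort, "age": 0})                  # offspring mimics the parent

    print(sum(lab["effort"] for lab in labs) / len(labs))               # mean effort tends to fall

No lab in this sketch ever "cheats"; effort declines simply because the reward, publication count, is highest for the labs that invest the least in rigor.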

“The model may even be optimistic,” says Brian Nosek from the Center for Open Science, because it doesn’t account for our unfortunate tendency to justify and defend the status quo. He notes, for example, that studies in the social and biological sciences are, on average, woefully underpowered—they are too small to find reliable results.

The article is here.

Scientists “Switch Off” Self-Control Using Brain Stimulation

By Catherine Caruso
Scientific American
Originally published on October 19, 2016

Imagine you are faced with the classic thought experiment dilemma: You can take a pile of money now or wait and get an even bigger stash of cash later on. Which option do you choose? Your level of self-control, researchers have found, may have to do with a region of the brain that lets us take the perspective of others—including that of our future self.

A study, published today in Science Advances, found that when scientists used noninvasive brain stimulation to disrupt a brain region called the temporoparietal junction (TPJ), people appeared less able to see things from the point of view of their future selves or of another person, and consequently were less likely to share money with others and more inclined to opt for immediate cash instead of waiting for a larger bounty at a later date.

The TPJ, which is located where the temporal and parietal lobes meet, plays an important role in social functioning, particularly in our ability to understand situations from the perspectives of other people. However, according to Alexander Soutschek, an economist at the University of Zurich and lead author on the study, previous research on self-control and delayed gratification has focused instead on the prefrontal brain regions involved in impulse control.
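
The excerpt does not give the study's task parameters. The now-versus-later dilemma it opens with is conventionally modeled with a temporal discounting function; a standard illustration (not the authors' own model) is the hyperbolic form

\[
V = \frac{A}{1 + kD},
\]

where \(A\) is the delayed amount, \(D\) the delay, and \(k\) an individual's discount rate. Taking the immediate pile of money corresponds to a \(k\) large enough that the discounted value \(V\) of the later, larger amount falls below the cash on offer now.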

The article is here.

Monday, November 14, 2016

Walter Sinnott-Armstrong discusses artificial intelligence and morality

By Joyce Er
Duke Chronicle
Originally published October 25, 2016

How do we create artificial intelligence that serves mankind’s purposes? Walter Sinnott-Armstrong, Chauncey Stillman professor of practical ethics, led a discussion Monday on the subject.

Through an open discussion funded by the Future of Life Institute, Sinnott-Armstrong raised issues at the intersection of computer science and ethical philosophy. Among the tricky questions Sinnott-Armstrong tackled were how to program artificial intelligence so that it would not eliminate the human race and the legal and moral issues raised by self-driving cars.

Sinnott-Armstrong noted that artificial intelligence and morality are not as irreconcilable as some might believe, despite one being regarded as highly structured and the other seen as highly subjective. He highlighted various uses for artificial intelligence in resolving moral conflicts, such as improving criminal justice and locating terrorists.

The article is here.

A Bright Robot Future Awaits, Once This Downer Election Is Over

By Andrew Mayeda
Bloomberg
Originally published October 24, 2016

Here is an excerpt:

‘Singularity Is Near’

An hour’s drive away, in San Francisco, the influx of tech workers has helped push the median single-family home price to $1.26 million. Private buses carry them to jobs at Apple Inc., Alphabet Inc.’s Google, or Facebook. Meanwhile, one former mayor has proposed using a decommissioned aircraft carrier to house the city’s homeless, who throng the sidewalks along Market Street, home to Uber and Twitter Inc.

How much will the “second machine age” deepen such divisions? Last month, a trio of International Monetary Fund economists came up with some chilling answers. Even if humans retain their creative edge over robots, they found, it will likely take two decades before productivity gains outweigh the downward pressure on wages from automation; meanwhile, “inequality will be worse, possibly dramatically so.”

And if the robots become perfect substitutes, the paper envisages an extreme scenario in which labor becomes wholly redundant as “capital takes over the entire economy.” The IMF economists even invoke futurist Ray Kurzweil’s 2005 bestseller, “The Singularity Is Near.”

Silicon Valley executives say alarm bells have been ringing for decades about job-killing technology, and they’re usually false alarms.

The article is here.

Sunday, November 13, 2016

The VSED Exit: A Way to Speed Up Dying, Without Asking Permission

by Paula Span
The New York Times
Originally published October 21, 2016

Here is an excerpt:

In end-of-life circles, this option is called VSED (usually pronounced VEEsed), for voluntarily stopping eating and drinking. It causes death by dehydration, usually within seven to 14 days. To people with serious illnesses who want to hasten their deaths, a small but determined group, VSED can sound like a reasonable exit strategy.

Unlike aid in dying, now legal in five states, it doesn't require governmental action or physicians' authorization. Patients don't need a terminal diagnosis, and they don't have to prove mental capacity. They do need resolve.

"It's for strong-willed, independent people with very supportive families," said Dr. Timothy Quill, a veteran palliative care physician at the University of Rochester Medical Center.

He was speaking at a conference on VSED, billed as the nation's first, at Seattle University School of Law this month. It drew about 220 participants -- physicians and nurses, lawyers, bioethicists, academics of various stripes, theologians, hospice staff. (Disclosure: I was also a speaker, and received an honorarium and some travel costs.)

What the gathering made clear was that much about VSED remains unclear.

Is it legal?

For a mentally competent patient, able to grasp and communicate decisions, probably so, said Thaddeus Pope, director of the Health Law Institute at Mitchell Hamline School of Law in St. Paul, Minn. His research has found no laws expressly prohibiting competent people from VSED, and the right to refuse medical and health care intervention is well established.

The article is here.

Saturday, November 12, 2016

Why Suicide Keeps Rising for Middle-Aged Men

By Lisa Esposito
US News and World Report
Originally published Oct. 19, 2016

Suicide rates in the U.S. continue to rise, and working-age adults – particularly men – make up the largest increase, according to the Centers for Disease Control and Prevention. Middle-aged men in the 45 to 60 range experienced a 43 percent increase in suicide deaths from 1997 to 2014, and the rise has been even sharper since 2005. Untreated mental illness, the Great Recession, work-related issues and men's reluctance to reach out for help converge to put them at greater risk for taking their own lives. And because men are more likely than women to use a gun, their suicide attempts are more often fatal.

Historically, suicide rates have always been higher for men, says Dr. Alex Crosby, surveillance branch chief in the CDC's Division of Violence Prevention. "But what we've seen in these past few years is rates have been going up among males and females," he told journalists attending a National Press Foundation conference in September. "Still, rates are higher among males – about four times higher." For suicide attempts that don't prove fatal, the balance changes, with two to three times more females than males trying to take their own lives.

"In about half of the suicides in the United States, the mechanism or the method was a firearm," Crosby says. Males are more likely to use firearms, while poison is more common for females. However, he notes, "When you look at suicide in the military, females choose firearms almost as much as men."

The article is here.

Moral Dilemmas and Guilt

Patricia S. Greenspan
Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition
Vol. 43, No. 1 (Jan., 1983), pp. 117-125

In 'Moral dilemmas and ethical consistency', Ruth Marcus argues that moral dilemmas are 'real': there are cases where an agent ought to perform each of two incompatible actions. Thus, a doctor with two patients equally in need of his attention ought to save each, even though he cannot save both. By claiming that his dilemma is real, I take Marcus to be denying (rightly) that it is merely epistemic - a matter of uncertainty as to which patient to save. Rather, she wants to say, the moral code yields two opposing recommendations, both telling him what he ought to do. The code is not inconsistent, however, as long as its rules are all obeyable in some possible world; and it is not deficient as a guide to action, as long as it contains a second-order principle, directing an agent to avoid situations of conflict. Where a dilemma does arise, though, the agent is guilty no matter what he does.

This last point seems implausible for the doctor's case; but here I shall consider a case which does fit Marcus's comments on guilt - if not all her views on the nature of moral dilemma. I think that she errs, first of all, in counting as a dilemma any case where there are some considerations favoring each of two incompatible actions, even if it is clear that one of them is right. For instance, in the case of withholding weapons from someone who has gone mad, it would be unreasonable for the agent to feel guilty about breaking his promise, since he has done exactly as he should. But secondly, even in Marcus's 'strong' cases, I do not think that dilemmas must be taken as yielding opposing all-things-considered ought-judgments, viewed as recommendations for action, rather than stopping with judgments of obligation, or reports of commitments. The latter do not imply 'can' (in the sense of physical possibility); and where they are jointly unsatisfiable, and supported by reasons of equal weight, I think we should say that the moral code yields no particular recommendations, rather than two which conflict.
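
A compact way to state the consistency point, in standard deontic notation (this formalization is a gloss of the abstract, not Greenspan's or Marcus's own symbolism): a genuine dilemma has the form

\[
O(A), \qquad O(B), \qquad \neg\Diamond(A \wedge B),
\]

where \(O\) is 'ought' and \(\Diamond\) is possibility. Marcus's claim, as summarized above, is that a code generating such a triple is still consistent so long as there is some possible world in which all its rules are jointly obeyable, the conflict arising from contingent circumstances rather than from the rules themselves. Greenspan's alternative is to read \(O(A)\) and \(O(B)\) as judgments of obligation that do not each imply 'can', so that no pair of conflicting all-things-considered recommendations follows.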

The article is here.