Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, June 11, 2018

Discerning bias in forensic psychological reports in insanity cases

Tess M. S. Neal
Behavioral Sciences & the Law (2018).

Abstract

This project began as an attempt to develop systematic, measurable indicators of bias in written forensic mental health evaluations focused on the issue of insanity. Although forensic clinicians observed in this study did vary systematically in their report‐writing behaviors on several of the indicators of interest, the data are most useful in demonstrating how and why bias is hard to ferret out. Naturalistic data were used in this project (i.e., 122 real forensic insanity reports), which in some ways is a strength. However, given the nature of bias and the problem of inferring whether a particular judgment is biased, naturalistic data also made arriving at conclusions about bias difficult. This paper describes the nature of bias – including why it is a special problem in insanity evaluations – and why it is hard to study and document. It details the efforts made in an attempt to find systematic indicators of potential bias, and how this effort was successful in part, but also how and why it failed. The lessons these efforts yield for future research are described. We close with a discussion of the limitations of this study and future directions for work in this area.

The research is here.

Can Morality Be Engineered In Artificial General Intelligence Systems?

Abhijeet Katte
Analytics India Magazine
Originally published May 10, 2018

Here is an excerpt:

This report, Engineering Moral Agents – from Human Morality to Artificial Morality, discusses challenges in engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide range of backgrounds, including philosophy. AGI-focused research is evolving toward the formalization of moral theories to serve as a base for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland described a project on teaching formal ethics to computer-science students, in which the group built a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.

Another study, titled Towards Moral Autonomous Systems, from a group of European researchers, states that there is a real need today for a functional system of ethical reasoning, as AI systems that function as part of our society are ready to be deployed. One of the suggestions is that every assisted-living AI system have a “Why did you do that?” button which, when pressed, causes the robot to explain why it carried out its previous action.
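The "Why did you do that?" button amounts to recording a rationale alongside each action so the most recent decision can be replayed on demand. Here is a minimal sketch of that idea; the class, method names, and example action are hypothetical illustrations, not part of any system described in the study.

```python
# Sketch of an agent that logs a human-readable rationale for each action,
# so a "Why did you do that?" request can explain the latest decision.
# All names here are illustrative assumptions, not from the cited study.

class ExplainableAgent:
    def __init__(self):
        self._log = []  # (action, rationale) pairs, newest last

    def act(self, action, rationale):
        """Perform an action and store the reason it was chosen."""
        self._log.append((action, rationale))
        return action

    def why_did_you_do_that(self):
        """The 'button': explain the most recent action."""
        if not self._log:
            return "No actions have been taken yet."
        action, rationale = self._log[-1]
        return f"I did '{action}' because {rationale}."

agent = ExplainableAgent()
agent.act("dim the lights", "the resident said they were going to sleep")
print(agent.why_did_you_do_that())
# → I did 'dim the lights' because the resident said they were going to sleep.
```

The key design point is that the rationale is captured at decision time rather than reconstructed afterward, which is what makes the explanation trustworthy.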

The information is here.

Sunday, June 10, 2018

Can precision medicine do for depression what it’s done for cancer? It won’t be easy

Megan Thielking
Statnews.com
Originally posted May 9, 2018

At a growing number of research centers across the country, scientists are scanning brains of patients with depression, drawing their blood, asking about their symptoms, and then scouring that data for patterns. The goal: pinpoint subtypes of depression, then figure out which treatments have the best chance of success for each particular variant of the disease.

The idea of precision medicine for depression is quickly gaining ground — just last month, Stanford announced it is establishing a Center for Precision Mental Health and Wellness. And depression is one of many diseases targeted by All of Us, the National Institutes of Health campaign launched this month to collect DNA and other data from 1 million Americans. Doctors have been treating cancer patients this way for years, but the underlying biology of mental illness is not as well understood.

“There’s not currently a way to match people with treatment,” said Dr. Madhukar Trivedi, a depression researcher at the University of Texas Southwestern Medical Center. “That’s why this is a very exciting field to research.”

The information is here.

Saturday, June 9, 2018

Doing good vs. avoiding bad in prosocial choice: A refined test and extension of the morality preference hypothesis

Ben Tappin and Valerio Capraro
Preprint

Abstract

Prosociality is fundamental to the success of human social life, and, accordingly, much research has attempted to explain human prosocial behavior. Capraro and Rand (2018) recently advanced the hypothesis that prosocial behavior in anonymous, one-shot interactions is not driven by outcome-based social preferences for equity or efficiency, as classically assumed, but by a generalized morality preference for “doing the right thing”. Here we argue that the key experiments reported in Capraro and Rand (2018) comprise prominent methodological confounds and open questions that bear on influential psychological theory. Specifically, their design confounds: (i) preferences for efficiency with self-interest; and (ii) preferences for action with preferences for morality. Furthermore, their design fails to dissociate the preference to do “good” from the preference to avoid doing “bad”. We thus designed and conducted a preregistered, refined and extended test of the morality preference hypothesis (N=801). Consistent with this hypothesis and the results of Capraro and Rand (2018), our findings indicate that prosocial behavior in anonymous, one-shot interactions is driven by a preference for doing the morally right thing. Inconsistent with influential psychological theory, however, our results suggest the preference to do “good” is as potent as the preference to avoid doing “bad” in prosocial choice.

The preprint is here.

Friday, June 8, 2018

The pros and cons of having sex with robots

Karen Turner
www.vox.com
Originally posted January 18, 2018

Here is an excerpt:

Karen Turner: Where does sex robot technology stand right now?

Neil McArthur:

When people have this idea of a sex robot, they think it’s going to look like a human being, it’s gonna walk around and say seductive things and so on. I think that’s actually the slowest-developing part of this whole nexus of sexual technology. It will come — we are going to have realistic sex robots. But there are a few technical hurdles to creating humanoid robots that are proving fairly stubborn. Making them walk is one of them. And if you use Siri or any of those others, you know that AI is proving sort of stubbornly resistant to becoming realistic.

But I think that when you look more broadly at what’s happening with sexual technology, virtual reality in general has just taken off. And it’s being used in conjunction with something called teledildonics, which is kind of an odd term. But all it means is actual devices that you hook up to yourself in various ways that sync with things that you see onscreen. It’s truly amazing what’s going on.

(cut)

When you look at the ethical or philosophical considerations, I think there’s two strands. One is the concerns people have, and two, which I think maybe doesn’t get as much attention, in the media at least, is the potential advantages.

The concerns have to do with the psychological impact. As you saw with those Apple shareholders [who asked Apple to help protect children from digital addiction], we’re seeing a lot of concern about the impact that technology is having on people’s lives right now. Many people feel that anytime you’re dealing with sexual technology, those sorts of negative impacts really become intensified — specifically, social isolation, people cutting themselves off from the world.

The article is here.

The Ethics of Medicaid’s Work Requirements and Other Personal Responsibility Policies

Harald Schmidt and Allison K. Hoffman
JAMA. Published online May 7, 2018. doi:10.1001/jama.2018.3384

Here are two excerpts:

CMS emphasizes health improvement as the primary rationale, but the agency and interested states also favor work requirements for their potential to limit enrollment and spending and out of an ideological belief that everyone “do their part.” For example, an executive order by Kentucky’s Governor Matt Bevin announced that the state’s entire Medicaid expansion would be unaffordable if the waiver were not implemented, threatening to end expansion if courts strike down “one or more” program elements. Correspondingly, several nonexpansion states have signaled that the option of introducing work requirements might make them reconsider expansion—potentially covering more people but arguably in a way inconsistent with Medicaid’s broader objectives.

Work requirements have attracted the most attention but are just one of many policies CMS has encouraged as part of apparent attempts to promote personal responsibility in Medicaid. Other initiatives tie levels of benefits to confirming eligibility annually, paying premiums on time, meeting wellness program criteria such as completing health risk assessments, or not using the emergency department (ED) for nonemergency care.

(cut)

It is troubling that these policies could result in some portion of previously eligible individuals being denied necessary medical care because of unduly demanding requirements. Moreover, even if reduced enrollment were to decrease Medicaid costs, it might not reduce medical spending overall. Laws including the Emergency Medical Treatment and Labor Act still require stabilization of emergency medical conditions, entailing more expensive and less effective care.

The article is here.

Thursday, June 7, 2018

Embracing the robot

John Danaher
aeon.co
Originally posted March 19, 2018

Here is an excerpt:

Contrary to the critics, I believe our popular discourse about robotic relationships has become too dark and dystopian. We overstate the negatives and overlook the ways in which relationships with robots could complement and enhance existing human relationships.

In Blade Runner 2049, the true significance of K’s relationship with Joi is ambiguous. It seems that they really care for each other, but this could be an illusion. She is, after all, programmed to serve his needs. The relationship is an inherently asymmetrical one. He owns and controls her; she would not survive without his good will. Furthermore, there is a third-party lurking in the background: she has been designed and created by a corporation, which no doubt records the data from her interactions, and updates her software from time to time.

This is a far cry from the philosophical ideal of love. Philosophers emphasise the need for mutual commitment in any meaningful relationship. It’s not enough for you to feel a strong, emotional attachment to another; they have to feel a similar attachment to you. Robots might be able to perform love, saying and doing all the right things, but performance is insufficient.

The information is here.

Protecting confidentiality in genomic studies

MIT Press Release
Originally released May 7, 2018

Genome-wide association studies, which look for links between particular genetic variants and incidence of disease, are the basis of much modern biomedical research.

But databases of genomic information pose privacy risks. From people’s raw genomic data, it may be possible to infer their surnames and perhaps even the shapes of their faces. Many people are reluctant to contribute their genomic data to biomedical research projects, and an organization hosting a large repository of genomic data might conduct a months-long review before deciding whether to grant a researcher’s request for access.

In a paper published in Nature Biotechnology (https://doi.org/10.1038/nbt.4108), researchers from MIT and Stanford University present a new system for protecting the privacy of people who contribute their genomic data to large-scale biomedical studies. Where earlier cryptographic methods were so computationally intensive that they became prohibitively time consuming for more than a few thousand genomes, the new system promises efficient privacy protection for studies conducted over as many as a million genomes.
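One common cryptographic building block behind this kind of privacy protection is additive secret sharing: each participant's genotype is split into random shares held by different servers, so no single server sees the raw data, yet aggregates such as allele counts can still be computed. The toy sketch below illustrates that general technique only; it is not the specific protocol from the Nature Biotechnology paper, and the numbers are made up.

```python
# Toy illustration of additive secret sharing for a genomic aggregate.
# No single server learns any participant's genotype, but combining the
# three server totals recovers the overall allele count.

import random

PRIME = 2_147_483_647  # all arithmetic is done modulo a large prime

def share(value, n=3):
    """Split `value` into n random additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

genotypes = [0, 1, 2, 1, 0]  # copies of an allele per participant (hypothetical)

# Distribute one share of each genotype to each of three servers.
server_totals = [0, 0, 0]
for g in genotypes:
    for i, s in enumerate(share(g)):
        server_totals[i] = (server_totals[i] + s) % PRIME

# Only the combined totals reveal the aggregate allele count (here, 4).
allele_count = sum(server_totals) % PRIME
print(allele_count)  # → 4
```

Each server's running total looks like random noise on its own; the signal appears only when the totals are combined, which is what lets studies compute statistics without pooling raw genomes.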

The release is here.

Wednesday, June 6, 2018

The LAPD’s Terrifying Policing Algorithm: Yes It’s Basically ‘Minority Report’

Dan Robitzski
Futurism.com
Originally posted May 11, 2018

The Los Angeles Police Department was recently forced to release documents about their predictive policing and surveillance algorithms, thanks to a lawsuit from the Stop LAPD Spying Coalition (which turned the documents over to In Justice Today). And what do you think the documents have to say?

If you guessed “evidence that policing algorithms, which require officers to keep a checklist of (and keep an eye on) 12 people deemed most likely to commit a crime, are continuing to propagate a vicious cycle of disproportionately high arrests of black Angelenos, as well as other racial minorities,” you guessed correctly.

Algorithms, no matter how sophisticated, are only as good as the information that’s provided to them. So when you feed an AI data from a city where there’s a problem of demonstrably, mathematically racist over-policing of neighborhoods with concentrations of people of color, and then have it tell you who the police should be monitoring, the result will only be as great as the process. And the process? Not so great!
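The feedback loop described above can be made concrete with a toy simulation: if patrols go wherever past arrest records are thickest, and patrolling a place generates more records there, the model keeps "confirming" its own starting bias even when true crime rates are identical. The neighborhood labels and numbers below are entirely hypothetical.

```python
# Toy simulation of a runaway feedback loop in predictive policing.
# Both neighborhoods have the SAME true crime rate, but one starts with
# more recorded arrests due to historical over-policing.

true_rate = {"A": 0.1, "B": 0.1}   # identical underlying rates
recorded = {"A": 20, "B": 10}      # biased historical arrest records

for _ in range(10):
    # The "algorithm": patrol the neighborhood with the most records.
    target = max(recorded, key=recorded.get)
    # Patrolling surfaces arrests at the (equal) true rate there...
    recorded[target] += 10 * true_rate[target]
    # ...so the targeted neighborhood's record grows while the other's doesn't.

print(recorded)  # A keeps accumulating arrests; B stays flat at 10
```

After ten rounds, neighborhood A's record has grown from 20 to 30 while B's never moves, even though nothing about the underlying behavior differs — the disparity is produced entirely by where the data came from.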

The article is here.