Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, September 9, 2017

Will Technology Help Us Transcend the Human Condition?

Michael Hauskeller & Kyle McNease

Transcendence used to be the end of a spiritual quest and endeavour. Not anymore. Today we are more likely to believe that if anything can help us transcend the human condition it is not God or some kind of religious communion, but science and technology. Confidence is high that, if we do things right, and boldly and without fear embrace the new opportunities that technological progress grants us, we will soon be able to accomplish things that no human has ever done, or even imagined doing, before. With luck, we will be unimaginably smart and powerful, and virtually immortal, all thanks to a development that seems unstoppable and that has already surpassed all reasonable expectations.

Once upon a time, not so long ago, we used maps and atlases to find our way around. Occasionally we even had to stop and ask someone not named Siri or Cortana if we were indeed on the correct route. Today, our cars are navigated by satellites that triangulate our location in real time while circling the earth at thousands of miles per hour, and self-driving cars for everyone are just around the corner. Soon we may not even need cars anymore. Why go somewhere if technology can bring the world to us? Already we are in a position to do most of what we have to or want to do from home: get an education, work, do our shopping, our banking, our communication, all thanks to the internet, which 30 years ago did not exist and is now, to many of us, indispensable. Those who are coming of age today find it difficult to imagine a world without it. Currently, there are over 3.2 billion people connected to the World Wide Web, 2 billion of whom live in developing countries. Most of them connect to the Web via increasingly versatile and powerful mobile devices that few people would have thought possible a couple of generations ago. Soon we may be able to dispense even with mobile devices and do all of it in our bio-upgraded heads.

In terms of the technology we are using every day without a second thought, the world has changed dramatically, and it continues to do so. Computation is now nearly ubiquitous; people seem constantly attached to their cellular phones, iPads, and laptops, enthusiastically endorsing their own progressive cyborgization. And connectivity does not stop at the level of human beings: even our household objects and devices are connected to the internet and communicate with each other, using their own secret language and taking care of things largely without the need for human intervention and control. The world we have built for ourselves thrives on a steady diet of zeroes and ones that have now become our co-creators, continuing the world-building in often unexpected ways.

The paper is here.

Friday, September 8, 2017

Study questions why thousands with developmental disabilities are prescribed antipsychotics

Peter Goffin
The Toronto Star
Originally published August 23, 2017

Researchers with the Centre for Addiction and Mental Health and the Institute for Clinical Evaluative Sciences have called for “guidelines and training around antipsychotic prescribing and monitoring” for doctors, pharmacists and care home staff after finding that nearly 40 per cent of people with developmental disabilities were prescribed antipsychotic drugs at some point over a six-year period.

One-third of the patients prescribed antipsychotics had no documented diagnosis of mental illness, according to the study, which tracked more than 51,000 people with developmental disabilities who were eligible for provincial drug benefits.

“We don’t know, with the data, why this one person was prescribed or this (other) person was prescribed so we’re trying to almost guess at why,” said psychologist Yona Lunsky, lead author of the study.

“It could be behaviour, aggression, self-injury, agitation.”

For people with developmental disabilities who live in group homes, the rate of antipsychotic prescriptions was even higher.

About 56 per cent of developmentally disabled group home residents were prescribed antipsychotics. Of those, around 43 per cent had no documented mental health issues.

The article is here.

Errors in the 2017 APA Clinical Practice Guideline for the Treatment of PTSD: What the Data Actually Says

Dominguez, S. and Lee, C.
Front. Psychol., 22 August 2017

Abstract

The American Psychological Association (APA) Practice Guidelines for the Treatment of Posttraumatic Stress Disorder (PTSD) concluded that there was strong evidence for cognitive behavioral therapy (CBT), cognitive processing therapy (CPT), cognitive therapy (CT), and exposure therapy yet weak evidence for eye movement desensitization and reprocessing (EMDR). This is despite the findings from an associated systematic review, which concluded that EMDR leads to loss of PTSD diagnosis and symptom reduction. Depression symptoms were also found to improve more with EMDR than with control conditions. In that review, EMDR was marked down on strength of evidence (SOE) for symptom reduction for PTSD. However, there were several problems with the conclusions of that review. Firstly, in assessing the evidence in one of the studies, the reviewers chose an incorrect measure that skewed the data. We recalculated a meta-analysis with a more appropriate measure and found the SOE improved. The resulting effect size for EMDR on PTSD symptom reduction compared to a control condition was large for studies that meet the APA inclusion criteria (SMD = 1.28) and the heterogeneity was low (I² = 43%). Secondly, even if the original measure was chosen, we highlight inconsistencies in the way SOE was assessed for EMDR, CT, and CPT. Thirdly, we highlight two papers that were omitted from the analysis. One of these was omitted without any apparent reason; it found EMDR superior to a placebo control. The other study was published in 2015 and should therefore have fallen within the scope of the APA guidelines, which were published in 2017. The inclusion of either study would have resulted in an improvement in SOE. Including both studies results in a standardized mean difference and confidence intervals that were better for EMDR than for CPT or CT. Therefore, the SOE should have been rated as moderate and EMDR assessed as at least equivalent to these CBT approaches in the APA guidelines. This would bring the APA guidelines in line with recent practice guidelines from other countries. Less critical, but also important, were several inaccuracies in assessing the risk of bias and the failure to consider studies supporting strong gains of EMDR at follow-up.
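For readers unfamiliar with the statistics quoted above, a pooled standardized mean difference and the I² heterogeneity index can be computed with a DerSimonian-Laird random-effects model. The sketch below uses placeholder study values of ours, not the review's actual data:

```python
import math

# A sketch of DerSimonian-Laird random-effects pooling: combine per-study
# standardized mean differences (SMDs) into one effect and estimate I².
# The (smd, variance) pairs below are placeholders, not the review's data.
studies = [(0.80, 0.08), (1.60, 0.10), (1.35, 0.09)]

w = [1 / v for _, v in studies]                      # fixed-effect weights
fe = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)
q_stat = sum(wi * (d - fe) ** 2 for wi, (d, _) in zip(w, studies))
df = len(studies) - 1

# I²: the share of total variability due to between-study heterogeneity
i2 = max(0.0, (q_stat - df) / q_stat) * 100 if q_stat > 0 else 0.0

# tau²: between-study variance, folded into the random-effects weights
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q_stat - df) / c)
w_re = [1 / (v + tau2) for _, v in studies]
smd = sum(wi * d for wi, (d, _) in zip(w_re, studies)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

print(f"SMD = {smd:.2f}, 95% CI [{smd - 1.96 * se:.2f}, {smd + 1.96 * se:.2f}], "
      f"I² = {i2:.0f}%")
```

With these placeholder inputs the pooled SMD lands near 1.2 with I² around 48%; the review's actual figures (SMD = 1.28, I² = 43%) come from the real study-level data.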

The article is here.

Thursday, September 7, 2017

Harm to self outweighs benefit to others in moral decision making

Lukas J. Volz, B. Locke Welborn, Matthias S. Gobel, Michael S. Gazzaniga, and Scott T. Grafton
PNAS 2017; published ahead of print July 10, 2017

Abstract

How we make decisions that have direct consequences for ourselves and others forms the moral foundation of our society. Whereas economic theory contends that humans aim at maximizing their own gains, recent seminal psychological work suggests that our behavior is instead hyperaltruistic: We are more willing to sacrifice gains to spare others from harm than to spare ourselves from harm. To investigate how such egoistic and hyperaltruistic tendencies influence moral decision making, we examined trade-off decisions combining monetary rewards and painful electric shocks, administered to the participants themselves or to an anonymous other. Whereas we replicated the notion of hyperaltruism (i.e., the willingness to forgo reward to spare others from harm), we observed strongly egoistic tendencies in participants’ unwillingness to harm themselves for others’ benefit. The moral principle guiding intersubject trade-off decision making observed in our study is best described as egoistically biased altruism, with important implications for our understanding of economic and social interactions in our society.

Significance

Principles guiding decisions that affect both ourselves and others are of prominent importance for human societies. Previous accounts in economics and psychological science have often described decision making as either categorically egoistic or altruistic. Instead, the present work shows that genuine altruism is embedded in a context-specific egoistic bias. Participants were willing both to forgo monetary reward to spare the other from painful electric shocks and to suffer painful electric shocks to secure monetary reward for the other. However, across all trials and conditions, participants accrued more reward and less harm for the self than for the other person. These results characterize human decision makers as egoistically biased altruists, with important implications for psychology, economics, and public policy.

The article is here.

Are morally good actions ever free?

Cory J. Clark, Adam Shniderman, Jamie Luguri, Roy Baumeister, and Peter Ditto
SSRN Electronic Journal, August 2017

Abstract

A large body of work has demonstrated that people ascribe more responsibility to morally bad actions than to both morally good and morally neutral ones, creating the impression that people do not attribute responsibility to morally good actions. The present work demonstrates that this is not so: People attributed more free will to morally good actions than to morally neutral ones (Studies 1a-1b). Studies 2a-2b distinguished the underlying motives for ascribing responsibility to morally good and bad actions. Free will ascriptions for morally bad actions were driven predominantly by affective punitive responses. Free will judgments for morally good actions were similarly driven by affective reward responses, but also by less affectively charged and more pragmatic considerations (the perceived utility of reward, the normativity of the action, and the willpower required to perform the action). Responsibility ascriptions to morally good actions may be more carefully considered, leading to generally weaker, but more contextually sensitive, free will judgments.

The research is here.

Wednesday, September 6, 2017

The importance of building ethics into artificial intelligence

Kriti Sharma
Mashable
Originally published August 18, 2017

Here is an excerpt:

Humans possess inherent social, economic and cultural biases; unfortunately, they are woven into social fabrics around the world. AI, by contrast, offers the business community a chance to eliminate such biases from its global operations.

The onus is on the tech community to build technology that utilizes data from relevant, trusted sources to embrace a diversity of culture, knowledge, opinions, skills and interactions.

Indeed, AI operating in the business world today performs repetitive tasks well, learns on the job and even incorporates human social norms into its work. However, AI also spends a significant amount of time scouring the web and its own conversational history for additional context that will inform future interactions with human counterparts.

This prevalence of well-trodden data sets and partial information on the internet presents a challenge and an opportunity for AI developers. When built with responsible business and social practices in mind, AI technology has the potential to consistently – and ethically – deliver products and services to people who need them. And do so without the omnipresent human threat of bias.

Ultimately, we need to create innately diverse AI. As an industry-focused tech community, we must develop effective mechanisms to filter out biases, as well as any negative sentiment, from the data that AI learns from, so that the technology does not perpetuate stereotypes. Unless we build AI using diverse teams, datasets and design, we risk repeating the fundamental inequality of previous industrial revolutions.

The article is here.

The Nuremberg Code 70 Years Later

Jonathan D. Moreno, Ulf Schmidt, and Steven Joffe
JAMA. Published online August 17, 2017.

Seventy years ago, on August 20, 1947, the American military tribunal in Nuremberg, Germany, delivered its verdict in the trial of 23 doctors and bureaucrats accused of war crimes and crimes against humanity for their roles in cruel and often lethal concentration camp medical experiments. As part of its judgment, the court articulated a 10-point set of rules for the conduct of human experiments that has come to be known as the Nuremberg Code. Among other requirements, the code called for the “voluntary consent” of the human research subject, an assessment of risks and benefits, and assurances of competent investigators. These concepts have become an important reference point for the ethical conduct of medical research. Yet there has in the past been considerable debate among scholars about the code’s authorship, scope, and legal standing in both civilian and military science. Nonetheless, the Nuremberg Code has undoubtedly been a milestone in the history of biomedical research ethics.1-3

Writings on medical ethics, laws, and regulations in a number of jurisdictions and countries, including a detailed and sophisticated set of guidelines from the Reich Ministry of the Interior in 1931, set the stage for the code. The same focus on voluntariness and risk that characterizes the code also suffuses these guidelines. What distinguishes the code is its context. As lead prosecutor Telford Taylor emphasized, although the Doctors’ Trial was at its heart a murder trial, it clearly implicated the ethical practices of medical experimenters and, by extension, the medical profession’s relationship to the state, understood as an organized community living under a particular political structure. The embrace of Nazi ideology by German physicians, and the subsequent participation of some of their most distinguished leaders in the camp experiments, demonstrate the importance of professional independence from, and resistance to, the ideological and geopolitical ambitions of the authoritarian state.

The article is here.

Tuesday, September 5, 2017

Ethical behaviour of physicians and psychologists: similarities and differences

Ferencz Kaddari M, Koslowsky M, Weingarten MA
Journal of Medical Ethics. Published Online First: 18 August 2017.

Abstract

Objective 

To compare the coping patterns of physicians and clinical psychologists when confronted with clinical ethical dilemmas and to explore consistency across different dilemmas.

Population

88 clinical psychologists and 149 family physicians in Israel.

Method 

Six dilemmas representing different ethical domains were selected from the literature. Vignettes were composed for each dilemma, and seven possible behavioural responses for each were proposed, scaled from most to least ethical. The vignettes were presented to both family physicians and clinical psychologists.

Results 

Psychologists’ aggregated mean ethical intention score was significantly higher than that of the physicians (F(6, 232)=22.44, p<0.001, η²=0.37). Psychologists showed higher ethical intent for two dilemmas: issues of payment (they would continue treating a non-paying patient while physicians would not) and dual relationships (they would avoid treating the son of a colleague). In the other four vignettes, psychologists and physicians responded in much the same way. The highest ethical intent scores, for both psychologists and physicians, were for confidentiality and for a colleague's inappropriate practice due to personal problems.
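The reported effect size can be cross-checked against the F statistic, assuming the η² given is the partial eta squared implied by F and its degrees of freedom, a standard relationship:

```python
# Cross-check: effect size implied by an F statistic and its degrees of
# freedom, assuming the reported η² is the partial eta squared:
#   η²_p = (df_effect * F) / (df_effect * F + df_error)
def partial_eta_squared(f_value, df_effect, df_error):
    return (df_effect * f_value) / (df_effect * f_value + df_error)

print(round(partial_eta_squared(22.44, 6, 232), 2))  # -> 0.37, as reported
```

The recovered value of 0.37 matches the figure in the abstract, so the reported statistics are internally consistent.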

Conclusions 

Responses to the dilemmas by physicians and psychologists can be categorised into two groups: (1) similar behaviours on the part of both professions when confronting dilemmas concerning confidentiality, inappropriate practice due to personal problems, improper professional conduct and academic issues and (2) different behaviours when confronting either payment issues or dual relationships.

The research is here.

Monday, September 4, 2017

Teaching A.I. Systems to Behave Themselves

Cade Metz
The New York Times
Originally published August 13, 2017

Here is an excerpt:

Many specialists in the A.I. field believe a technique called reinforcement learning — a way for machines to learn specific tasks through extreme trial and error — could be a primary path to artificial intelligence. Researchers specify a particular reward the machine should strive for, and as it navigates a task at random, the machine keeps close track of what brings the reward and what doesn’t. When OpenAI trained its bot to play Coast Runners, the reward was more points.
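The trial-and-error loop described here can be made concrete with a few lines of tabular Q-learning. The corridor environment, reward scheme, and parameters below are illustrative stand-ins, not OpenAI's actual setup:

```python
import random

# A minimal tabular Q-learning loop: try actions, observe rewards, and keep
# "close track of what brings the reward" in a table of state-action values.
# The 5-cell corridor, reward, and constants are illustrative stand-ins.
N_STATES = 5                 # cells 0..4; reaching cell 4 ends an episode
ACTIONS = [0, 1]             # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: reward 1.0 only for reaching the rightmost cell."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):         # episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        # Sometimes explore at random; otherwise exploit what has been learned
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, every non-terminal cell should prefer action 1 (move right)
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```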

This video game training has real-world implications.

If a machine can learn to navigate a racing game like Grand Theft Auto, researchers believe, it can learn to drive a real car. If it can learn to use a web browser and other common software apps, it can learn to understand natural language and maybe even carry on a conversation. At places like Google and the University of California, Berkeley, robots have already used the technique to learn simple tasks like picking things up or opening a door.

All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure systems don’t stray from the task at hand.

Together with others at the London-based DeepMind, a lab owned by Google, the two OpenAI researchers recently published some of their research in this area. The work, which spans two of the world's top A.I. labs, and two that had not really worked together in the past, is considered a notable step forward in A.I. safety research.
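The core idea of that research, learning the reward from human comparisons instead of hand-coding it, can be sketched in a few lines. The clip features and the simulated human judge below are toy stand-ins of ours, not the published system:

```python
import math
import random

# A toy sketch of reward learning from human comparisons: a "human" judge
# picks the better of two behavior clips, and a linear reward model is
# nudged so the preferred clip scores higher (a Bradley-Terry objective).
def features(clip):
    """Summarize a clip of (speed, progress) pairs by its two means."""
    return [sum(vals) / len(vals) for vals in zip(*clip)]

weights = [0.0, 0.0]   # learned reward model: return(clip) = w . features(clip)

def predicted_return(clip):
    return sum(w * f for w, f in zip(weights, features(clip)))

def human_prefers_first(a, b):
    """Simulated judge: secretly cares only about progress, not speed."""
    return features(a)[1] > features(b)[1]

LR = 0.05
for _ in range(3000):
    a = [(random.random(), random.random()) for _ in range(10)]
    b = [(random.random(), random.random()) for _ in range(10)]
    label = 1.0 if human_prefers_first(a, b) else 0.0
    # Model's probability that clip a is preferred, then a logistic-loss step
    p = 1.0 / (1.0 + math.exp(predicted_return(b) - predicted_return(a)))
    fa, fb = features(a), features(b)
    weights = [w + LR * (label - p) * (xa - xb)
               for w, xa, xb in zip(weights, fa, fb)]

print(weights)  # the progress weight should clearly dominate the speed weight
```

Whichever clip the judge prefers is pushed toward a higher predicted return; this pairwise-comparison objective underlies much preference-based reward learning.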

The article is here.