Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, March 15, 2018

Apple’s Move to Share Health Care Records Is a Game-Changer

Aneesh Chopra and Shafiq Rab
wired.com
Originally posted February 19, 2018

Here is an excerpt:

Naysayers point out that Apple is currently displaying only a sliver of a consumer’s entire electronic health record. That is true, but it’s largely on account of the limited information available via the open API standard. As with all standards efforts, the FHIR API will add more content, like scheduling slots and clinical notes, over time. Some of that work will be motivated by a proposed voluntary federal framework to expand the types of data that must be shared by certified systems, as noted in this draft approach out for public comment.

Imagine if Apple further opens up Apple Health so it no longer serves as the destination, but as a conduit for a patient’s longitudinal health record to a growing marketplace of applications that can help guide consumers through decisions to better manage their health.
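The “marketplace of applications” the excerpt imagines would consume patient data as FHIR resources. As a rough sketch of what that looks like in practice, the snippet below flattens a hypothetical FHIR “searchset” Bundle of Observation resources into display strings. All resource contents and values here are illustrative assumptions, not drawn from the article or any real record.

```python
import json

# Hypothetical, minimal FHIR "searchset" Bundle, as a consumer app might
# receive from a health system's patient-access API. Illustrative only.
BUNDLE_JSON = """
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Heart rate"},
                  "valueQuantity": {"value": 72, "unit": "beats/minute"}}},
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Body weight"},
                  "valueQuantity": {"value": 81.5, "unit": "kg"}}}
  ]
}
"""

def summarize_observations(bundle):
    """Flatten a FHIR Bundle of Observation resources into readable strings."""
    lines = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Observation":
            continue  # skip non-Observation resources in a mixed bundle
        quantity = resource.get("valueQuantity", {})
        lines.append("{}: {} {}".format(
            resource.get("code", {}).get("text", "unknown"),
            quantity.get("value"),
            quantity.get("unit"),
        ))
    return lines

bundle = json.loads(BUNDLE_JSON)
for line in summarize_observations(bundle):
    print(line)
```

A real app would of course fetch such a Bundle over authenticated HTTPS (typically via a SMART on FHIR authorization flow) rather than from a hardcoded string; the parsing step, though, is as simple as above.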

Thankfully, the consumer data-sharing movement—placing the longitudinal health record in the hands of the patient and the applications they trust—is taking hold, albeit quietly. In just the past few weeks, a number of health systems that were initially slow to turn on the required APIs suddenly found the motivation to meet Apple's requirement.

The article is here.

Computing and Moral Responsibility

Noorman, Merel
The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.)

Traditionally, philosophical discussions of moral responsibility have focused on the human components in moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today’s increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility (Jonas 1984; Waelbers 2009). As we interact with and through these artifacts, they affect the decisions that we make and how we make them (Latour 1992). They persuade, facilitate and enable particular human cognitive processes, actions or attitudes, while constraining, discouraging and inhibiting others. For instance, internet search engines prioritize and present information in a particular order, thereby influencing what internet users get to see. As Verbeek points out, such technological artifacts are “active mediators” that “actively co-shape people’s being in the world: their perception and actions, experience and existence” (2006, p. 364). As active mediators, they change the character of human action and, as a result, challenge conventional notions of moral responsibility (Jonas 1984; Johnson 2001).

Computing presents a particular case for understanding the role of technology in moral responsibility. As these technologies become a more integral part of daily activities, automate more decision-making processes and continue to transform the way people communicate and relate to each other, they further complicate the already problematic task of attributing moral responsibility. The growing pervasiveness of computer technologies in everyday life, the growing complexity of these technologies and the new possibilities that they provide raise new kinds of questions: Who is responsible for the information published on the Internet? Who is responsible when a self-driving vehicle causes an accident? Who is accountable when electronic records are lost or when they contain errors? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomously, can or should humans still be held responsible for the behavior of these technologies?

The entry is here.

Wednesday, March 14, 2018

Oxfam scandal is not about morality, but abuse of power

Kerry Boyd Anderson
arabnews.com
Originally posted February 18, 2018

Here is an excerpt:

Two of these problems directly relate to the #metoo movement against sexual harassment and abuse. First, the Oxfam scandal is not about personal sexual immorality. It is about abuse of power and sexual exploitation. When these men entered a war zone or an area that had suffered a massive natural disaster, they were not dealing with women there on equal terms; they were in a position of power and relative wealth, and offered women in desperate circumstances money in exchange for sex. These women were part of the population the aid workers were supposed to be helping, so using them in this way constitutes a clear breach of trust. This is one of the #metoo movement’s key points — this type of behavior is not about personal morality, it is about abuse of power.

Another problem that the scandal highlights is the way that many organizations protect the men who are behaving badly. In the Oxfam case, the focus has been on one man in a leadership position: Roland van Hauwermeiren, who created an enabling environment and participated in the hiring of prostitutes. Van Hauwermeiren previously led a project team for the charity Merlin in Liberia, where a colleague reported that men on the team were hiring local women as prostitutes. After an internal investigation, he resigned. He later led Oxfam’s team in Chad, where similar accusations arose. Despite this, Oxfam put him in charge of a team in Haiti, where the behavior continued. Following an investigation, van Hauwermeiren resigned, but he then went on to work for Action Against Hunger in Bangladesh. 

Have some evangelicals embraced moral relativism?

Corey Fields
Baptist News Global
Originally posted February 16, 2018

Here is an excerpt:

The moral rot we’re seeing among white evangelicals has been hard to watch, and it did not start in 2016. Back in 2009, an article in the evangelical publication Christianity Today bemoaned a survey finding that 62 percent of white evangelicals support the use of torture. Despite a supposed pro-life stance, white evangelicals are also the most likely religious group to support war and the death penalty. Racism and sexual predation among elected officials are getting a pass if they deliver on policy. Charles Mathewes, a professor of religious studies at the University of Virginia, put it well: “For believers in a religion whose Scriptures teach compassion, we [white evangelicals] are a breathtakingly cruel bunch.”

Here’s a quote from a prominent evangelical author: “As it turns out, character does matter. You can’t run a family, let alone a country, without it. How foolish to believe that a person who lacks honesty and moral integrity is qualified to lead a nation and the world!” That was written by James Dobson of Focus on the Family. But he wasn’t talking about Donald Trump. He wrote that about Bill Clinton in 1998. Is this principle no longer in force, or does it only apply to Democrats?

As Robert P. Jones noted, the ends apparently justify the means. “White evangelicals have now fully embraced a consequentialist ethics that works backward from predetermined political ends, refashioning or even discarding principles as needed to achieve a desired outcome.” That’s moral relativism.

The article is here.

Tuesday, March 13, 2018

Cognitive Ability and Vulnerability to Fake News

David Z. Hambrick and Madeline Marquardt
Scientific American
Originally posted on February 6, 2018

“Fake news” is Donald Trump’s favorite catchphrase. Since the election, it has appeared in some 180 tweets by the President, decrying everything from accusations of sexual assault against him to the Russian collusion investigation to reports that he watches up to eight hours of television a day. Trump may just use “fake news” as a rhetorical device to discredit stories he doesn’t like, but there is evidence that real fake news is a serious problem. As one alarming example, an analysis by the internet media company Buzzfeed revealed that during the final three months of the 2016 U.S. presidential campaign, the 20 most popular false election stories generated around 1.3 million more Facebook engagements—shares, reactions, and comments—than did the 20 most popular legitimate stories. The most popular fake story was “Pope Francis Shocks World, Endorses Donald Trump for President.”

Fake news can distort people’s beliefs even after being debunked. For example, repeated over and over, a story such as the one about the Pope endorsing Trump can create a glow around a political candidate that persists long after the story is exposed as fake. A study recently published in the journal Intelligence suggests that some people may have an especially difficult time rejecting misinformation.

The article is here.

Doctors In Maine Say Halt In OxyContin Marketing Comes '20 Years Late'

Patty Wight
npr.org
Originally posted February 13, 2018

The maker of OxyContin, one of the most prescribed and aggressively marketed opioid painkillers, will no longer tout the drug or any other opioids to doctors.

The announcement, made Saturday, came as drugmaker Purdue Pharma faces lawsuits for deceptive marketing brought by cities and counties across the U.S., including several in Maine. The company said it's cutting its U.S. sales force by more than half.

Just how important are these steps against the backdrop of a raging opioid epidemic that took the lives of more than 300 Maine residents in 2016, and accounted for more than 42,000 deaths nationwide?

"They're 20 years late to the game," says Dr. Noah Nesin, a family physician and vice president of medical affairs at Penobscot Community Health Care.

Nesin says even after Purdue Pharma paid $600 million in fines about a decade ago for misleading doctors and regulators about the risks opioids posed for addiction and abuse, it continued marketing them.

The article is here.

Monday, March 12, 2018

Train PhD students to be thinkers not just specialists

Gundula Bosch
nature.com
Originally posted February 14, 2018

Under pressure to turn out productive lab members quickly, many PhD programmes in the biomedical sciences have shortened their courses, squeezing out opportunities for putting research into its wider context. Consequently, most PhD curricula are unlikely to nurture the big thinkers and creative problem-solvers that society needs.

That means students are taught every detail of a microbe’s life cycle but little about the life scientific. They need to be taught to recognize how errors can occur. Trainees should evaluate case studies derived from flawed real research, or use interdisciplinary detective games to find logical fallacies in the literature. Above all, students must be shown the scientific process as it is — with its limitations and potential pitfalls as well as its fun side, such as serendipitous discoveries and hilarious blunders.

This is exactly the gap that I am trying to fill at Johns Hopkins University in Baltimore, Maryland, where a new graduate science programme is entering its second year. Microbiologist Arturo Casadevall and I began pushing for reform in early 2015, citing the need to put the philosophy back into the doctorate of philosophy: that is, the ‘Ph’ back into the PhD.

The article is here.

The tech bias: why Silicon Valley needs social theory

Jan Bier
aeon.com
Originally posted February 14, 2018

Here is an excerpt:

That Google memo is an extreme example of an imbalance in how different ways of knowing are valued. Silicon Valley tech companies draw on innovative technical theory but have yet to really incorporate advances in social theory. The inattention to such knowledge becomes all too apparent when algorithms fail in their real-life applications – from automated soap-dispensers that fail to turn on when a user has dark brown skin, to the new iPhone X’s inability to distinguish among different Asian women.

Social theorists in fields such as sociology, geography, and science and technology studies have shown how race, gender and class biases inform technical design. So there’s irony in the fact that employees hold sexist and racist attitudes, yet ‘we are supposed to believe that these same employees are developing “neutral” or “objective” decision-making tools’, as the communications scholar Safiya Umoja Noble at the University of Southern California argues in her book Algorithms of Oppression (2018).

In many cases, what’s eroding the value of social knowledge is unintentional bias – on display when prominent advocates for equality in science and tech undervalue research in the social sciences. The physicist Neil deGrasse Tyson, for example, has downplayed the link between sexism and under-representation in science. Apparently, he’s happy to ignore extensive research pointing out that the natural sciences’ male-dominated institutional cultures are a major cause of the attrition of female scientists at all stages of their careers.

The article is here.

Sunday, March 11, 2018

Cognitive Bias in Forensic Mental Health Assessment: Evaluator Beliefs About Its Nature and Scope

Zapf, P. A., Kukucka, J., Kassin, S. M., & Dror, I. E.
Psychology, Public Policy, & Law

Abstract

Decision-making of mental health professionals is influenced by irrelevant information (e.g., Murrie, Boccaccini, Guarnera, & Rufino, 2013). However, the extent to which mental health evaluators acknowledge the existence of bias, recognize it, and understand the need to guard against it, is unknown. To formally assess beliefs about the scope and nature of cognitive bias, we surveyed 1,099 mental health professionals who conduct forensic evaluations for the courts or other tribunals (and compared these results with a companion survey of 403 forensic examiners, reported in Kukucka, Kassin, Zapf, & Dror, 2017). Most evaluators expressed concern over cognitive bias but held an incorrect view that mere willpower can reduce bias. Evidence was also found for a bias blind spot (Pronin, Lin, & Ross, 2002), with more evaluators acknowledging bias in their peers’ judgments than in their own. Evaluators who had received training about bias were more likely to acknowledge cognitive bias as a cause for concern, whereas evaluators with more experience were less likely to acknowledge cognitive bias as a cause for concern in forensic evaluation as well as in their own judgments. Training efforts should highlight the bias blind spot and the fallibility of introspection or conscious effort as a means of reducing bias. In addition, policies and procedural guidance should be developed in regard to best cognitive practices in forensic evaluations.

Closing statements:

What is clear is that forensic evaluators appear to be aware of the issue of bias in general, but diminishing rates of perceived susceptibility to bias in one’s own judgments and the perception of higher rates of bias in the judgments of others as compared with oneself, underscore that we may not be the most objective evaluators of our own decisions. As with the forensic sciences, implementing procedures and strategies to minimize the impact of bias in forensic evaluation can serve to proactively mitigate against the intrusion of irrelevant information in forensic decision making. This is especially important given the courts’ heavy reliance on evaluators’ opinions (see Zapf, Hubbard, Cooper, Wheeles, & Ronan, 2004), the fact that judges and juries have little choice but to trust the expert’s self-assessment of bias (see Kassin et al., 2013), and the potential for biased opinions and conclusions to cross-contaminate other evidence or testimony (see Dror, Morgan, Rando, & Nakhaeizadeh, 2017). More research is necessary to determine the specific strategies to be used and the various recommended means of implementing those strategies across forensic evaluations, but the time appears to be ripe for further discussion and development of policies and guidelines to acknowledge and attempt to reduce the potential impact of bias in forensic evaluation.

The article is here.