Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, January 9, 2018

Drug Companies’ Liability for the Opioid Epidemic

Rebecca L. Haffajee and Michelle M. Mello
N Engl J Med 2017; 377:2301-2305
December 14, 2017
DOI: 10.1056/NEJMp1710756

Here is an excerpt:

Opioid products, they alleged, were defectively designed because companies failed to include safety mechanisms, such as an antagonist agent or tamper-resistant formulation. Manufacturers also purportedly failed to adequately warn about addiction risks on drug packaging and in promotional activities. Some claims alleged that opioid manufacturers deliberately withheld information about their products’ dangers, misrepresenting them as safer than alternatives.

These suits faced formidable barriers that persist today. As with other prescription drugs, persuading a jury that an opioid is defectively designed if the Food and Drug Administration approved it is challenging. Furthermore, in most states, a drug manufacturer’s duty to warn about risks is limited to issuing an adequate warning to prescribers, who are responsible for communicating with patients. Finally, juries may resist laying legal responsibility at the manufacturer’s feet when the prescriber’s decisions and the patient’s behavior contributed to the harm. Some individuals do not take opioids as prescribed or purchase them illegally. Companies may argue that such conduct precludes holding manufacturers liable, or at least should reduce damages awards.

One procedural strategy adopted in opioid litigation that can help overcome defenses based on users’ conduct is the class action suit, brought by a large group of similarly situated individuals. In such suits, the causal relationship between the companies’ business practices and the harm is assessed at the group level, with the focus on statistical associations between product use and injury. The use of class actions was instrumental in overcoming tobacco companies’ defenses based on smokers’ conduct. But early attempts to bring class actions against opioid manufacturers encountered procedural barriers. Because of different factual circumstances surrounding individuals’ opioid use and clinical conditions, judges often deemed proposed class members to lack sufficiently common claims.

The article is here.

Dangers of neglecting non-financial conflicts of interest in health and medicine

Wiersma M, Kerridge I, Lipworth W.
Journal of Medical Ethics 
Published Online First: 24 November 2017.
doi: 10.1136/medethics-2017-104530

Abstract

Non-financial interests, and the conflicts of interest that may result from them, are frequently overlooked in biomedicine. This is partly due to the complex and varied nature of these interests, and the limited evidence available regarding their prevalence and impact on biomedical research and clinical practice. We suggest that there are no meaningful conceptual distinctions, and few practical differences, between financial and non-financial conflicts of interest, and accordingly, that both require careful consideration. Further, a better understanding of the complexities of non-financial conflicts of interest, and their entanglement with financial conflicts of interest, may assist in the development of a more sophisticated approach to all forms of conflicts of interest.

The article is here.

Monday, January 8, 2018

Advocacy group raises concerns about psychological evaluations on hundreds of defendants

Keith L. Alexander
The Washington Post
Originally published December 14, 2017

A District employee who has conducted mental evaluations on hundreds of criminal defendants as a forensic psychologist has been removed from that role after concerns surfaced about her educational qualifications, according to city officials.

Officials with the District’s Department of Health said Reston N. Bell was not qualified to conduct the assessments without the help or review of a supervisor. The city said it had mistakenly granted Bell, who was hired in 2016, a license to practice psychology, but this month the license was downgraded to “psychology associate.”

Although Bell has a master’s degree in psychology and a doctorate in education, she does not have a PhD in psychology, which led to the downgrade.

The article is here.

Nudging, informed consent and bullshit

William Simkulet
Journal of Medical Ethics
Published Online First: 18 November 2017.
doi: 10.1136/medethics-2017-104480

Abstract

Some philosophers have argued that during the process of obtaining informed consent, physicians should try to nudge their patients towards consenting to the option the physician believes best, where a nudge is any influence that is expected to predictably alter a person’s behaviour without (substantively) restricting her options. Some proponents of nudging even argue that it is a necessary and unavoidable part of securing informed consent. Here I argue that nudging is incompatible with obtaining informed consent. I assume informed consent requires that a physician tells her patient the truth about her options and argue that nudging is incompatible with truth-telling. Instead, nudging satisfies Harry Frankfurt’s account of bullshit.

The article is here.

Sunday, January 7, 2018

Are human rights anything more than legal conventions?

John Tasioulas
aeon.co
Originally published April 11, 2017

We live in an age of human rights. The language of human rights has become ubiquitous, a lingua franca used for expressing the most basic demands of justice. Some are old demands, such as the prohibition of torture and slavery. Others are newer, such as claims to internet access or same-sex marriage. But what are human rights, and where do they come from? This question is made urgent by a disquieting thought. Perhaps people with clashing values and convictions can so easily appeal to ‘human rights’ only because, ultimately, they don’t agree on what they are talking about? Maybe the apparently widespread consensus on the significance of human rights depends on the emptiness of that very notion? If this is true, then talk of human rights is rhetorical window-dressing, masking deeper ethical and political divisions.

Philosophers have debated the nature of human rights since at least the 12th century, often under the name of ‘natural rights’. These natural rights were supposed to be possessed by everyone and discoverable with the aid of our ordinary powers of reason (our ‘natural reason’), as opposed to rights established by law or disclosed through divine revelation. Wherever there are philosophers, however, there is disagreement. Belief in human rights left open how we go about making the case for them – are they, for example, protections of human needs generally or only of freedom of choice? There were also disagreements about the correct list of human rights – should it include socio-economic rights, like the rights to health or work, in addition to civil and political rights, such as the rights to a fair trial and political participation?

The article is here.

Saturday, January 6, 2018

The Myth of Responsibility

Raoul Martinez
RSA.org
Originally posted December 7, 2017

Are we wholly responsible for our actions? We don’t choose our brains, our genetic inheritance, our circumstances, our milieu – so how much control do we really have over our lives? Philosopher Raoul Martinez argues that no one is truly blameworthy.  Our most visionary scientists, psychologists and philosophers have agreed that we have far less free will than we think, and yet most of society’s systems are structured around the opposite principle – that we are all on a level playing field, and we all get what we deserve.

This four-minute video is worth watching.

Friday, January 5, 2018

Changing genetic privacy rules may adversely affect research participation

Hayley Peoples
Baylor College of Medicine Blogs
Originally posted May 26, 2017

Do you know your genetic information? Maybe you’ve taken a “23andMe” test because you were curious about your ancestry or health. Maybe it was part of a medical examination. Maybe, like me, you underwent testing and received results as part of a class in college.

Do you ever worry about what could happen if your information landed in the wrong hands?

If you do, you aren’t alone. We’ve previously written about legislation affecting genetic privacy and public resistance to global data sharing, and the dialog about growing genetic privacy concerns only continues.

Wired.com recently ran an interesting piece on the House Health Plan and its approach to pre-existing conditions. While much about how a final, Senate-approved Affordable Care Act repeal and replace plan will address pre-existing conditions is still speculation, it brings up an interesting question – with respect to genetic information, will changing rules about pre-existing conditions have a chilling effect on research participation?

The information is here.

Implementation of Moral Uncertainty in Intelligent Machines

Kyle Bogosian
Minds and Machines
December 2017, Volume 27, Issue 4, pp 591–608

Abstract

The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.

Introduction

Advances in artificial intelligence have led to research into methods by which sufficiently intelligent systems, generally referred to as artificial moral agents (AMAs), can be guaranteed to follow ethically defensible behavior. Successful implementation of moral reasoning may be critical for managing the proliferation of autonomous vehicles, workers, weapons, and other systems as they increase in intelligence and complexity.

Approaches towards moral decision-making generally fall into two camps, “top-down” and “bottom-up” approaches (Allen et al. 2005). Top-down morality is the explicit implementation of decision rules into artificial agents. Schemes for top-down decision-making that have been proposed for intelligent machines include Kantian deontology (Arkoudas et al. 2005) and preference utilitarianism (Oesterheld 2016). Bottom-up morality avoids reference to specific moral theories by developing systems that can implicitly learn to distinguish between moral and immoral behaviors, such as cognitive architectures designed to mimic human intuitions (Bello and Bringsjord 2013). There are also hybrid approaches that merge insights from the two frameworks, such as one given by Wiltshire (2015).
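The basic idea of acting under moral uncertainty can be sketched briefly. The short Python below is a minimal illustration, not Bogosian's actual framework: the two theories, their numerical scores, and the credences are invented placeholders, meant only to show one common formalization in which an agent weights rival moral theories by its credence in each and picks the action with the highest credence-weighted evaluation (expected choiceworthiness).

# Minimal sketch (not Bogosian's framework): choosing an action under moral
# uncertainty by maximizing expected choiceworthiness across rival theories.
# All names, scores, and credences below are hypothetical placeholders.
from typing import Callable, Dict, List

MoralTheory = Callable[[str], float]  # a theory scores an action in [0, 1]

def utilitarian_score(action: str) -> float:
    # Placeholder welfare estimates; a real agent would compute these.
    return {"brake": 0.9, "swerve": 0.4}.get(action, 0.0)

def deontological_score(action: str) -> float:
    # Placeholder duty-conformity estimates.
    return {"brake": 0.7, "swerve": 0.2}.get(action, 0.0)

def expected_choiceworthiness(action: str,
                              theories: Dict[str, MoralTheory],
                              credences: Dict[str, float]) -> float:
    # Credence-weighted average of each theory's evaluation of the action.
    return sum(credences[name] * theory(action)
               for name, theory in theories.items())

def choose_action(actions: List[str],
                  theories: Dict[str, MoralTheory],
                  credences: Dict[str, float]) -> str:
    # Pick the action with the highest expected choiceworthiness.
    return max(actions,
               key=lambda a: expected_choiceworthiness(a, theories, credences))

theories = {"utilitarian": utilitarian_score,
            "deontological": deontological_score}
credences = {"utilitarian": 0.6, "deontological": 0.4}  # must sum to 1
print(choose_action(["brake", "swerve"], theories, credences))  # -> "brake"

Under these made-up numbers the agent brakes, since braking scores well under both theories; a real system would derive the scores from learned models rather than hand-set tables, and the paper addresses harder issues such as comparing value scales across theories.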

The article is here.

Thursday, January 4, 2018

Artificial Intelligence Seeks An Ethical Conscience

Tom Simonite
wired.com
Originally published December 7, 2017

Here is an excerpt:

Others in Long Beach hope to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews towards the white, male, and western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work.

Hanna Wallach, co-chair of NIPS, cofounder of Women in Machine Learning, and a researcher at Microsoft, says those diversity efforts both help individuals and make AI technology better. “If you have a diversity of perspectives and background you might be more likely to check for bias against different groups,” she says—meaning code that calls black people gorillas would be less likely to reach the public. Wallach also points to behavioral research showing that diverse teams consider a broader range of ideas when solving problems.

Ultimately, AI researchers alone can’t and shouldn’t decide how society puts their ideas to use. “A lot of decisions about the future of this field cannot be made in the disciplines in which it began,” says Terah Lyons, executive director of Partnership on AI, a nonprofit launched last year by tech companies to mull the societal impacts of AI. (The organization held a board meeting on the sidelines of NIPS this week.) She says companies, civic-society groups, citizens, and governments all need to engage with the issue.

The article is here.