Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Liability.

Tuesday, February 20, 2024

Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

Mello, M. M., & Guha, N. (2024).
The New England Journal of Medicine, 390(3), 271–278. https://doi.org/10.1056/NEJMhle2308901

Optimism about the explosive potential of artificial intelligence (AI) to transform medicine is tempered by worry about what it may mean for the clinicians being "augmented." One question is especially problematic because it may chill adoption: when AI contributes to patient injury, who will be held responsible?

Some attorneys counsel health care organizations with dire warnings about liability and dauntingly long lists of legal concerns. Unfortunately, liability concerns can lead to overly conservative decisions, including reluctance to try new things. Yet older forms of clinical decision support provided important opportunities to prevent errors and malpractice claims. Given the slow progress in reducing diagnostic errors, not adopting new tools also has consequences and at some point may itself become malpractice. Liability uncertainty also affects AI developers' cost of capital and incentives to develop particular products, thereby influencing which AI innovations become available and at what price.

To help health care organizations and physicians weigh AI-related liability risk against the benefits of adoption, we examine the issues that courts have grappled with in cases involving software error and what makes them so challenging. Because the signals emerging from case law remain somewhat faint, we conducted further analysis of the aspects of AI tools that elevate or mitigate legal risk. Drawing on both analyses, we provide risk-management recommendations, focusing on the uses of AI in direct patient care with a "human in the loop," since the use of fully autonomous systems raises additional issues.

(cut)

The Awkward Adolescence of Software-Related Liability

Legal precedent regarding AI injuries is rare because AI models are new and few personal-injury claims result in written opinions. As this area of law matures, it will confront several challenges.

Challenges in Applying Tort Law Principles to Health Care Artificial Intelligence (AI)

Ordinarily, when a physician uses or recommends a product and an injury to the patient results, well-established rules help courts allocate liability among the physician, product maker, and patient. The liabilities of the physician and product maker are derived from different standards of care, but for both kinds of defendants, plaintiffs must show that the defendant owed them a duty, the defendant breached the applicable standard of care, and the breach caused their injury; plaintiffs must also rebut any suggestion that the injury was so unusual as to be outside the scope of liability.

The article is paywalled, which is not how this should work.

Tuesday, January 11, 2022

Are some cultures more mind-minded in their moral judgements than others?

Barrett HC, Saxe RR. (2021)
Phil. Trans. R. Soc. B 376: 20200288.

Abstract

Cross-cultural research on moral reasoning has brought to the fore the question of whether moral judgements always turn on inferences about the mental states of others. Formal legal systems for assigning blame and punishment typically make fine-grained distinctions about mental states, as illustrated by the concept of mens rea, and experimental studies in the USA and elsewhere suggest everyday moral judgements also make use of such distinctions. On the other hand, anthropologists have suggested that some societies have a morality that is disregarding of mental states, and have marshalled ethnographic and experimental evidence in support of this claim. Here, we argue against the claim that some societies are simply less ‘mind-minded’ than others about morality. In place of this cultural main effects hypothesis about the role of mindreading in morality, we propose a contextual variability view in which the role of mental states in moral judgement depends on the context and the reasons for judgement. On this view, which mental states are or are not relevant for a judgement is context-specific, and what appear to be cultural main effects are better explained by culture-by-context interactions.

From the Summing Up section

Our critique of CME (cultural main effects) theories, we think, is likely to apply to many domains, not just moral judgement. Dimensions of cultural difference such as the ‘collectivist/individualist’ dimension may capture some small main effects of cultural difference, but we suspect that collectivism/individualism is a parameter that can be flipped contextually within societies to a much greater degree than it varies as a main effect across societies. We may be collectivists within families, for example, but individualists at work. Similarly, we suggest that everywhere there are contexts in which one’s mental states may be deemed morally irrelevant and others where they are not. Such judgements vary not just across contexts, but across individuals and time.

What we argue against, then, is thinking of mindreading as a resource that is scarce in some places and plentiful in others. Instead, we should think about it as a resource that is available everywhere, and whose use in moral judgement depends on a multiplicity of factors, including social norms but also, importantly, the reasons for which people are making judgements.

Wednesday, February 26, 2020

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.
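
The engineering pattern behind these systems is straightforward to sketch, even though the trained models are the hard part. The following is a minimal illustration, not drawn from the cited studies: the classifier, labels, window size, and threshold are all assumptions standing in for a real computer-vision pipeline.

```python
from collections import deque

def classify_activity(frame):
    """Hypothetical frame-level activity classifier; a deployed system
    would call a trained computer-vision model here."""
    return "other"  # stub label so the sketch runs

def monitor(frames, window=30, threshold=0.8):
    """Flag a hand-hygiene event when at least `threshold` of the last
    `window` frames are classified as 'hand_hygiene'."""
    recent = deque(maxlen=window)
    events = []
    for i, frame in enumerate(frames):
        recent.append(classify_activity(frame) == "hand_hygiene")
        if len(recent) == window and sum(recent) / window >= threshold:
            events.append(i)  # frame index where the event was confirmed
            recent.clear()    # reset so one event is not counted repeatedly
    return events

print(monitor([None] * 100))  # [] with the stub classifier
```

Even this toy loop embeds design decisions (window length, confidence threshold, what gets logged and for how long) that bear directly on the privacy, consent, and liability concerns the Viewpoint raises.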

The info is here.

Thursday, February 28, 2019

Should Watson Be Consulted for a Second Opinion?

David Luxton
AMA J Ethics. 2019;21(2):E131-137.
doi: 10.1001/amajethics.2019.131.

Abstract

This article discusses ethical responsibility and legal liability issues regarding use of IBM Watson™ for clinical decision making. In a case, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develop treatment recommendations is also offered.

Here is an excerpt:

Understanding Watson’s Limitations

There are precautions that should be taken into consideration before consulting Watson. First, it's important for physicians such as Dr O to understand the technical challenges of accessing the quality data that the system needs to analyze in order to derive recommendations. Idiosyncrasies in patient health care record systems are one culprit, causing missing or incomplete data. If some of the data available to Watson is inaccurate, it could result in diagnosis and treatment recommendations that are flawed or at least inconsistent. An advantage of using a system such as Watson, however, is that it might be able to identify inconsistencies (such as those caused by human input error) that a human might otherwise overlook. Indeed, a primary benefit of systems such as Watson is that they can discover patterns that not even human experts might be aware of, and they can do so in an automated way. This automation has the potential to reduce uncertainty and improve patient outcomes.
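
A hedged sketch of the kind of pre-consultation data audit this excerpt implies, with invented field names and plausibility ranges rather than Watson's actual input schema:

```python
# Illustrative only: field names and ranges are assumptions,
# not the schema of any real clinical decision-support system.
REQUIRED_FIELDS = ("age", "wbc_count", "medications", "diagnosis_history")
PLAUSIBLE_RANGES = {"age": (0, 120), "wbc_count": (0.1, 500.0)}  # WBC in 10^9/L

def audit_record(record):
    """Return a list of data-quality issues found in one patient record."""
    issues = [f"missing or empty field: {f}" for f in REQUIRED_FIELDS
              if record.get(f) in (None, "", [])]
    for fname, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(fname)
        if isinstance(value, (int, float)) and not lo <= value <= hi:
            issues.append(f"implausible {fname}: {value}")
    return issues

record = {"age": 54, "wbc_count": 850.0, "medications": [],
          "diagnosis_history": ["CML"]}
print(audit_record(record))
# ['missing or empty field: medications', 'implausible wbc_count: 850.0']
```

Running such checks before, not after, consulting the tool addresses the excerpt's point: flawed inputs produce flawed recommendations, but inconsistencies are also exactly what automated screening can surface.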

Sunday, February 18, 2018

Responsibility and Consciousness

Matt King and Peter Carruthers

1. Introduction

Intuitively, consciousness matters for responsibility. A lack of awareness generally provides the basis for an excuse, or at least for blameworthiness to be mitigated. If you are aware that what you are doing will unjustifiably harm someone, it seems you are more blameworthy for doing so than if you harm them without awareness. There is thus a strong presumption that consciousness is important for responsibility. The position we stake out below, however, is that consciousness, while relevant to moral responsibility, isn’t necessary.

The background for our discussion is an emerging consensus in the cognitive sciences that a significant portion, perhaps even a substantial majority, of our mental lives takes place unconsciously. For example, routine and habitual actions are generally guided by the so-called “dorsal stream” of the visual system, whose outputs are inaccessible to consciousness (Milner & Goodale 1995; Goodale 2014). And there has been extensive investigation of the processes that accompany conscious as opposed to unconscious forms of experience (Dehaene 2014). While there is room for disagreement at the margins, there is little doubt that our actions are much more influenced by unconscious factors than might intuitively seem to be the case. At a minimum, therefore, theories of responsibility that ignore the role of unconscious factors supported by the empirical data proceed at their own peril (King & Carruthers 2012). The crucial area of inquiry for those interested in the relationship between consciousness and responsibility concerns the relative strength of that relationship and the extent to which it should be impacted by findings in the empirical sciences.

The paper is here.

Wednesday, September 27, 2017

New York’s Highest Court Rules Against Physician-Assisted Suicide

Jacob Gershman
The Wall Street Journal
Originally posted September 7, 2017

New York’s highest court on Thursday ruled that physician-assisted suicide isn’t a fundamental right, rejecting a legal effort by terminally ill patients to decriminalize doctor-assisted suicide through the courts.

The state Court of Appeals, though, said it wouldn’t stand in the way if New York’s legislature were to decide that assisted suicide could be “effectively regulated” and pass legislation allowing terminally ill and suffering patients to kill themselves.

Physician-assisted suicide is illegal in most of the country. But advocates who support loosening the laws have been making gains. Doctor-assisted dying has been legalized in several states, most recently in California and Colorado, the former by legislation and the latter by a ballot measure approved by voters in November. Oregon, Vermont and Washington have enacted similar “end-of-life” measures. Washington, D.C., also passed an “assisted-dying” law last year.

Montana’s highest court in 2009 ruled that physicians who provide “aid in dying” are shielded from liability.

No state court has recognized “aid in dying” as a fundamental right.

The article is here.

Tuesday, May 23, 2017

Psychologist contractors say they were following agency orders

Pamela MacLean
Bloomberg News
Originally posted May 5, 2017

A pair of U.S. psychologists accused of overseeing the torture of terrorism detainees more than a decade ago face reluctance from a federal judge to let them question the CIA’s deputy director to show they were only following orders.

The judge indicated at a hearing Friday that the psychologists should be able to defend themselves in the 2015 lawsuit without compromising government secrecy around the exact role Gina Haspel played in the agency’s overseas interrogation program years before she was tapped to be second in command by the Trump administration.

The American Civil Liberties Union, which filed the case on behalf of three ex-prisoners, one of whom died in custody, is urging the judge not to let the psychologists’ lawyers question Haspel and a retired Central Intelligence Agency official. While the defendants want to demonstrate their actions were approved by the agency, the ACLU says that won’t shield them from liability.

The article is here.

Friday, March 17, 2017

Professional Liability for Forensic Activities: Liability Without a Treatment Relationship

Donna Vanderpool
Innov Clin Neurosci. 2016 Jul-Aug; 13(7-8): 41–44.

This ongoing column is dedicated to providing information to our readers on managing legal risks associated with medical practice. We invite questions from our readers. The answers are provided by PRMS, Inc. (www.prms.com), a manager of medical professional liability insurance programs with services that include risk management consultation, education and onsite risk management audits, and other resources to healthcare providers to help improve patient outcomes and reduce professional liability risk. The answers published in this column represent those of only one risk management consulting company. Other risk management consulting companies or insurance carriers may provide different advice, and readers should take this into consideration. The information in this column does not constitute legal advice. For legal advice, contact your personal attorney. Note: The information and recommendations in this article are applicable to physicians and other healthcare professionals so “clinician” is used to indicate all treatment team members.

Question:

In my mental health practice, I am doing more and more forensic activities, such as independent medical examinations (IMEs) and expert testimony. Since I am not treating the evaluees, there should be no professional liability risk, right?

The answer and column are here.

Saturday, February 25, 2017

Sorry is Never Enough: The Effect of State Apology Laws on Medical Malpractice Liability Risk

Benjamin J. McMichael, R. Lawrence Van Horn, & W. Kip Viscusi

Abstract:
 
State apology laws offer a separate avenue from traditional damages-centric tort reforms to promote communication between physicians and patients and to address potential medical malpractice liability. These laws facilitate apologies from physicians by excluding statements of apology from malpractice trials. Using a unique dataset that includes all malpractice claims for 90% of physicians practicing in a single specialty across the country, this study examines whether apology laws limit malpractice risk. For physicians who do not regularly perform surgery, apology laws increase the probability of facing a lawsuit and increase the average payment made to resolve a claim. For surgeons, apology laws do not have a substantial effect on the probability of facing a claim or the average payment made to resolve a claim. Overall, the evidence suggests that apology laws do not effectively limit medical malpractice liability risk.

The article is here.

Wednesday, September 7, 2016

APA Signs Onto Amicus Brief Supporting Confidentiality

Aaron Levin
Psychiatric News
Originally published August 11, 2016

APA has signed on to an amicus curiae brief with the California Psychiatric Association and the California Association of Marriage and Family Therapists in a case before the California Supreme Court with important implications for patient confidentiality and clinicians’ liability.

APA is concerned that a ruling in favor of the plaintiff would change the existing California standard (the so-called Tarasoff rule) requiring action when “a patient has communicated to the psychotherapist a serious threat of physical violence against a reasonably identifiable victim or victims.”

The case, Rosen v. Regents of the University of California, arose when Damon Thompson, a student treated by UCLA’s counseling service, attacked and stabbed a fellow student, Katherine Rosen.

Under California law, a therapist has a “duty to protect” a potential victim if the patient makes a reasonably identifiable threat to harm a specific person.

The entire article is here.

Monday, April 25, 2016

The Strict Liability Standard and Clinical Supervision

Paul D. Polychronis & Steven G. Brown
Professional Psychology: Research and Practice, Vol 47(2), Apr 2016, 139-146.

Abstract

Clinical supervision is essential to both the training of new psychologists and the viability of professional psychology. It is also a high-risk endeavor for clinical supervisors because of regulations in many states that impose a strict liability standard on supervisors for supervisees’ conduct. Applied in the context of tort law, the concept of strict liability makes supervisors responsible for supervisees’ actions without having to establish that a given supervisor was negligent or careless. Consequently, in jurisdictions where the strict liability standard is used, it is virtually inevitable that clinical supervisors will be named in civil suits over a supervisee’s actions regardless of whether a supervisor has been appropriately conscientious. In cases of supervisee misconduct, regulations in 27 of 51 jurisdictions (the 50 states plus the District of Columbia) generally hold clinical supervisors fully responsible for supervisees’ actions in a professional realm regardless of the nature of the supervisees’ misbehavior. Some examples are provided of language from these state regulations. The implications of this current reality are discussed. Altering the regulatory approach to clinical supervision is explored to reduce risk to clinical supervisors that is beyond their reasonable control. Recommendations for conducting clinical supervision and corresponding risk-management practices are suggested to assist clinicians in protecting themselves if practicing in a jurisdiction that uses the strict liability standard in regulations governing clinical supervision.

The article is here.

Tuesday, December 29, 2015

Is Anyone Competent to Regulate Artificial Intelligence?

By John Danaher
Philosophical Disquisitions
Posted November 21, 2015

Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in a diagram in the original post.

The entire blog post is here.

Thursday, September 25, 2014

You Should Have a Say in Your Robot Car’s Code of Ethics

By Jason Millar
Wired
Originally posted September 2, 2014

Here are some excerpts:

Informed consent wasn’t always the standard of practice in healthcare. It used to be common for physicians to make important treatment decisions on behalf of patients, often actively deceiving them as part of a treatment plan.

(cut)

For starters, we could choose to consider a manufacturer’s failure to obtain informed consent from a user, in situations involving deep moral commitments, a kind of product defect. Just as a doctor would be liable for failing to seek a patient’s informed consent before proceeding with a medical treatment, so too could we consider manufacturers liable for failing to reasonably respect users’ explicit moral preferences in the design of autonomous cars and other technologies. This approach would add considerably to the complexity of design. Then again, nobody said engineering robots was supposed to be simple.
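
To make the proposal concrete, here is a hedged sketch of what "informed consent as a design requirement" could look like in software; the setting names and mechanism are invented for illustration and are not from Millar's article:

```python
from dataclasses import dataclass

@dataclass
class EthicsSetting:
    """A morally significant, user-configurable behavior of the vehicle."""
    name: str
    description: str
    consented: bool = False  # set True only after an explicit user choice

class ConsentError(RuntimeError):
    pass

def activate(setting: EthicsSetting) -> None:
    """Refuse to enable a morally significant setting without recorded
    consent, the software analogue of treating a missing informed-consent
    step as a product defect."""
    if not setting.consented:
        raise ConsentError(f"no recorded consent for '{setting.name}'")
    print(f"'{setting.name}' enabled")

s = EthicsSetting("prioritize_occupants",
                  "prefer occupant safety in unavoidable-harm scenarios")
try:
    activate(s)  # fails: no consent has been obtained
except ConsentError as err:
    print(err)

s.consented = True  # recorded after the user explicitly opts in
activate(s)
```

The design choice worth noticing is that consent is a precondition enforced at the point of activation, not a checkbox buried in onboarding, which mirrors the article's analogy to consent sought before a specific medical treatment.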

The entire article is here.

Tuesday, May 13, 2014

Social media can cause problems for lawyers when it comes to ethics, professional responsibility

Professional bodies are trying to come up with guidelines for the legal profession's use of social media

By Ed Silverstein
Inside Counsel
Originally published April 29, 2014

It is becoming increasingly confusing what lawyers, judges and courthouse employees can post on social media sites. For instance, can a judge “friend” someone who is an attorney on Facebook and then have the attorney appear before them in court?

Attorneys who post on sites like Facebook also have to worry about violating attorney-client confidentiality, disciplinary action, losing jobs, or engaging in the unauthorized or inadvertent practice of law, according to an article in the Touro Law Review. In addition, attorneys could “face sanctions for revealing misconduct or disparaging judges on social media sites,” the article adds.

The entire article is here.

Sunday, March 3, 2013

Essential Knowledge about Suicide Prevention

The New York Psychological Association
Published on Jan 31, 2013

"Essential Knowledge about Suicide Prevention: Evidence-Based Practices for Mental Health Professionals," sponsored by the NYS Psychological Association and the NYS OMH Suicide Prevention Initiative, provides concepts and resources for clinicians as a starting point to build competency and preparedness for a suicide event before it becomes a reality. Featuring Dr. Richard Juman, Dr. John Draper, and Dr. Shane Owens, the video addresses issues including clinician anxiety about suicide, suicide and professional liability, and core competencies for suicide prevention in clinical practice, providing perspectives from both experts and clinicians.

NAASP: Clinical Care & Intervention Task Force Report

Friday, October 21, 2011

Third-party cases pose liability risks to doctors

By Alicia Gallegos
amednews.com staff

The Utah Supreme Court is reviewing whether the children of a patient can sue their father's physician for medication mismanagement after the patient shot his wife to death. In a similar case, the Supreme Court of Georgia has ruled that a psychiatrist can be sued for medication negligence after a patient fatally attacked his mother.

The cases raise concerns about doctors' potential liability for criminal actions committed by their patients and what duty, if any, physicians owe to nonpatients. Experts say the cases remind doctors to take note of circumstances that could increase their liability risk to third parties.

In the Georgia case, the father of Victor Bruscato filed a lawsuit on behalf of Victor against psychiatrist Derek O'Brien, MD. He alleged that the doctor's discontinuation of Bruscato's two antipsychotic medications aggravated his son's violent tendencies. After the drugs were stopped, Bruscato, a mentally ill patient with a history of violence, stabbed his mother to death.

Dr. O'Brien had ordered two of Bruscato's medications stopped for six weeks to rule out the possibility that Bruscato was developing neuroleptic malignant syndrome, according to court documents. A trial court dismissed the case in favor of Dr. O'Brien, ruling that public policy does not allow the Bruscatos to benefit from any wrongdoing, namely the killing of Lillian Bruscato. The appeals court reversed the decision.

In its Sept. 12 opinion, the Supreme Court affirmed, allowing the lawsuit to proceed. Though public policy prevents profiting from a wrongdoing in court, an exception exists if a mentally ill patient isn't aware of what he is doing, the court said. Bruscato was never found guilty of a crime; instead, he was ruled incompetent to stand trial and committed to a state mental hospital.

The rest of the story can be found here.