Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Legal Responsibility. Show all posts

Thursday, February 28, 2019

Should Watson Be Consulted for a Second Opinion?

David Luxton
AMA J Ethics. 2019;21(2):E131-137.
doi: 10.1001/amajethics.2019.131.

Abstract

This article discusses ethical responsibility and legal liability issues regarding use of IBM Watson™ for clinical decision making. In the case discussed, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develop treatment recommendations is also offered.

Here is an excerpt:

Understanding Watson’s Limitations

There are precautions that should be taken before consulting Watson. First, it is important for physicians such as Dr O to understand the technical challenges of accessing the quality data that the system needs to analyze in order to derive recommendations. Idiosyncrasies in patient health record systems are one culprit, causing missing or incomplete data. If some of the data available to Watson is inaccurate, the resulting diagnosis and treatment recommendations could be flawed or at least inconsistent. An advantage of using a system such as Watson, however, is that it might be able to identify inconsistencies (such as those caused by human input error) that a human might otherwise overlook. Indeed, a primary benefit of systems such as Watson is that they can discover patterns that not even human experts might be aware of, and they can do so in an automated way. This automation has the potential to reduce uncertainty and improve patient outcomes.
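To make the precaution concrete: the kind of safeguard described above can start with an automated completeness and plausibility check on the patient record before it is submitted to a decision-support system. Below is a minimal illustrative sketch in Python; the field names, units, and plausibility bounds are hypothetical assumptions for the example, not part of Watson's actual interface.

```python
# Minimal illustrative sketch of a pre-consultation data-quality check.
# Field names, units, and plausibility bounds are hypothetical examples,
# not part of any real Watson or EHR API.

REQUIRED_FIELDS = ["age", "sex", "wbc_count", "hemoglobin", "platelets"]

def audit_record(record: dict) -> list:
    """Return a list of data-quality warnings for one patient record."""
    warnings = []
    for field in REQUIRED_FIELDS:
        if record.get(field) is None:
            warnings.append(f"missing field: {field}")
    wbc = record.get("wbc_count")
    # Flag physiologically implausible values (assumed units: 10^9 cells/L).
    if wbc is not None and not 0 < wbc < 500:
        warnings.append(f"implausible wbc_count: {wbc}")
    return warnings

record = {"age": 54, "sex": "F", "wbc_count": 812.0,
          "hemoglobin": 9.1, "platelets": None}
for w in audit_record(record):
    print("WARN:", w)
```

A clinician would still review the flags; the point, as the excerpt notes, is that automated checks can surface input errors that a busy human might overlook.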

Sunday, December 30, 2018

AI thinks like a corporation—and that’s worrying

Jonnie Penn
The Economist
Originally posted November 26, 2018

Here is an excerpt:

Perhaps as a result of this misguided impression, public debates continue today about what value, if any, the social sciences could bring to artificial-intelligence research. In [Herbert] Simon's view, AI itself was born in social science.

David Runciman, a political scientist at the University of Cambridge, has argued that to understand AI, we must first understand how it operates within the capitalist system in which it is embedded. “Corporations are another form of artificial thinking-machine in that they are designed to be capable of taking decisions for themselves,” he explains.

“Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years,” says Mr Runciman. The worry is that these are systems we “never really learned how to control.”

After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.

Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc. It may be, given the costly labour required to identify and address these harms, that something akin to “ethics as a service” will emerge as a new cottage industry. Ms O’Neil, for example, now runs her own service that audits algorithms.
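The audits such a service performs are proprietary, but one widely used audit statistic is easy to illustrate: the demographic parity difference, the gap in favourable-outcome rates between two groups scored by the same model. The sketch below uses invented data purely for illustration; a real audit would examine many metrics across real predictions and protected attributes.

```python
# Illustrative sketch of one common fairness-audit metric:
# demographic parity difference. All data below is invented.

def positive_rate(predictions, groups, group):
    """Fraction of favourable outcomes (1s) the model gives to one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

predictions = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = favourable model decision
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = positive_rate(predictions, groups, "a") - positive_rate(predictions, groups, "b")
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap this large would not prove wrongdoing on its own, but it is exactly the kind of pattern an auditor would flag for closer scrutiny of the training data.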

The info is here.

Thursday, April 20, 2017

Victims, vectors and villains: are those who opt out of vaccination morally responsible for the deaths of others?

Euzebiusz Jamrozik, Toby Handfield, Michael J Selgelid
Journal of Medical Ethics 2016;42:762-768.

Abstract

Mass vaccination has been a successful public health strategy for many contagious diseases. The immunity of the vaccinated also protects others who cannot be safely or effectively vaccinated—including infants and the immunosuppressed. When vaccination rates fall, diseases like measles can rapidly resurge in a population. Those who cannot be vaccinated for medical reasons are at the highest risk of severe disease and death. They thus may bear the burden of others' freedom to opt out of vaccination. It is often asked whether it is legitimate for states to adopt and enforce mandatory universal vaccination. Yet this neglects a related question: are those who opt out, where it is permitted, morally responsible when others are harmed or die as a result of their decision? In this article, we argue that individuals who opt out of vaccination are morally responsible for resultant harms to others. Using measles as our main example, we demonstrate the ways in which opting out of vaccination can result in a significant risk of harm and death to others, especially infants and the immunosuppressed. We argue that imposing these risks without good justification is blameworthy and examine ways of reaching a coherent understanding of individual moral responsibility for harms in the context of the collective action required for disease transmission. Finally, we consider several objections to this view, provide counterarguments and suggest morally permissible alternatives to mandatory universal vaccination including controlled infection, self-imposed social isolation and financial penalties for refusal to vaccinate.
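The resurgence dynamic the abstract describes follows from standard epidemiology: in the simplest SIR-type model, sustained transmission is blocked only when the immune fraction of the population exceeds 1 - 1/R0, where R0 is the pathogen's basic reproduction number. The snippet below works this out using rough textbook R0 estimates (measles is commonly cited at around 12 to 18), quoted here only to illustrate the arithmetic.

```python
# Back-of-envelope herd-immunity threshold from the simple SIR model: 1 - 1/R0.
# The R0 values below are rough textbook estimates, used for illustration only.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to block sustained spread."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15.0), ("mumps", 5.0), ("seasonal flu", 1.5)]:
    print(f"{disease:<12} R0 = {r0:>4}: threshold ~ {herd_immunity_threshold(r0):.0%}")
```

With a threshold above 90% for measles, even a few percentage points of opt-out can push a community below the protective level, which is why the authors focus on the harm that individual refusals impose on those who cannot be vaccinated.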

The article is here.

Tuesday, May 3, 2016

What Kind of Legal Rights Should Robots Have?

By Jessie Guy-Ryan
Atlas Obscura
Originally posted March 12, 2016

Here is an excerpt:

So, to summarize the above: robots can’t give performances, aren’t animate objects, but can take possession of items as extensions of their operators. The entire paper is full of interesting, sometimes contradictory, cases and is well worth reading. But the varying precedents, combined with judicial metaphors advancing the idea that robots inherently lack autonomy, may create difficulties as robots (and, inevitably, legal cases involving them) become more and more common and these narrow decisions and definitions become less and less accurate.

“The mismatch between what a robot is and how courts are likely to think of robots will only grow in salience and import over the coming decade,” Calo writes. He emphasizes the importance of exploring existing case law and establishing new institutions and agencies to provide knowledge and information to help guide courts.

The article is here.

Friday, August 21, 2015

How medical students learn ethics: an online log of their learning experiences

Carolyn Johnston & Jonathan Mok
J Med Ethics doi:10.1136/medethics-2015-102716

Abstract

Medical students experience ethics learning in a wide variety of formats, delivered not just through the taught curriculum. An audit of ethics learning was carried out at a medical school through a secure website over one academic year to determine the quantity and range of medical ethics learning in the undergraduate curriculum and compare this with topics for teaching described by the Institute of Medical Ethics (IME) (2010) and the General Medical Council's (GMC) Tomorrow's Doctors (2009). The online audit captured the participants’ reflections on their learning experiences and the impact on their future practice. Results illustrate the opportunistic nature of ethics learning, especially in the clinical years, and highlight the reality of the hidden curriculum for medical students. Overall, the ethics learning was a helpful and positive experience for the participants and fulfils the GMC and IME curriculum requirements.

The entire article is here.

How do Medical Students Learn Ethics?

Guest Post by Carolyn Johnston
BMJ Blogs
Originally posted on August 3, 2015

How interested are medical students in learning ethics and law? I have met students who have a genuine interest in the issues, who are engaged in teaching sessions and may go on to intercalate in ethics and law. On the other hand, some consider that ethics is “just common sense”. They want to know only the legal parameters within which they will go on to practise, and do not want to be troubled with a discussion of ethical issues for which there may not be a “correct” answer.

Ethics and law is a core part of the undergraduate medical curriculum, so in order to engage students successfully I need to know whether my teaching materials are relevant, useful and interesting. In 2010 I ran a student-selected component in which MBBS Year 2 students created medical ethics and law materials for pre-clinical students that they considered engaging and relevant, so that students might go further than learning merely to pass exams. One student, Marcus Sorensen, who had managed a design consultancy focusing on web design and development before starting his medical studies, came up with the idea of a website as a platform for ethics materials for King’s students, and he created http://get-ethical.co.uk.

The entire article is here.

Saturday, July 11, 2015

Does Brain Difference Affect Legal and Moral Responsibility?

HMS Center for Bioethics
Published on May 12, 2015

Brains create behavior. Yet we hold people, not brains, morally and legally responsible for their actions. Under what conditions could -- or should -- brain disorder affect the ways in which we assign moral and legal responsibility to a person?

In this conversation among a neuroscientist who studies moral judgement, a forensic psychiatrist, and a law professor, we explore three cases that highlight the relationship between brain disorder, law-breaking, and norms relating to responsibility.

Each case raises challenging questions: Can we establish whether the brain disorder caused the law-breaking behavior? Even if we can, is the presence of brain disorder morally or legally excusing? All behavior is caused: Why should some causes be excusing, but not others? If brain disorder can cause unlawful behavior, can we infer the reverse -- that people who behave unlawfully have disordered brains? Check out this provocative discussion on the state of the art at the intersection of neuroethics, brain science, philosophy, and the law.


Panel:

Fiery Cushman, PhD, is an assistant professor in the Department of Psychology at Harvard University. From 2011 to 2014 he served as a postdoctoral fellow in moral psychology, funded by the Mind, Brain and Behavior Initiative at Harvard University.

Judith Edersheim, MD, JD, is the Co-Founder and Co-Director of the Center for Law, Brain and Behavior, an Assistant Clinical Professor of Psychiatry at Harvard Medical School, and an attending psychiatrist in the Department of Psychiatry at Massachusetts General Hospital.

Amanda Pustilnik, JD, is the Senior Fellow in Law & Applied Neuroscience at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, a faculty member of the Center for Law, Brain, and Behavior at Massachusetts General Hospital, and an assistant professor of law at the University of Maryland School of Law.