Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, January 7, 2021

How Might Artificial Intelligence Applications Impact Risk Management?

John Banja
AMA J Ethics. 2020;22(11):E945-951. 

Abstract

Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management. In any event, it seems that integration of AI models into health care operations will almost certainly introduce, if not new forms of risk, then a dramatically heightened magnitude of risk that will have to be managed.

AI Risks in Health Care

Artificial intelligence (AI) applications in health care have attracted enormous attention as well as immense public and private sector investment in the last few years.1 The anticipation is that AI technologies will dramatically alter—perhaps overhaul—health care practices and delivery. At the very least, hospitals and clinics will likely begin importing numerous AI models, especially “deep learning” varieties that draw on aggregate data, over the next decade.

A great deal of the ethics literature on AI has recently focused on the accuracy and fairness of algorithms, worries over privacy and confidentiality, “black box” decisional unexplainability, concerns over “big data” on which deep learning AI models depend, AI literacy, and the like. Although some of these risks, such as security breaches of medical records, have been around for some time, their materialization in AI applications will likely present large-scale privacy and confidentiality risks. AI models have already posed enormous challenges to hospitals and facilities by way of cyberattacks on protected health information, and they will introduce new ethical obligations for providers who might wish to share patient data or sell it to others. Because AI models are themselves dependent on hardware, software, algorithmic development and accuracy, implementation, data sharing and storage, continuous upgrading, and the like, risk management will find itself confronted with a new panoply of liability risks. On the one hand, risk management can choose to address these new risks by developing mitigation strategies. On the other hand, because these AI risks present a novel landscape of risk that might be quite unfamiliar, risk management might choose to leave certain of those challenges to others. This essay will discuss this “approach-avoidance” possibility in connection with 3 categories of risk—system malfunctions, privacy breaches, and consent to data repurposing—and conclude with some speculations on how those decisions might play out.

Friday, May 24, 2019

Holding Robots Responsible: The Elements of Machine Morality

Y. Bigman, A. Waytz, R. Alterovitz, and K. Gray
Trends in Cognitive Sciences

Abstract

As robots become more autonomous, people will see them as more responsible for wrongdoing. Moral psychology suggests that judgments of robot responsibility will hinge on perceived situational awareness, intentionality, and free will—plus anthropomorphism and the robot’s capacity for harm. We also consider questions of robot rights and moral decision-making.

Here is an excerpt:

Philosophy, law, and modern cognitive science all reveal that judgments of human moral responsibility hinge on autonomy. This explains why children, who seem to have less autonomy than adults, are held less responsible for wrongdoing. Autonomy is also likely crucial in judgments of robot moral responsibility. The reason people ponder and debate the ethical implications of drones and self-driving cars (but not tractors or blenders) is because these machines can act autonomously.

Admittedly, today’s robots have limited autonomy, but it is an expressed goal of roboticists to develop fully autonomous robots—machine systems that can act without human input. As robots become more autonomous, their potential for moral responsibility will only grow. Even as roboticists create robots with more “objective” autonomy, we note that “subjective” autonomy may be more important: work in cognitive science suggests that autonomy and moral responsibility are more matters of perception than objective truths.

The full paper can be downloaded here.