Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Uncertainty.

Sunday, October 8, 2023

Moral Uncertainty and Our Relationships with Unknown Minds

Danaher, J. (2023). Moral Uncertainty and Our Relationships with Unknown Minds. Cambridge Quarterly of Healthcare Ethics, 32(4), 482–495. https://doi.org/10.1017/S0963180123000191

Abstract

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.


My take: 

John Danaher explores the ethical challenges of interacting with entities whose moral status is uncertain, such as artificial beings, animals, and patients with locked-in syndrome. Danaher argues that this is best understood as an ethical-epistemic challenge, and that we need to develop meta-moral decision rules that allow us to minimize the risks of moral wrongdoing or improve the choiceworthiness of our actions.

One particular argument that Danaher adopts is the "risk asymmetry argument," which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. In the context of human-AI relationships, Danaher argues that it is more prudent to err on the side of caution and treat AI systems as if they have moral standing, even if we are not sure whether they actually do. This is because the potential risks of mistreating AI systems, such as creating social unrest or sparking an arms race, are much greater than the potential risks of treating them too respectfully.

Danaher acknowledges that this approach may create some tension in our moral views, as it suggests that we should be skeptical about the basic moral status of AI systems, but more open to the possibility of meaningful relationships with them. However, he argues that this is the most sensible approach to take, given the ethical-epistemic challenges that we face.
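To make the structure of the risk asymmetry argument concrete, here is a minimal decision-matrix sketch in Python. The credence and cost figures are illustrative assumptions of mine, not values from Danaher's paper; the point is only that a sufficiently large asymmetry in moral costs can favor caution even when our credence that an entity has moral standing is low.

# Decision-matrix sketch of a risk asymmetry argument.
# All numbers below are stipulated for illustration only.

def expected_moral_cost(credence_has_status, cost_if_wrongly_dismissed, cost_if_wrongly_respected):
    """Expected moral cost of each policy, given our credence that the entity has moral standing."""
    return {
        "deny_standing": credence_has_status * cost_if_wrongly_dismissed,
        "grant_standing": (1 - credence_has_status) * cost_if_wrongly_respected,
    }

# Even at a 10% credence, a large cost asymmetry favors granting standing.
costs = expected_moral_cost(
    credence_has_status=0.1,
    cost_if_wrongly_dismissed=100.0,  # seriously mistreating an entity that did have standing
    cost_if_wrongly_respected=5.0,    # over-extending consideration to an entity that lacked it
)
print(costs)  # {'deny_standing': 10.0, 'grant_standing': 4.5}

On these stipulated numbers, denying standing carries the higher expected moral cost, which is the shape of reasoning Danaher examines; whether the real costs are in fact this asymmetrical is the empirical question the paper stresses.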

Thursday, July 22, 2021

The Possibility of an Ongoing Moral Catastrophe

Williams, E. G. (2015). The Possibility of an Ongoing Moral Catastrophe. Ethical Theory and Moral Practice, 18, 971–982. https://doi.org/10.1007/s10677-015-9567-7

Abstract

This article gives two arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. First is an inductive argument: most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing, so ours probably is too. Second is a disjunctive argument: there are a large number of distinct ways in which our practices could turn out to be horribly wrong, so even if no particular hypothesized moral mistake strikes us as very likely, the disjunction of all such mistakes should receive significant credence. The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly.

Friday, January 5, 2018

Implementation of Moral Uncertainty in Intelligent Machines

Bogosian, K. (2017). Implementation of Moral Uncertainty in Intelligent Machines. Minds and Machines, 27(4), 591–608.

Abstract

The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.

Introduction

Advances in artificial intelligence have led to research into methods by which sufficiently intelligent systems, generally referred to as artificial moral agents (AMAs), can be guaranteed to follow ethically defensible behavior. Successful implementation of moral reasoning may be critical for managing the proliferation of autonomous vehicles, workers, weapons, and other systems as they increase in intelligence and complexity.

Approaches to moral decision-making generally fall into two camps, “top-down” and “bottom-up” approaches (Allen et al. 2005). Top-down morality is the explicit implementation of decision rules into artificial agents. Schemes for top-down decision-making that have been proposed for intelligent machines include Kantian deontology (Arkoudas et al. 2005) and preference utilitarianism (Oesterheld 2016). Bottom-up morality avoids reference to specific moral theories by developing systems that can implicitly learn to distinguish between moral and immoral behaviors, such as cognitive architectures designed to mimic human intuitions (Bello and Bringsjord 2013). There are also hybrid approaches that merge insights from the two frameworks, such as one given by Wiltshire (2015).
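Bogosian's own framework is not reproduced in this excerpt, but the general idea of building moral uncertainty into an agent is often formalized as credence-weighted choice-worthiness: the machine maintains credences over competing moral theories and ranks actions by their expected choice-worthiness. The Python sketch below is a generic illustration under that assumption, with toy theories, credences, and scores of my own; it is not the paper's algorithm, and it sets aside the hard problem of comparing choice-worthiness across theories.

# A generic sketch of decision-making under moral uncertainty:
# rank actions by credence-weighted choice-worthiness across theories.

def expected_choiceworthiness(credences, scores):
    """credences: theory name -> credence (should sum to 1).
    scores: theory name -> {action: choice-worthiness under that theory}."""
    actions = next(iter(scores.values())).keys()
    return {a: sum(credences[t] * scores[t][a] for t in credences) for a in actions}

# Toy example: an autonomous system weighing two actions under two theories.
credences = {"utilitarian": 0.6, "deontological": 0.4}
scores = {
    "utilitarian":   {"divert": 0.9, "do_nothing": 0.2},
    "deontological": {"divert": 0.3, "do_nothing": 0.7},
}
ranking = expected_choiceworthiness(credences, scores)
print(ranking)                    # {'divert': 0.66, 'do_nothing': 0.4}
print(max(ranking, key=ranking.get))  # divert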

The article is here.

Tuesday, January 13, 2015

Is Applied Ethics Applicable Enough? Acting and Hedging under Moral Uncertainty

By Grace Boey
3 Quarks Daily
Originally published December 16, 2014

Here are two excerpts:

Lots has been written about moral decision-making under factual uncertainty. Michael Zimmerman, for example, has written an excellent book on how such ignorance impacts morality. The point of most ethical thought experiments, though, is to eliminate precisely this sort of uncertainty. Ethicists are interested in finding out things like whether, once we know all the facts of the situation, and all other things being equal, it's okay to engage in certain actions. If we're still not sure of the rightness or wrongness of such actions, or of underlying moral theories themselves, then we experience moral uncertainty.

(cut)

So, what's the best thing to do when we're faced with moral uncertainty? Unless one thinks that anything goes once uncertainty enters the picture, then doing nothing by default is not a good strategy. As the trolley case demonstrates, inaction often has major consequences. Failure to act also comes with moral ramifications...

The entire blog post is here.