Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, October 9, 2022

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines 30, 195–218 (2020).
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Free will and Autonomy

Several AMA debaters have claimed that free will is necessary for being a moral agent (Himma 2009; Hellström 2012; Friedman and Kahn 1992). Others make a similar (and perhaps related) claim that autonomy is necessary (Lin et al. 2008; Schulzke 2013). In the AMA debate, some argue that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007), while others, like James Moor (2006, 2009), are open to the possibility that future machines might acquire free will. Others (Powers 2006; Tonkens 2009) have proposed that the plausibility of a free will condition on moral agency may vary depending on what type of normative ethical theory is assumed, but they have not developed this idea further.

Despite appealing to the concept of free will, this portion of the AMA debate does not engage with key problems in the free will literature, such as the debate about compatibilism and incompatibilism (O’Connor 2016). Those in the AMA debate assume the existence of free will among humans, and ask whether artificial entities can satisfy a source control condition (McKenna et al. 2015). That is, the question is whether or not such entities can be the origins of their actions in a way that allows them to control what they do in the sense assumed of human moral agents.

An exception to this framing of the free will topic in the AMA debate occurs when Johnson writes that ‘… the non-deterministic character of human behavior makes it somewhat mysterious, but it is only because of this mysterious, non-deterministic aspect of moral agency that morality and accountability are coherent’ (Johnson 2006 p. 200). This is a line of reasoning that seems to assume an incompatibilist and libertarian sense of free will, assuming both that it is needed for moral agency and that humans do possess it. This, of course, makes the notion of human moral agents vulnerable to standard objections in the general free will debate (Shaw et al. 2019). Additionally, we note that Johnson’s idea about the presence of a ‘mysterious aspect’ of human moral agents might allow for AMA in the same way as Dreyfus and Hubert’s reference to the subconscious: artificial entities may be built to incorporate this aspect.

The question of sourcehood in the AMA debate connects to the independence argument: For instance, when it is claimed that machines are created for a purpose and therefore are nothing more than advanced tools (Powers 2006; Bryson 2010; Gladden 2016) or prosthetics (Johnson and Miller 2008), this is thought to imply that machines can never be the true or genuine source of their own actions. This argument questions whether the independence required for moral agency (by both functionalists and standardists) can be found in a machine. If a machine’s repertoire of behaviors and responses is the result of elaborate design then it is not independent, the argument goes. Floridi and Sanders question this proposal by referring to the complexity of ‘human programming’, such as genes and arranged environmental factors (e.g. education). 

Monday, August 10, 2020

An approach for combining ethical principles with public opinion to guide public policy

Awad, E., et al.
Artificial Intelligence
Volume 287, October 2020, 103349

Abstract

We propose a framework for incorporating public opinion into policy making in situations where values are in conflict. This framework advocates creating vignettes representing value choices, eliciting the public's opinion on these choices, and using machine learning to extract principles that can serve as succinct statements of the policies implied by these choices and rules to guide the behavior of autonomous systems.
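To make the pipeline concrete, here is a minimal sketch of the kind of principle-extraction step the abstract describes, assuming hypothetical vignette data in which each row encodes the difference between two options on a few named features together with a respondent's choice between them. The feature names, data, and choice of logistic regression are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): learn a succinct
# principle from crowd-sourced vignette responses with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: each row is the feature difference between option A and
# option B in one vignette (lives saved, priority given to passengers), and
# y records whether the respondent chose option A (1) or option B (0).
X = np.array([[2, 0], [1, -1], [-1, 1], [3, 0], [-2, 0], [0, 1]])
y = np.array([1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# The fitted weights act as a succinct statement of the implied principle:
# a large positive weight on the lives-saved difference reads as
# "other things being equal, choose the option that saves more lives."
for name, w in zip(["lives_saved_diff", "passenger_priority_diff"], model.coef_[0]):
    print(f"{name}: weight = {w:+.2f}")
```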

From the Discussion

In the general case, we would strongly recommend input from experts (including ethicists, legal scholars, and policymakers, among others). Still, two facts remain: (1) views on life and death are emotionally driven, so it's hard for people to accept some authority figure telling them how they should behave; (2) even from an ethical perspective, it's not always clear which view is the correct one. In such cases, when policy experts cannot reach a consensus, they may use citizens' preferences as a tie-breaker. Doing so helps reach a conclusive decision, promotes the values of democracy, increases public acceptance of the technology (especially when it provides much better safety), and strengthens citizens' sense of involvement and citizenship. On the other hand, full dependence on public input always carries the possibility of a tyranny of the majority, among other issues raised above. This is why our proposed method provides a suitable approach that combines citizens' input with responsible oversight by experts.
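The expert-first, citizens-as-tie-breaker procedure described above can be written down as a small decision rule. The sketch below is illustrative only: the consensus threshold, vote format, and function names are assumptions, not anything specified in the paper.

```python
# Illustrative sketch of the expert-first, citizens-as-tie-breaker procedure.
# The consensus threshold and the vote format are assumptions for illustration.
from collections import Counter

def decide_policy(expert_votes, citizen_votes, consensus_threshold=0.8):
    """Return the chosen option together with the basis for the decision."""
    expert_counts = Counter(expert_votes)
    top_option, top_count = expert_counts.most_common(1)[0]

    # Adopt the expert view only if it clears the consensus threshold.
    if top_count / len(expert_votes) >= consensus_threshold:
        return top_option, "expert consensus"

    # Otherwise fall back on aggregated citizen preferences as a tie-breaker.
    citizen_counts = Counter(citizen_votes)
    return citizen_counts.most_common(1)[0][0], "citizen tie-breaker"

# Example: the experts are split, so the citizens' majority view settles the issue.
print(decide_policy(["A", "B", "A", "B"], ["B", "B", "A", "B", "B"]))
# -> ('B', 'citizen tie-breaker')
```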

In this paper, we propose a framework that can help resolve conflicting moral values. In doing so, we draw on two decades of research on representing and abstracting values from cases, applying it to the values expressed in crowd-sourced data in order to inform public policy. As a result, the resolution of competing values is produced in two forms: a set of rules that can be implemented in autonomous systems to guide their behavior, and a human-readable representation (a policy) of those rules. At the core of this framework is the collection of data from the public.
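As a rough illustration of the "two forms" just mentioned, the sketch below expresses one and the same learned rule both as a decision function an autonomous system could call and as a human-readable policy statement. The weights and feature names are invented for the example and are not taken from the paper.

```python
# Illustrative sketch: one learned rule rendered in the two forms the paper
# describes. The weights and feature names are assumptions for illustration.
weights = {"lives_saved": 1.4, "passenger_priority": 0.3}

def machine_rule(option_a, option_b):
    """Form 1: a decision rule an autonomous system could apply directly."""
    score = sum(w * (option_a[f] - option_b[f]) for f, w in weights.items())
    return "A" if score >= 0 else "B"

def human_readable_policy():
    """Form 2: a succinct, human-readable statement of the same rule."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    return "All else equal, favor the option that is better on: " + ", ".join(ranked) + "."

print(machine_rule({"lives_saved": 3, "passenger_priority": 0},
                   {"lives_saved": 1, "passenger_priority": 1}))  # -> A
print(human_readable_policy())
```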

Wednesday, July 8, 2020

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines (2020). 
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Conclusion

We have argued that to be able to contribute to pressing practical problems, the debate on AMA should be redirected to address outright normative ethical questions. Specifically, the questions of how and to what extent artificial entities should be involved in human practices where we normally assume moral agency and responsibility. The reason for our proposal is the high degree of conceptual confusion and lack of practical usefulness of the traditional AMA debate. And this reason seems especially strong in light of the current fast development and implementation of advanced, autonomous and self-evolving AI and robotic constructs.

Friday, January 5, 2018

Implementation of Moral Uncertainty in Intelligent Machines

Kyle Bogosian
Minds and Machines
December 2017, Volume 27, Issue 4, pp 591–608

Abstract

The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.
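One common way to formalize the kind of moral uncertainty the abstract refers to is to assign credences to competing moral theories and choose the action with the highest credence-weighted evaluation. The sketch below illustrates only that generic idea; the theories, scores, and credences are invented, and it should not be read as the paper's own framework.

```python
# Generic sketch of choice under moral uncertainty: weight each theory's
# evaluation of an action by the credence assigned to that theory and pick
# the action with the highest expected score. Theories, scores, and credences
# are invented for illustration; this is not the paper's own framework.

# Credence (subjective probability) that each moral theory is correct.
credences = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}

# Each theory's (normalized) evaluation of each available action.
scores = {
    "utilitarian":   {"divert": 0.9, "do_nothing": 0.1},
    "deontological": {"divert": 0.2, "do_nothing": 0.8},
    "virtue":        {"divert": 0.6, "do_nothing": 0.5},
}

def expected_choiceworthiness(action):
    return sum(credences[t] * scores[t][action] for t in credences)

best = max(["divert", "do_nothing"], key=expected_choiceworthiness)
print(best, round(expected_choiceworthiness(best), 2))  # -> divert 0.63
```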

Introduction

Advances in artificial intelligence have led to research into methods by which sufficiently intelligent systems, generally referred to as artificial moral agents (AMAs), can be guaranteed to follow ethically defensible behavior. Successful implementation of moral reasoning may be critical for managing the proliferation of autonomous vehicles, workers, weapons, and other systems as they increase in intelligence and complexity.

Approaches towards moral decision-making generally fall into two camps, “top-down” and “bottom-up” approaches (Allen et al. 2005). Top-down morality is the explicit implementation of decision rules into artificial agents. Schemes for top-down decision-making that have been proposed for intelligent machines include Kantian deontology (Arkoudas et al. 2005) and preference utilitarianism (Oesterheld 2016). Bottom-up morality avoids reference to specific moral theories by developing systems that can implicitly learn to distinguish between moral and immoral behaviors, such as cognitive architectures designed to mimic human intuitions (Bello and Bringsjord 2013). There are also hybrid approaches which merge insights from the two frameworks, such as the one given by Wiltshire (2015).
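The two camps can be contrasted with a toy example: a top-down agent applies an explicitly coded rule, while a bottom-up agent learns to separate permissible from impermissible behavior from labelled examples. The sketch below is a deliberately simplified illustration with invented features and labels, not a reproduction of any of the cited systems.

```python
# Toy contrast between the two camps; the features and labels are invented.
from sklearn.tree import DecisionTreeClassifier

# Top-down: an explicitly programmed rule (here, a crude consequentialist one).
def top_down_permissible(action):
    return action["harm_caused"] < action["harm_prevented"]

# Bottom-up: learn to separate permissible from impermissible behavior from
# labelled examples, without any moral theory being stated explicitly.
examples = [
    {"harm_caused": 0, "harm_prevented": 5},   # labelled permissible
    {"harm_caused": 1, "harm_prevented": 10},  # labelled permissible
    {"harm_caused": 4, "harm_prevented": 1},   # labelled impermissible
    {"harm_caused": 3, "harm_prevented": 0},   # labelled impermissible
]
labels = [1, 1, 0, 0]
X = [[e["harm_caused"], e["harm_prevented"]] for e in examples]
bottom_up = DecisionTreeClassifier().fit(X, labels)

candidate = {"harm_caused": 0, "harm_prevented": 8}
print(top_down_permissible(candidate))        # True
print(bool(bottom_up.predict([[0, 8]])[0]))   # True
```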

The article is here.