Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Machine. Show all posts

Monday, August 10, 2020

An approach for combining ethical principles with public opinion to guide public policy

E. Awad et al.
Artificial Intelligence
Volume 287, October 2020, 103349

Abstract

We propose a framework for incorporating public opinion into policy making in situations where values are in conflict. This framework advocates creating vignettes representing value choices, eliciting the public's opinion on these choices, and using machine learning to extract principles that can serve as succinct statements of the policies implied by these choices and rules to guide the behavior of autonomous systems.

From the Discussion

In the general case, we would strongly recommend input from experts (including ethicists, legal scholars, and policymakers, among others). Still, two facts remain: (1) views on life and death are emotionally driven, so it is hard for people to accept an authority figure telling them how they should behave; (2) even from an ethical perspective, it is not always clear which view is correct. In such cases, when policy experts cannot reach a consensus, they may use citizens' preferences as a tie-breaker. Doing so helps reach a conclusive decision, promotes democratic values, increases public acceptance of this technology (especially when it provides much better safety), and fosters citizens' sense of involvement and citizenship. On the other hand, full dependence on public input would always carry the possibility of a tyranny of the majority, among other issues raised above. This is why our proposed method provides a suitable approach that combines the use of citizens' input with responsible oversight by experts.

In this paper, we propose a framework that can help resolve conflicting moral values. In doing so, we exploit two decades of research on the representation and abstraction of values from cases, applying it to the values expressed in crowd-sourced data in order to inform public policy. As a result, the resolution of competing values is produced in two forms: a set of rules that can be implemented in autonomous systems to guide their behavior, and a human-readable representation (policy) of those rules. At the core of this framework is the collection of data from the public.
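The extraction step the authors describe, distilling crowd-sourced vignette choices into a succinct, human-readable policy and a machine-usable rule, can be sketched in miniature. The data, attribute names, and majority-vote aggregation below are invented for illustration only; the paper's actual vignettes and machine-learning methods are considerably more sophisticated than this stand-in.

```python
from collections import Counter, defaultdict

# Hypothetical crowd responses to value-conflict vignettes: each record
# states which option a respondent preferred for a given attribute
# contrast. (Illustrative data only; not drawn from the paper.)
responses = [
    ("more_lives_vs_fewer", "spare_more"),
    ("more_lives_vs_fewer", "spare_more"),
    ("more_lives_vs_fewer", "spare_fewer"),
    ("pedestrians_vs_passengers", "spare_pedestrians"),
    ("pedestrians_vs_passengers", "spare_pedestrians"),
    ("pedestrians_vs_passengers", "spare_passengers"),
]

def extract_principles(responses):
    """Tally choices per attribute and keep the majority preference.

    The returned dict doubles as a human-readable policy statement and
    as a lookup rule an autonomous system could consult.
    """
    by_attr = defaultdict(Counter)
    for attribute, choice in responses:
        by_attr[attribute][choice] += 1
    return {attr: counts.most_common(1)[0][0]
            for attr, counts in by_attr.items()}

policy = extract_principles(responses)
print(policy)
# -> {'more_lives_vs_fewer': 'spare_more',
#     'pedestrians_vs_passengers': 'spare_pedestrians'}
```

A real implementation would replace the majority vote with a learned model and would add the expert oversight the authors insist on, but the two outputs, a succinct statement of the policy and a rule that can guide behavior, are the same in shape.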

Wednesday, July 8, 2020

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines (2020). 
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Conclusion

We have argued that to be able to contribute to pressing practical problems, the debate on AMA should be redirected to address outright normative ethical questions. Specifically, the questions of how and to what extent artificial entities should be involved in human practices where we normally assume moral agency and responsibility. The reason for our proposal is the high degree of conceptual confusion and lack of practical usefulness of the traditional AMA debate. And this reason seems especially strong in light of the current fast development and implementation of advanced, autonomous and self-evolving AI and robotic constructs.