Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Artificial Moral Agents.

Wednesday, July 8, 2020

A Normative Approach to Artificial Moral Agency

Behdadi, D., Munthe, C.
Minds & Machines (2020). 
https://doi.org/10.1007/s11023-020-09525-8

Abstract

This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and responsibility of participants. The proposal is backed up by an analysis of the AMA debate, which is found to be overly caught in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.

Conclusion

We have argued that, to be able to contribute to pressing practical problems, the debate on AMA should be redirected to address outright normative ethical questions, specifically the questions of how and to what extent artificial entities should be involved in human practices where we normally assume moral agency and responsibility. The reason for our proposal is the high degree of conceptual confusion and lack of practical usefulness in the traditional AMA debate. This reason seems especially strong in light of the current fast development and implementation of advanced, autonomous, and self-evolving AI and robotic constructs.

Friday, April 17, 2020

Toward equipping Artificial Moral Agents with multiple ethical theories

George Rautenbach and C. Maria Keet
arXiv:2003.00935v1 [cs.CY] 2 Mar 2020

Abstract

Artificial Moral Agents (AMAs) is a field in computer science with the purpose of creating autonomous machines that can make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have made arguments as to how these machines ought to behave, or whether they should even exist.

Of the currently theorised AMAs, all research and design has been done with either no specified normative ethical theory as a basis, or at most one. This is problematic because it narrows the AMA's functional ability and versatility, which in turn leads to moral outcomes that only a limited number of people agree with (thereby undermining an AMA's ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four specific ethical norms (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
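
The serialisation idea is easy to picture in code. The following minimal Python sketch turns a toy description of one normative theory into XML; the element names, the weighting scheme, and the theory_to_xml helper are illustrative assumptions, not the paper's actual three-layer model or its XSD schema.

```python
import xml.etree.ElementTree as ET

def theory_to_xml(name, principle, weighted_duties):
    """Serialise a toy description of one normative theory to an XML string."""
    root = ET.Element("ethicalTheory", attrib={"name": name})
    ET.SubElement(root, "corePrinciple").text = principle
    duties = ET.SubElement(root, "duties")
    for duty, weight in weighted_duties.items():
        ET.SubElement(duties, "duty", attrib={"weight": str(weight)}).text = duty
    return ET.tostring(root, encoding="unicode")

# Example: a crude utilitarian profile that an AMA could load as one of several theories.
print(theory_to_xml(
    "utilitarianism",
    "Choose the action that maximises aggregate well-being.",
    {"maximise welfare": 1.0, "avoid harm": 0.8},
))
```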

From the Discussion:

A big philosophical grey area in AMAs concerns agency: that is, an entity's ability to understand the available actions and their moral values, and to freely choose between them. Whether machines can truly understand their decisions, and whether they can be held accountable for them, is a matter of philosophical discourse. Whatever the answer may be, AMA agency poses a difficult question that must be addressed.

The question is as follows: should the machine act as an agent itself, or should it act as an informant for another agent? If an AMA reasons for another agent (e.g., a person), then reasoning will be done with that person as the actor and the one who holds responsibility. This has the disadvantage of putting that person's interests before those of other morally considerable entities, especially with regard to ethical theories like egoism. Making the machine the moral agent has the advantage of objectivity where multiple people are concerned, but makes it harder to assign blame for its actions: a machine does not care about imprisonment or even disassembly. A Luddite would say it has no incentive to do good to humanity. Of course, a deterministic machine does not need incentive at all, since it will always behave according to the theory it is running. This lack of fear or “personal interest” can be good, because it ensures objective reasoning and fair consideration of affected parties.

The paper is here.

Tuesday, October 29, 2019

Should we create artificial moral agents? A Critical Analysis

John Danaher
Philosophical Disquisitions
Originally published September 21, 2019

Here is an excerpt:

So what argument is being made? At first, it might look like Sharkey is arguing that moral agency depends on biology, but I think that is a bit of a red herring. What she is arguing is that moral agency depends on emotions (particularly second personal emotions such as empathy, sympathy, shame, regret, anger, resentment, etc.). She then adds to this the assumption that you cannot have emotions without having a biological substrate. This suggests that Sharkey is making something like the following argument:

(1) You cannot have explicit moral agency without having second personal emotions.

(2) You cannot have second personal emotions without being constituted by a living biological substrate.

(3) Robots cannot be constituted by a living biological substrate.

(4) Therefore, robots cannot have explicit moral agency.

Assuming this is a fair reconstruction of the reasoning, I have some questions about it. First, taking premises (2) and (3) as a pair, I would query whether having a biological substrate really is essential for having second personal emotions. What is the necessary connection between biology and emotionality? This smacks of biological mysterianism or dualism to me, almost a throwback to the time when biologists thought that living creatures possessed some élan vital that separated them from the inanimate world. Modern biology and biochemistry cast all that into doubt. Living creatures are (admittedly extremely complicated) evolved biochemical machines. There is no essential and unbridgeable chasm between the living and the inanimate.

The info is here.

Monday, June 11, 2018

Can Morality Be Engineered In Artificial General Intelligence Systems?

Abhijeet Katte
Analytics India Magazine
Originally published May 10, 2018

Here is an excerpt:

The report, Engineering Moral Agents – from Human Morality to Artificial Morality, discusses challenges in engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide range of backgrounds, including philosophy. AGI-focused research is evolving toward the formalization of moral theories to act as a base for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland described a project on teaching formal ethics to computer-science students, in which the group built a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.

Another study, titled Towards Moral Autonomous Systems, from a group of European researchers, states that there is a real need today for a functional system of ethical reasoning, as AI systems that function as part of our society are ready to be deployed. One of the suggestions is that every assisted-living AI system have a “Why did you do that?” button which, when pressed, causes the robot to explain why it carried out the previous action.
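
As a toy illustration of that suggestion, the sketch below shows an agent that records a reason alongside each action so it can answer a “Why did you do that?” query; the ExplainableAgent class and its methods are hypothetical and are not drawn from the cited report.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainableAgent:
    # (action, reason) pairs, most recent last
    history: list = field(default_factory=list)

    def act(self, action: str, reason: str) -> None:
        """Carry out an action and record why it was chosen."""
        self.history.append((action, reason))
        print(f"Doing: {action}")

    def why_did_you_do_that(self) -> str:
        """Answer the 'Why did you do that?' button for the most recent action."""
        if not self.history:
            return "I have not done anything yet."
        action, reason = self.history[-1]
        return f"I did '{action}' because {reason}."

agent = ExplainableAgent()
agent.act("closed the curtains", "the resident asked for less light")
print(agent.why_did_you_do_that())
```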

The information is here.

Tuesday, December 27, 2016

Artificial moral agents: creative, autonomous and social. An approach based on evolutionary computation

Ioan Muntean and Don Howard
Frontiers in Artificial Intelligence and Applications
Volume 273: Sociable Robots and the Future of Social Relations

Abstract

In this paper we propose a model of artificial normative agency that accommodates some social competencies that we expect from artificial moral agents. The artificial moral agent (AMA) discussed here is based on two components: (i) a version of virtue ethics of human agents (VE) adapted to artificial agents, called here “virtual virtue ethics” (VVE); and (ii) an implementation based on evolutionary computation (EC), more concretely genetic algorithms. The reasons to choose VVE and EC are related to two elements that are, we argue, central to any approach to artificial morality: autonomy and creativity. The greater the autonomy an artificial agent has, the more it needs moral standards. In virtue ethics, each agent builds her own character over time; creativity comes in degrees as the individual becomes morally competent. The model of an autonomous and creative AMA thus implemented is called GAMA: Genetic(-inspired) Autonomous Moral Agent. First, unlike the majority of other implementations of machine ethics, our model is more agent-centered than action-centered; it emphasizes the developmental and behavioral aspects of the ethical agent. Second, in our model, the AMA does not make decisions exclusively and directly by following rules or by calculating the best outcome of an action. The model incorporates rules as initial data (as the initial population of the genetic algorithms) or as correction factors, but not as the main structure of the algorithm. Third, our computational model is less conventional, or at least it does not fall within the Turing tradition in computation. Genetic algorithms are excellent searching tools that avoid local minima and generate solutions based on previous results. In the GAMA model, which is only prospective at this stage, the VVE approach to ethics is better implemented by EC. Finally, the GAMA agents can display sociability through competition among the best moral actions and the desire to win the competition. Both VVE and EC are better suited to a “social approach” to AMA than the standard approaches. GAMA is thus a more promising “moral and social artificial agent”.
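
To make the evolutionary-computation idea more concrete, here is a minimal Python sketch of a genetic algorithm in the spirit the abstract describes, with rule-like policies seeded as the initial population; the duties, fitness function, and parameters are illustrative assumptions and not the authors' GAMA implementation.

```python
import random

DUTIES = ["avoid harm", "be honest", "help others"]
TARGET = [0.6, 0.3, 0.1]  # stand-in for whatever moral evaluation a designer might use

def fitness(policy):
    # Higher is better: negative squared distance from the (assumed) target weighting.
    return -sum((p - t) ** 2 for p, t in zip(policy, TARGET))

def mutate(policy, rate=0.1):
    # Small random perturbation of each weight, clipped at zero.
    return [max(0.0, p + random.uniform(-rate, rate)) for p in policy]

def crossover(a, b):
    # Blend two parent policies.
    return [(x + y) / 2 for x, y in zip(a, b)]

# Rule-like policies enter as the initial population, echoing the abstract's idea
# of incorporating rules as initial data rather than as the main structure.
population = [
    [1.0, 0.0, 0.0],   # "always avoid harm"
    [0.0, 1.0, 0.0],   # "always be honest"
    [0.0, 0.0, 1.0],   # "always help others"
    [random.random() for _ in DUTIES],
]

for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:2]                                   # selection (elitism)
    children = [mutate(crossover(*parents)) for _ in range(len(population) - 2)]
    population = parents + children

best = max(population, key=fitness)
print("Best policy:", dict(zip(DUTIES, best)))
```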

The article is here.