Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, September 2, 2019

The Robotic Disruption of Morality

John Danaher
Philosophical Disquisitions
Originally published August 2, 2019

Here is an excerpt:

2. The Robotic Disruption of Human Morality

From my perspective, the most interesting aspect of Tomasello’s theory is the importance he places on second-personal psychology (an idea he takes from the philosopher Stephen Darwall). In essence, he is arguing that all of human morality — particularly the institutional superstructure that reinforces it — is premised on how we understand those with whom we interact. It is because we see them as intentional agents, who experience and understand the world in much the same way as we do, that we start to sympathise with them and develop complex beliefs about what we owe each other. This, in turn, was made possible by the fact that humans rely so much on each other to get things done.

This raises the intriguing question: what happens if we no longer rely on each other to get things done? What if our primary collaborative and cooperative partners are machines and not our fellow human beings? Will this have some disruptive impact on our moral systems?

The answer to this depends on what these machines are or, more accurately, what we perceive them to be. Do we perceive them to be intentional agents just like other human beings, or are they perceived as something else — something different from what we are used to? There are several possibilities worth considering. I like to think of these possibilities as being arranged along a spectrum that classifies robots/AIs according to how autonomous or tool-like they are perceived to be.

At one extreme end of the spectrum we have the perception of robots/AIs as tools, i.e. as essentially equivalent to hammers and wheelbarrows. If we perceive them to be tools, then the disruption to human morality is minimal, perhaps non-existent. After all, if they are tools, then they are not really our collaborative partners; they are just things we use. Human actors remain in control and they are still our primary collaborative partners. We can sustain our second-personal morality by focusing on the tool users and not the tools.

The blog post is here.
