Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Machines.

Friday, May 11, 2018

Samantha’s suffering: why sex machines should have rights too

Victoria Brooks
The Conversation
Originally posted April 5, 2018

Here is the conclusion:

Machines are indeed what we make them. This means we have an opportunity to avoid assumptions and prejudices brought about by the way we project human feelings and desires. But does this ethically entail that robots should be able to consent to or refuse sex, as human beings would?

The innovative philosophers and scientists Frank and Nyholm have found many legal reasons for answering both yes and no (a robot’s lack of human consciousness and legal personhood, and the “harm” principle, for example). Again, we find ourselves seeking to apply a very human law. But feelings of suffering outside of relationships, or identities accepted as the “norm”, are often illegitimised by law.

So a “legal” framework which has its origins in heteronormative desire does not necessarily construct the foundation of consent and sexual rights for robots. Rather, as the renowned post-human thinker Rosi Braidotti argues, we need an ethic, as opposed to a law, which helps us find a practical and sensitive way of deciding, taking into account emergences from cross-species relations. The kindness and empathy we feel toward Samantha may be a good place to begin.

The article is here.

Monday, August 7, 2017

Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci

Sven Nyholm
Science and Engineering Ethics
pp 1–19

Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

The article is here.

Wednesday, July 29, 2015

Machine ethics: The robot’s dilemma

Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence.

By Boer Deng
Nature
01 July 2015

Here is an excerpt:

Advocates argue that the rule-based approach has one major virtue: it is always clear why the machine makes the choice that it does, because its designers set the rules. That is a crucial concern for the US military, for which autonomous systems are a key strategic goal. Whether machines assist soldiers or carry out potentially lethal missions, “the last thing you want is to send an autonomous robot on a military mission and have it work out what ethical rules it should follow in the middle of things”, says Ronald Arkin, who works on robot ethics software at Georgia Institute of Technology in Atlanta. If a robot had the choice of saving a soldier or going after an enemy combatant, it would be important to know in advance what it would do.

With support from the US defence department, Arkin is designing a program to ensure that a military robot would operate according to international laws of engagement. A set of algorithms called an ethical governor computes whether an action such as shooting a missile is permissible, and allows it to proceed only if the answer is 'yes'.

In a virtual test of the ethical governor, a simulation of an unmanned autonomous vehicle was given a mission to strike enemy targets — but was not allowed to do so if there were buildings with civilians nearby. Given scenarios that varied the location of the vehicle relative to an attack zone and civilian complexes such as hospitals and residential buildings, the algorithms decided when it would be permissible for the autonomous vehicle to accomplish its mission.
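
The excerpt does not give the governor's internals, so the following is a rough illustration only: a minimal Python sketch of the permit-or-veto pattern described above, permitting a strike only when no civilian site lies within an assumed blast radius. Every name, the flat 2D geometry, and the 50-metre threshold are invented for this example and are not taken from Arkin's system.

from dataclasses import dataclass

@dataclass
class Site:
    """A point on a simplified 2D map (hypothetical coordinates)."""
    x: float
    y: float

BLAST_RADIUS_M = 50.0  # assumed safety radius; not from the article

def distance(a: Site, b: Site) -> float:
    """Euclidean distance between two map points."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def ethical_governor(target: Site, civilian_sites: list[Site]) -> bool:
    """Permit the action only if every civilian site lies outside the
    assumed blast radius; otherwise veto it."""
    return all(distance(target, site) > BLAST_RADIUS_M for site in civilian_sites)

# A hospital 50 m from the target sits on the boundary, so the strike is vetoed.
target = Site(0.0, 0.0)
hospital = Site(30.0, 40.0)
print(ethical_governor(target, [hospital]))  # prints False: not permitted

The real governor reasons over the international laws of engagement rather than a single distance check; the point of the sketch is only the gate structure, in which the action proceeds only on an explicit "yes".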

The entire article is here.