Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Robots.

Friday, July 6, 2018

Can we collaborate with robots or will they take our place at work?

TU/e Research Project
ethicsandtechnology.eu

Here is an excerpt:

Finding ways to collaborate with robots

In this project, the aim is to understand how robotisation in logistics can be advanced whilst maintaining workers’ sense of meaning in work and general well-being, thereby preventing or undoing resistance towards robotisation. Sven Nyholm says: “People typically find work meaningful if they work within a well-functioning team or if they view their work as serving some larger purpose beyond themselves. Could human-robot collaborations be experienced as team-work? Would it be any kind of mistake to view a robot as a colleague? The thought of having a robot as a collaborator can seem a little weird. And yes, the increasingly robotised work environment is scary, but it is exciting at the same time. Further robotisation at work could give workers important new responsibilities and skills, which can in turn strengthen the feeling of doing meaningful work”.

The information is here.

Tuesday, June 19, 2018

British Public Fears the Day When "Computer Says No"

Jasper Hamill
The Metro
Originally published May 31, 2018

Governments and tech companies risk a popular backlash against artificial intelligence (AI) unless they open up about how it will be used, according to a new report.

A poll conducted for the Royal Society of Arts (RSA) revealed widespread concern that AI will create a ‘Computer Says No’ culture, in which crucial decisions are made automatically without consideration of individual circumstances.

If the public feels ‘victimised or disempowered’ by intelligent machines, they may resist the introduction of new technologies, even if doing so holds back progress which could benefit them, the report warned.

Among those taking part in a survey by pollsters YouGov for the RSA, fear of inflexible and unfeeling automatic decision-making was a greater concern than robots taking humans’ jobs.

The information is here.

Thursday, June 14, 2018

Sex robots are coming. We might even fall in love with them.

Sean Illing
www.vox.com
Originally published May 11, 2018

Here is an excerpt:

Sean Illing: Your essay poses an interesting question: Is mutual love with a robot possible? What’s the answer?

Lily Eva Frank:

Our essay tried to explore some of the core elements of romantic love that people find desirable, like the idea of being a perfect match for someone or the idea that we should treasure the little traits that make someone unique, even those annoying flaws or imperfections.

The key thing is that we love someone because there’s something about being with them that matters, something particular to them that no one else has. And we make a commitment to that person that holds even when they change, like aging, for example.

Could a robot do all these things? Our answer is, in theory, yes. But only a very advanced form of artificial intelligence could manage it because it would have to do more than just perform as if it were a person doing the loving. The robot would have to have feelings and internal experiences. You might even say that it would have to be self-aware.

But that would leave open the possibility that the sex bot might not want to have sex with you, which sort of defeats the purpose of developing these technologies in the first place.

(cut)

I think people are weird enough that it is probably possible for them to fall in love with a cat or a dog or a machine that doesn’t reciprocate the feelings. A few outspoken proponents of sex dolls and robots claim they love them. Check out the testimonials page on the websites of sex doll manufacturers; they say things like, “Three years later, I love her as much as the first day I met her.” I don’t want to dismiss these people’s reports.

The information is here.

Tuesday, June 12, 2018

Did Google Duplex just pass the Turing Test?

Lance Ulanoff
Medium.com
Originally published

Here is an excerpt:

In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider.

Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex?

Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended to a few minutes or more.

I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test.

It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see Wavenet technology). Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar.

The information is here.

Friday, June 8, 2018

The pros and cons of having sex with robots

Karen Turner
www.vox.com
Originally posted January 18, 2018

Here is an excerpt:

Karen Turner: Where does sex robot technology stand right now?

Neil McArthur:

When people have this idea of a sex robot, they think it’s going to look like a human being, it’s gonna walk around and say seductive things and so on. I think that’s actually the slowest-developing part of this whole nexus of sexual technology. It will come — we are going to have realistic sex robots. But there are a few technical hurdles to creating humanoid robots that are proving fairly stubborn. Making them walk is one of them. And if you use Siri or any of those others, you know that AI is proving sort of stubbornly resistant to becoming realistic.

But I think that when you look more broadly at what’s happening with sexual technology, virtual reality in general has just taken off. And it’s being used in conjunction with something called teledildonics, which is kind of an odd term. But all it means is actual devices that you hook up to yourself in various ways that sync with things that you see onscreen. It’s truly amazing what’s going on.

(cut)

When you look at the ethical or philosophical considerations, I think there are two strands. One is the concerns people have, and two, which I think maybe doesn’t get as much attention, in the media at least, is the potential advantages.

The concerns have to do with the psychological impact. As you saw with those Apple shareholders [who asked Apple to help protect children from digital addiction], we’re seeing a lot of concern about the impact that technology is having on people’s lives right now. Many people feel that anytime you’re dealing with sexual technology, those sorts of negative impacts really become intensified — specifically, social isolation, people cutting themselves off from the world.

The article is here.

Thursday, June 7, 2018

Embracing the robot

John Danaher
aeon.co
Originally posted March 19, 2018

Here is an excerpt:

Contrary to the critics, I believe our popular discourse about robotic relationships has become too dark and dystopian. We overstate the negatives and overlook the ways in which relationships with robots could complement and enhance existing human relationships.

In Blade Runner 2049, the true significance of K’s relationship with Joi is ambiguous. It seems that they really care for each other, but this could be an illusion. She is, after all, programmed to serve his needs. The relationship is an inherently asymmetrical one. He owns and controls her; she would not survive without his good will. Furthermore, there is a third-party lurking in the background: she has been designed and created by a corporation, which no doubt records the data from her interactions, and updates her software from time to time.

This is a far cry from the philosophical ideal of love. Philosophers emphasise the need for mutual commitment in any meaningful relationship. It’s not enough for you to feel a strong, emotional attachment to another; they have to feel a similar attachment to you. Robots might be able to perform love, saying and doing all the right things, but performance is insufficient.

The information is here.

Friday, April 13, 2018

The Farmbots Are Coming

Matt Jancer
www.wired.com
Originally published March 9, 2018

The first fully autonomous ground vehicles hitting the market aren’t cars or delivery trucks—they’re robo-farmhands. The Dot Power Platform is a prime example of an explosion in advanced agricultural technology, which Goldman Sachs predicts will raise crop yields 70 percent by 2050. But Dot isn’t just a tractor that can drive without a human for backup. It’s the Transformer of ag-bots, capable of performing 100-plus jobs, from hay baler and seeder to rock picker and manure spreader, via an arsenal of tool modules. And though the hulking machine can carry 40,000 pounds, it navigates fields with balletic precision.

The information is here.

Tuesday, April 10, 2018

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Sunday, April 8, 2018

Can Bots Help Us Deal with Grief?

Evan Selinger
Medium.com
Originally posted March 13, 2018

Here are two excerpts:

Muhammad is under no illusion that he’s speaking with the dead. To the contrary, Muhammad is quick to point out the simulation he created works well when generating scripts of predictable answers, but it has difficulty relating to current events, like a presidential election. In Muhammad’s eyes, this is a feature, not a bug.

Muhammad said that “out of good conscience” he didn’t program the simulation to be surprising, because that capability would deviate too far from the goal of “personality emulation.”

This constraint fascinates me. On the one hand, we’re all creatures of habit. Without habits, people would have to deliberate before acting every single time. This isn’t practically feasible, so habits can be beneficial when they function as shortcuts that spare us from paralysis resulting from overanalysis.
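
To make the “no surprises” constraint concrete, here is a minimal sketch of what script-bound personality emulation might look like. It is not Muhammad’s actual system (those details are not public); it simply illustrates a bot that can only replay archived answers, matched by crude word overlap, and declines when nothing matches.

    # Hypothetical sketch of script-bound "personality emulation": the bot can
    # only replay archived replies, never generate new ones, so it cannot
    # "surprise" the user with content the person never actually said.

    def word_overlap(a, b):
        """Crude similarity score: number of shared lowercase words."""
        return len(set(a.lower().split()) & set(b.lower().split()))

    # Archive of (prompt, reply) pairs drawn from the person's real messages.
    archive = [
        ("how are you", "Same as always, busy but happy."),
        ("what should I cook tonight", "You know I always vote for lentil soup."),
        ("I miss you", "I know. Tell me about your day."),
    ]

    def emulate(user_prompt, min_overlap=1):
        """Return the archived reply whose prompt best matches the input."""
        best = max(archive, key=lambda pair: word_overlap(user_prompt, pair[0]))
        if word_overlap(user_prompt, best[0]) < min_overlap:
            # Nothing scripted covers this topic (say, a current event),
            # so the bot declines rather than improvise.
            return "I don't have anything to say about that."
        return best[1]

    print(emulate("what do you think I should cook"))   # scripted answer
    print(emulate("who won the election"))              # graceful refusal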

(cut)

The empty chair technique that I’m referring to was popularized by Friedrich Perls (more widely known as Fritz Perls), a founder of Gestalt therapy. The basic setup looks like this: Two chairs are placed near each other; a psychotherapy patient sits in one chair and talks to the other, unoccupied chair. When talking to the empty chair, the patient engages in role-playing and acts as if a person is seated right in front of her — someone to whom she has something to say. After making a statement, launching an accusation, or asking a question, the patient then responds to herself by taking on the absent interlocutor’s perspective.

In the case of unresolved parental issues, the dialog could have the scripted format of the patient saying something to her “mother,” and then having her “mother” respond to what she said, going back and forth in a dialog until something that seems meaningful happens. The prop of an actual chair isn’t always necessary, and the context of the conversations can vary. In a bereavement context, for example, a widow might ask the chair-as-deceased-spouse for advice about what to do in a troubling situation.

The article is here.

Tuesday, March 27, 2018

Neuroblame?

Stephen Rainey
Practical Ethics
Originally posted February 15, 2018

Here is an excerpt:

Rather than bio-mimetic prostheses, replacement limbs and so on, we can predict that technologies superior to the human body will be developed. Controlled by the brains of users, these enhancements will amount to extensions of the human body, and allow greater projection of human will and intentions in the world. We might imagine a cohort of brain controlled robots carrying out mundane tasks around the home, or buying groceries and so forth, all while the user gets on with something altogether more edifying (or does nothing at all but trigger and control their bots). Maybe a highly skilled, and well-practised, user could control legions of such bots, each carrying out separate tasks.

Before getting too carried away with this line of thought, it’s probably worth getting to the point. The issue worth looking at concerns what happens when things go wrong. It’s one thing to imagine someone sending out a neuro-controlled assassin-bot to kill a rival. Regardless of the unusual route taken, this would be a pretty simple case of causing harm. It would be akin to someone simply assassinating their rival with their own hands. However, it’s another thing to consider how sloppily framing the goal for a bot, such that it ends up causing harm, ought to be parsed.

The blog post is here.

Thursday, March 22, 2018

The Ethical Design of Intelligent Robots

Sunidhi Ramesh
The Neuroethics Blog
Originally published February 27, 2018

Here is an excerpt:

In a 2016 study, a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.

The investigators proceeded to label this tendency as a concerning and alarming display of overtrust of robots—an overtrust that applied even to robots that showed indications of not being trustworthy.

Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.

The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.

“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.

Monday, February 5, 2018

A Robot Goes to College

Lindsay McKenzie
Inside Higher Ed
Originally published December 21, 2017

A robot called Bina48 has successfully taken a course in the philosophy of love at Notre Dame de Namur University, in California.

According to course instructor William Barry, associate professor of philosophy and director of the Mixed Reality Immersive Learning and Research Lab at NDNU, Bina48 is the world’s first socially advanced robot to complete a college course, a feat he described as “remarkable.” The robot took part in class discussions, gave a presentation with a student partner and participated in a debate with students from another institution.

(cut)

Barry said that working with Bina48 had been a valuable experience for him and his students. “We need to get over our existential fear about robots and see them as an opportunity,” he said. “If we approach artificial intelligence with a sense of the dignity and sacredness of all life, then we will produce robots with those same values.”

The information is here.

Wednesday, January 31, 2018

I Believe In Intelligent Design....For Robots

Matt Simon
Wired Magazine
Originally published January 3, 2018

Here is an excerpt:

Roboticists are honing their robots by essentially mimicking natural selection. Keep what works, throw out what doesn’t, to optimally adapt a robot to a particular job. “If we want to scrap something totally, we can do that,” says Nick Gravish, who studies the intersection of robotics and biology at UC San Diego. “Or we can take the best pieces from some design and put them in a new design and get rid of the things we don't need.” Think of it, then, like intelligent design—that follows the principles of natural selection.
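
As a rough illustration of that keep-what-works loop, here is a generic evolutionary sketch. It is not any particular lab’s pipeline, and the four-number “design” and its fitness function are invented stand-ins for real limb lengths, gear ratios, and field tests.

    import random

    # A "design" is just a parameter vector; the fitness function is a made-up
    # stand-in for whatever simulation or field test a real lab would run.
    def fitness(design):
        target = [0.5, 1.2, 0.8, 2.0]
        return -sum((d - t) ** 2 for d, t in zip(design, target))

    def random_design():
        return [random.uniform(0, 3) for _ in range(4)]

    def crossover(a, b):
        # "Take the best pieces from some design and put them in a new design."
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(design, rate=0.2):
        return [d + random.gauss(0, 0.1) if random.random() < rate else d
                for d in design]

    population = [random_design() for _ in range(30)]
    for generation in range(50):
        # Keep what works, throw out what doesn't.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # Rebuild the population by recombining and perturbing the survivors.
        population = survivors + [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(20)
        ]

    best = max(population, key=fitness)
    print("best design:", [round(x, 2) for x in best])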

The caveat being, biology is rather more inflexible than what roboticists are doing. After all, you can give your biped robot two extra limbs and turn it into a quadruped fairly quickly, while animals change their features—cave-dwelling species might lose their eyes, for instance—over thousands of years. “Evolution is as much a trap as a means to advance,” says Gerald Loeb, CEO and co-founder of SynTouch, which is giving robots the power to feel. “Because you get locked into a lot of hardware that worked well in previous iterations and now can't be changed because you've built your whole embryology on it.”

Evolution can still be rather explosive, though. Some 550 million years ago, the Cambrian Explosion kicked off, giving birth to an incredible array of complex organisms. Before that, life was relatively squishier, relatively calmer. But then, boom: predators aplenty, scrapping like hell to gain an edge.

The article is here.

Tuesday, December 26, 2017

When Morals Ain’t Enough: Robots, Ethics, and the Rules of the Law

Pagallo, U.
Minds & Machines (2017) 27: 625.
https://doi.org/10.1007/s11023-017-9418-5

Abstract

No single moral theory can instruct us as to whether and to what extent we are confronted with legal loopholes, e.g. whether or not new legal rules should be added to the system in the criminal law field. This question on the primary rules of the law appears crucial for today’s debate on roboethics and still goes beyond the expertise of robo-ethicists. On the other hand, attention should be drawn to the secondary rules of the law: The unpredictability of robotic behaviour and the lack of data on the probability of events, their consequences and costs, make it hard to determine the levels of risk and, hence, the amount of insurance premiums and other mechanisms on which new forms of accountability for the behaviour of robots may hinge. By following Japanese thinking, the aim is to show why legally de-regulated, or special, zones for robotics, i.e. the secondary rules of the system, pave the way to understanding what kind of primary rules we may want for our robots.

The article is here.

Should Robots Have Rights? Four Perspectives

John Danaher
Philosophical Disquisitions
Originally published October 31, 2017

Here is an excerpt:

The Four Positions on Robot Rights

Before I get into the four perspectives that Gunkel reviews, I’m going to start by asking a question that he does not raise (in this paper), namely: what would it mean to say that a robot has a ‘right’ to something? This is an inquiry into the nature of rights in the first place. I think it is important to start with this question because it is worth having some sense of the practical meaning of robot rights before we consider whether robots are entitled to them.

I’m not going to say anything particularly ground-breaking. I’m going to follow the standard Hohfeldian account of rights — one that has been used for over 100 years. According to this account, rights claims — e.g. the claim that you have a right to privacy — can be broken down into a set of four possible ‘incidents’: (i) a privilege; (ii) a claim; (iii) a power; and (iv) an immunity. So, in the case of a right to privacy, you could be claiming one or more of the following four things:
  • Privilege: That you have a liberty or privilege to do as you please within a certain zone of privacy.

  • Claim: That others have a duty not to encroach upon you in that zone of privacy.

  • Power: That you have the power to waive your claim-right not to be interfered with in that zone of privacy.

  • Immunity: That you are legally protected against others trying to waive your claim-right on your behalf.

As you can see, these four incidents are logically related to one another. Saying that you have a privilege to do X typically entails that you have a claim-right against others to stop them from interfering with that privilege. That said, you don’t need all four incidents in every case.
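
For readers who find it easier to see a taxonomy laid out explicitly, the four incidents can be written down as a small data structure. This is only an illustrative sketch of the privacy example above, not a piece of legal machinery.

    from dataclasses import dataclass

    @dataclass
    class Right:
        """A rights claim decomposed into Hohfeldian incidents."""
        holder: str
        subject: str          # e.g. "privacy"
        privilege: str = ""   # liberty to act within the protected zone
        claim: str = ""       # duty imposed on others not to encroach
        power: str = ""       # ability to waive the claim-right oneself
        immunity: str = ""    # protection against others waiving it for you

    privacy = Right(
        holder="you",
        subject="privacy",
        privilege="do as you please within your zone of privacy",
        claim="others must not encroach upon that zone",
        power="you may waive your claim not to be interfered with",
        immunity="others cannot waive that claim on your behalf",
    )

    # Not every right needs all four incidents; list only the ones asserted here.
    for name, content in vars(privacy).items():
        if name not in ("holder", "subject") and content:
            print(f"{name}: {content}")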

The blog post is here.

Tuesday, December 12, 2017

Regulation of AI: Not If But When and How

Ben Loewenstein
RSA.org
Originally published November 21, 2017

Here is an excerpt:

Firstly, AI is already embedded in today’s world, albeit in infant form. Fully autonomous vehicles are not for sale yet, but self-parking cars have been on the market for years. We already rely on biometric technology like facial recognition to grant us entry into a country, and robots are giving us banking advice.

Secondly, there is broad consensus that controls are needed. For example, a report issued last December by the office of former US President Barack Obama concluded that “aggressive policy action” would be required in the event of large job losses due to automation to ensure it delivers prosperity. If the American Government is no longer a credible source of accurate information for you, take the word of heavyweights like Bill Gates and Elon Musk, both of whom have called for AI to be regulated.

Finally, the building blocks of AI regulation are already looming in the form of rules like the European Union’s General Data Protection Regulation, which will take effect next year. The UK government’s independent review’s recommendations are also likely to become government policy. This means that we could see a regime established where firms within the same sector share data with each other under prescribed governance structures in an effort to curb the monopolies big tech companies currently enjoy on consumer information.

The latter characterises the threat facing the AI industry: the prospect of lawmakers making bold decisions that alter the trajectory of innovation. This is not an exaggeration.

The article is here.

Friday, December 8, 2017

Autonomous future could question legal ethics

Becky Raspe
Cleveland Jewish News
Originally published November 21, 2017

Here is an excerpt:

Northman said he finds the ethical implications of an autonomous future interesting, but completely contradictory to what he learned in law school in the 1990s.

“People were expected to be responsible for their activities,” he said. “And as long as it was within their means to stop something or, more tellingly, anticipate a problem before it occurs, they have an obligation to do so. When you blend software with this level of autonomy over the top of that, we are left with some difficult boundaries in trying to assess where a driver’s responsibility ends and the software programmer’s begins.”

When considering the ethics surrounding autonomous living, Paris referenced the “trolley problem.” The trolley problem goes like this: an automated vehicle is operating on an open road, and ahead there are five people in the road and one person off to the side. The question, Paris said, is whether the vehicle should continue on and hit the five people or swerve and hit just the one.

“When humans are driving vehicles, they are the moral decision makers that make those choices behind the wheel,” she said. “Can engineers program automated vehicles to replace that moral thought with an algorithm? Will they prioritize the five lives or that one person? There are a lot of questions and not too many solutions at this point. With these ethical dilemmas, you have to be careful about what is being implemented.”
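
To see what “replacing that moral thought with an algorithm” could even mean, here is a deliberately crude sketch: a cost function that counts expected casualties and picks the cheaper maneuver. No manufacturer is known to program vehicles this way; the point is only that once the choice is written as code, every contested assumption (whose lives count, and how much) has to be made explicit.

    # Crude, purely illustrative "utilitarian" maneuver selection for the
    # trolley case described above. Real driving stacks do not work this way.

    maneuvers = {
        "continue_straight": {"expected_casualties": 5},
        "swerve":            {"expected_casualties": 1},
    }

    def choose_maneuver(options, weight=lambda casualties: casualties):
        # The weighting function is where the contested moral assumptions hide:
        # here every life counts equally, but nothing in the code forces that.
        return min(options,
                   key=lambda name: weight(options[name]["expected_casualties"]))

    print(choose_maneuver(maneuvers))   # -> "swerve" under equal weighting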

The article is here.

Tuesday, November 14, 2017

What is consciousness, and could machines have it?

Stanislas Dehaene, Hakwan Lau, & Sid Kouider
Science, 27 Oct 2017: Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
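
As a loose computational caricature of that distinction (a toy sketch, not the authors’ model), C1-style processing can be pictured as a global workspace that selects one module’s report and broadcasts it for further use, while C2-style processing adds a self-monitoring estimate of how reliable that broadcast is.

    import random

    # Toy caricature of C1 (global broadcasting) and C2 (self-monitoring).
    # Each module reports (content, confidence); most processing stays local (C0).

    modules = {
        "vision":  lambda: ("red light ahead", random.uniform(0.6, 0.99)),
        "hearing": lambda: ("siren to the left", random.uniform(0.3, 0.9)),
        "memory":  lambda: ("this intersection is busy", random.uniform(0.2, 0.7)),
    }

    def step():
        reports = {name: fn() for name, fn in modules.items()}   # C0: local, parallel
        # C1: select one report and make it globally available for further use.
        winner = max(reports, key=lambda name: reports[name][1])
        content, confidence = reports[winner]
        # C2: a self-monitoring judgment about how much to trust that content.
        return winner, content, confidence

    source, content, confidence = step()
    print(f"broadcast from {source}: '{content}' (confidence {confidence:.2f})")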

The article is here.

Thursday, November 9, 2017

Morality and Machines

Robert Fry
Prospect
Originally published October 23, 2017

Here is an excerpt:

It is axiomatic that robots are more mechanically efficient than humans; equally they are not burdened with a sense of self-preservation, nor is their judgment clouded by fear or hysteria. But it is that very human fallibility that requires the intervention of the defining human characteristic—a moral sense that separates right from wrong—and explains why the ethical implications of the autonomous battlefield are so much more contentious than the physical consequences. Indeed, an open letter in 2015 seeking to separate AI from military application included the signatures of such luminaries as Elon Musk, Steve Wozniak, Stephen Hawking and Noam Chomsky. For the first time, therefore, human agency may be necessary on the battlefield not to take the vital tactical decisions but to weigh the vital moral ones.

So, who will accept these new responsibilities and how will they be prepared for the task? The first point to make is that none of this is an immediate prospect and it may be that AI becomes such a ubiquitous and beneficial feature of other fields of human endeavour that we will no longer fear its application in warfare. It may also be that morality will co-evolve with technology. Either way, the traditional military skills of physical stamina and resilience will be of little use when machines will have an infinite capacity for physical endurance. Nor will the quintessential commander’s skill of judging tactical advantage have much value when cognitive computing will instantaneously integrate sensor information. The key human input will be to make the judgments that link moral responsibility to legal consequence.

The article is here.

Tuesday, October 31, 2017

Who Is Rachael? Blade Runner and Personal Identity

Helen Beebee
iai news
Originally posted October 5, 2017

It’s no coincidence that a lot of philosophers are big fans of science fiction. Philosophers like to think about far-fetched scenarios or ‘thought experiments’, explore how they play out, and think about what light they can shed on how we should think about our own situation. What if you could travel back in time? Would you be able to kill your own grandfather, thereby preventing him from meeting your grandmother, meaning that you would never have been born in the first place? What if we could somehow predict with certainty what people would do? Would that mean that nobody had free will? What if I was really just a brain wired up to a sophisticated computer running virtual reality software? Should it matter to me that the world around me – including other people – is real rather than a VR simulation? And how do I know that it’s not?

Questions such as these routinely get posed in sci-fi books and films, and in a particularly vivid and thought-provoking way. By immersing yourself in an alternative version of reality, and by identifying or sympathising with the characters and seeing things from their point of view, you can often get a much better handle on the question. Philip K. Dick – whose Do Androids Dream of Electric Sheep?, first published in 1968, is the story on which the 1982 film Blade Runner is based – was a master at exploring these kinds of philosophical questions. Often the question itself is left unstated; his characters are generally not much prone to philosophical rumination on their situation. But it’s there in the background nonetheless, waiting for you to find it and to think about what the answer might be.

Some of the questions raised by the original Dick story don’t get any, or much, attention in Blade Runner. Mercerism – the peculiar quasi-religion of the book, which is based on empathy and which turns out to be founded on a lie – doesn’t get a mention in the film. And while, in the film as in the book, the capacity for empathy is what (supposedly) distinguishes humans from androids (or, in the film, replicants; apparently by 1982 ‘android’ was considered too dated a word), in the film we don’t get the suggestion that the purported significance of empathy, through its role in Mercerism, is really just a ploy: a way of making everyone think that androids lack, as it were, the essence of personhood, and hence can be enslaved and bumped off with impunity.

The article is here.