Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Robots.

Tuesday, February 27, 2024

Robot, let us pray! Can and should robots have religious functions? An ethical exploration of religious robots

Puzio, A.
AI & Soc (2023).
https://doi.org/10.1007/s00146-023-01812-z

Abstract

Considerable progress is being made in robotics, with robots being developed for many different areas of life: there are service robots, industrial robots, transport robots, medical robots, household robots, sex robots, exploration robots, military robots, and many more. As robot development advances, an intriguing question arises: should robots also encompass religious functions? Religious robots could be used in religious practices, education, discussions, and ceremonies within religious buildings. This article delves into two pivotal questions, combining perspectives from philosophy and religious studies: can and should robots have religious functions? Section 2 initiates the discourse by introducing and discussing the relationship between robots and religion. The core of the article (developed in Sects. 3 and 4) scrutinizes the fundamental questions: can robots possess religious functions, and should they? After an exhaustive discussion of the arguments, benefits, and potential objections regarding religious robots, Sect. 5 addresses the lingering ethical challenges that demand attention. Section 6 presents a discussion of the findings, outlines the limitations of this study, and ultimately responds to the dual research question. Based on the study’s results, brief criteria for the development and deployment of religious robots are proposed, serving as guidelines for future research. Section 7 concludes by offering insights into the future development of religious robots and potential avenues for further research.


Summary

Can robots fulfill religious functions? The article explores the technical feasibility of designing robots that could engage in religious practices, education, and ceremonies. It acknowledges the current limitations of robots, particularly their lack of sentience and spiritual experience. However, it also suggests potential avenues for development, such as robots equipped with advanced emotional intelligence and the ability to learn and interpret religious texts.

Should robots fulfill religious functions? This is where the ethical debate unfolds. The article presents arguments both for and against. On the one hand, robots could potentially offer various benefits, such as increasing accessibility to religious practices, providing companionship and spiritual guidance, and even facilitating interfaith dialogue. On the other hand, concerns include the potential for robotization of faith, the blurring of lines between human and machine in the context of religious experience, and the risk of reinforcing existing biases or creating new ones.

Ultimately, the article concludes that there is no easy answer to the question of whether robots should have religious functions. It emphasizes the need for careful consideration of the ethical implications and ongoing dialogue between religious communities, technologists, and ethicists. This ethical exploration paves the way for further research and discussion as robots continue to evolve and their potential roles in society expand.

Sunday, November 26, 2023

How robots can learn to follow a moral code

Neil Savage
Nature.com
Originally posted 26 OCT 23

Here is an excerpt:

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple.

One potential approach for dealing with differing opinions on moral issues is what Levine calls a moral parliament. “This problem of who gets to decide is not just a problem for AI. It’s a problem for governance of a society,” she says. “We’re looking to ideas from governance to help us think through these AI problems.” Similar to a political assembly or parliament, she suggests representing multiple different views in an AI system. “We can have algorithmic representations of different moral positions,” she says. The system would then attempt to calculate what the likely consensus would be on a given issue, based on a concept from game theory called cooperative bargaining.
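To make the idea of algorithmically represented moral positions more concrete, here is a minimal sketch, assuming each "delegate" in the parliament can score candidate actions on a common scale; the Nash product is one standard cooperative-bargaining rule, and all names and numbers below are invented for illustration rather than drawn from the article.

```python
# Hypothetical sketch: a "moral parliament" that aggregates several moral
# positions via a cooperative-bargaining (Nash product) rule.
# The positions, actions, and utility numbers are illustrative only.

from math import prod

# Each "delegate" scores candidate actions on a 0-1 scale.
positions = {
    "utilitarian":   {"cut_line": 0.2, "wait": 0.7, "ask_permission": 0.8},
    "deontological": {"cut_line": 0.0, "wait": 0.9, "ask_permission": 0.9},
    "virtue_ethics": {"cut_line": 0.1, "wait": 0.8, "ask_permission": 0.9},
}

# Disagreement point: the utility each position gets if no consensus is reached.
disagreement = {"utilitarian": 0.1, "deontological": 0.1, "virtue_ethics": 0.1}


def nash_consensus(positions, disagreement):
    """Return the action maximising the Nash product of utility gains."""
    actions = next(iter(positions.values())).keys()

    def gain_product(action):
        return prod(
            max(positions[p][action] - disagreement[p], 0.0)
            for p in positions
        )

    return max(actions, key=gain_product)


if __name__ == "__main__":
    print(nash_consensus(positions, disagreement))  # -> "ask_permission"
```

In this toy run the parliament settles on "ask_permission", the only action that gives every position a clear gain over its disagreement point.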


Here is my summary:

Autonomous robots will need to be able to make ethical decisions in order to safely and effectively interact with humans and the world around them.

The article proposes a number of ways that robots can be taught to follow a moral code. One approach is to use supervised learning, in which robots are trained on a dataset of moral dilemmas and their corresponding solutions. Another approach is to use reinforcement learning, in which robots are rewarded for making ethical decisions and punished for making unethical decisions.
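As a rough, hedged illustration of the supervised-learning route only (not the article's actual method), the sketch below fits a small text classifier to a handful of labeled moral judgments; real systems would use large language models and far larger, carefully curated datasets, and every example and label here is invented.

```python
# Minimal sketch of the supervised-learning approach described above:
# train a classifier on example moral judgments, then predict on a new case.
# The tiny dataset and labels are invented for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

dilemmas = [
    "taking a coworker's lunch without asking",
    "returning a lost wallet to its owner",
    "jumping the queue at a busy deli counter",
    "going back for a new spoon after being served",
    "lying to a customer about a product defect",
    "helping a stranger carry heavy groceries",
]
labels = ["wrong", "acceptable", "wrong", "acceptable", "wrong", "acceptable"]

# TF-IDF features plus logistic regression: a deliberately simple stand-in
# for the large language models discussed in the article.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(dilemmas, labels)

print(model.predict(["keeping a wallet you found on the street"]))
```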

The article also discusses the challenges of teaching robots to follow a moral code. One challenge is that moral codes are often complex and nuanced, and it can be difficult to define them in a way that can be understood by a robot. Another challenge is that moral codes can vary across cultures, and it is important to develop robots that can adapt to different moral frameworks.

The article concludes by arguing that teaching robots to follow a moral code is an important ethical challenge that we need to address as we develop more sophisticated artificial intelligence systems.

Tuesday, March 22, 2022

Could we fall in love with robots?

Rich Wordsworth
eandt.theiet.org
Originally published 6 DEC 21

Here is an excerpt:

“So what are people’s expectations? They’re being fed a very particular idea of how [robot companions] should look. But when you start saying to people, ‘They can look like anything,’ then the imagination really opens up.”

Perhaps designing companion robots that deliberately don’t emulate human beings is the answer to that common sci-fi question of whether or not a relationship with a robot can ever be reciprocal. A robot with a Kindle for a head isn’t likely to hoodwink many people at the singles bar. When science fiction shows us robotic lovers, they are overwhelmingly portrayed as human (at least outwardly). This trips something defensive in us: the sense of unease or revulsion we feel when a non-human entity tries to deceive us into thinking that it’s human is such a common phenomenon (thanks largely to CGI in films and video games) that it has its own name: ‘the Uncanny Valley’. Perhaps in the future, the engineering of humanoid robots will progress to the point where we really can’t tell (without a signed waiver and a toolbox) whether a ‘person’ is flesh and blood or wires and circuitry. But in the meantime, maybe the best answer is simply not to bother attempting to emulate humans and explore the outlandish.

“You can form a friendship; you can form a bond,” says Devlin of non-humanlike machines. “That bond is one-way, but if the machine shows you any form of response, then you can project onto that and feel social. We treat machines socially because we are social creatures and it’s almost enough to make us buy into it. Not delusionally, but to suspend our disbelief and feel a connection. People feel connections with their vacuum cleaners: mine’s called Babbage and I watch him scurrying around, I pick him up, I tell him, ‘Don’t go there!’ It’s like having a robot pet – but I’m perfectly aware he’s just a lump of plastic. People talk to their Alexas when they’re lonely and they want to chat. So, yes: you can feel a bond there.

“It’s not the same as a human friendship: it’s a new social category that’s emerging that we haven’t really seen before.”

As for the question of reciprocity, Devlin doesn’t see a barrier there with robots that doesn’t already exist in human relationships.

“You’ll get a lot of people going, ‘Oh, that’s not true friendship; that’s not real,’” Devlin says, sneeringly. “Well, if it feels real and if you’re happy in it, is that a problem? It’s the same people who say you can’t have true love unless it’s reciprocated, which is the biggest lie I’ve ever heard because there are so many people out there who are falling in love with people they’ve never even met! Fictional people! Film stars! Everybody! Those feelings are very, very valid to someone who’s experiencing them.”

“How are you guys doing here?” The waitress asks with perfect waitress-in-a-movie timing as Twombly and Catherine sit, processing the former’s new relationship with Samantha in silence.

“Fine,” Catherine blurts. “We’re fine. We used to be married but he couldn’t handle me; he wanted to put me on Prozac and now he’s madly in love with his laptop.”

In 2013, Spike Jonze’s script for ‘Her’ won the Academy Award for Best Screenplay (it was nominated for four others including Best Picture). A year later, Alex Garland’s script for ‘Ex Machina’ would be nominated for the same award while arguably presenting the same conclusion: we are a species that loves openly and to a fault. 

Thursday, February 24, 2022

Robot performs first laparoscopic surgery without human help (and outperforms human doctors)

Johns Hopkins University. (2022, January 26). 
ScienceDaily. Retrieved January 28, 2022

A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human -- a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.

"Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure," said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins' Whiting School of Engineering.

The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.

Working with collaborators at the Children's National Hospital in Washington, D.C. and Jin Kang, a Johns Hopkins professor of electrical and computer engineering, Krieger helped create the robot, a vision-guided system designed specifically to suture soft tissue. Their current iteration advances a 2016 model that repaired a pig's intestines accurately, but required a large incision to access the intestine and more guidance from humans.

The team equipped the STAR with new features for enhanced autonomy and improved surgical precision, including specialized suturing tools and state-of-the-art imaging systems that provide more accurate visualizations of the surgical field.

Soft-tissue surgery is especially hard for robots because of its unpredictability, forcing them to be able to adapt quickly to handle unexpected obstacles, Krieger said. The STAR has a novel control system that can adjust the surgical plan in real time, just as a human surgeon would.
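Purely as an illustrative sketch of a "sense, re-plan, act" loop of the kind described above (the article does not detail the STAR's control system at this level, and every function and number below is a hypothetical placeholder):

```python
# Illustrative-only sketch of a closed-loop "sense, re-plan, act" cycle.
# None of this reflects the actual STAR implementation; the functions and
# values below are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class SuturePlan:
    targets: list  # planned (x, y, z) suture points in mm


def sense_tissue_displacement() -> tuple:
    """Placeholder for the vision system's estimate of tissue motion (mm)."""
    return (0.4, -0.2, 0.1)


def replan(plan: SuturePlan, displacement: tuple, threshold: float = 0.3) -> SuturePlan:
    """Shift planned suture points whenever displacement exceeds a threshold."""
    if max(abs(d) for d in displacement) < threshold:
        return plan  # tissue essentially static: keep the current plan
    shifted = [tuple(p + d for p, d in zip(point, displacement))
               for point in plan.targets]
    return SuturePlan(shifted)


plan = SuturePlan(targets=[(10.0, 5.0, 2.0), (12.0, 5.0, 2.0)])
plan = replan(plan, sense_tissue_displacement())
print(plan.targets)
```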

Saturday, May 1, 2021

Could you hate a robot? And does it matter if you could?

Ryland, H. 
AI & Soc (2021).
https://doi.org/10.1007/s00146-021-01173-5

Abstract

This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in this article make an important original contribution to the robo-philosophy literature, and particularly the literature on human–robot relationships (which typically only consider positive relationship types, e.g., love, friendship, etc.). Additionally, as explained at the end of the article, my discussions of robot hate could also have notable consequences for the emerging robot rights movement. Specifically, I argue that understanding human–robot relationships characterised by hate could actually help theorists argue for the rights of robots.

Conclusion

This article has argued for two claims. First, humans could be in relationships characterised by hate with morally considerable robots. Second, it matters that humans could hate these robots. This is at least partly because such hateful relations could have long-term negative effects for the robot (e.g., by encouraging bad will towards the robots). The article ended by explaining how discussions of human–robot relationships characterised by hate are connected to discussions of robot rights. I argued that the conditions for a robot being an object of hate and for having rights are the same—being sufficiently person-like. I then suggested how my discussions of human–robot relationships characterised by hate could be used to support, rather than undermine, the robot rights movement.

Sunday, August 16, 2020

Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective

Zhu, Q., Williams, T., Jackson, B. et al.
Sci Eng Ethics (2020).
https://doi.org/10.1007/s11948-020-00246-w

Abstract

Empirical studies have suggested that language-capable robots have the persuasive power to shape the shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. By drawing on Confucian ethics, we argue that a robot’s ability to employ blame-laden moral rebukes to respond to unethical human requests is crucial for cultivating a flourishing “moral ecology” of human–robot interaction. Such positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered as one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of the Confucian theories for designing socially integrated and morally competent robots.

Monday, February 10, 2020

Can Robots Reduce Racism And Sexism?

Kim Elsesser
Forbes.com
Originally posted 16 Jan 20

Robots are becoming a regular part of our workplaces, serving as supermarket cashiers and building our cars. More recently they’ve been tackling even more complicated tasks like driving and sensing emotions. Estimates suggest that about half of the work humans currently do will be automated by 2055, but there may be a silver lining to the loss of human jobs to robots. New research indicates that robots at work can help reduce prejudice and discrimination.

Apparently, just thinking about robot workers leads people to think they have more in common with other human groups, according to research published in American Psychologist. When the study participants’ awareness of robot workers increased, they became more accepting of immigrants and people of a different religion, race, and sexual orientation.

Basically, the robots reduced prejudice by highlighting the existence of a group that is not human. Study authors Joshua Conrad Jackson, Noah Castelo, and Kurt Gray summarized: “The large differences between humans and robots may make the differences between humans seem smaller than they normally appear. Christians and Muslims have different beliefs, but at least both are made from flesh and blood; Latinos and Asians may eat different foods, but at least they eat.” Instead of categorizing people by race or religion, thinking about robots made participants more likely to think of everyone as belonging to one human category.


Sunday, December 29, 2019

It Loves Me, It Loves Me Not: Is It Morally Problematic to Design Sex Robots that Appear to Love Their Owners?

Sven Nyholm and Lily Eva Frank
Techné: Research in Philosophy and Technology
DOI: 10.5840/techne2019122110

Abstract

Drawing on insights from robotics, psychology, and human-computer interaction, developers of sex robots are currently aiming to create emotional bonds of attachment and even love between human users and their products. This is done by creating robots that can exhibit a range of facial expressions, that are made with human-like artificial skin, and that possess a rich vocabulary with many conversational possibilities. In light of the human tendency to anthropomorphize artifacts, we can expect that designers will have some success and that this will lead to the attribution of mental states to the robot that the robot does not actually have, as well as the inducement of significant emotional responses in the user. This raises the question of whether it might be ethically problematic to try to develop robots that appear to love their users. We discuss three possible ethical concerns about this aim: first, that designers may be taking advantage of users’ emotional vulnerability; second, that users may be deceived; and, third, that relationships with robots may block off the possibility of more meaningful relationships with other humans. We argue that developers should attend to the ethical constraints suggested by these concerns in their development of increasingly humanoid sex robots. We discuss two different ways in which they might do so.

Saturday, November 9, 2019

Debunking (the) Retribution (Gap)

Steven R. Kraaijeveld
Science and Engineering Ethics
https://doi.org/10.1007/s11948-019-00148-6

Abstract

Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive intuitions that feed into retribution gaps are best understood as deontological intuitions. I apply a debunking argument for deontological intuitions in order to show that retributive intuitions cannot be used to justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. I draw a parallel from recent work on implicit biases to make a case for taking moral responsibility for retributive intuitions. In the same way that we can exert some form of control over our unwanted implicit biases, we can and should do so for unjustified retributive intuitions in cases of robot harm.

Tuesday, October 29, 2019

Should we create artificial moral agents? A Critical Analysis

John Danaher
Philosophical Disquisitions
Originally published September 21, 2019

Here is an excerpt:

So what argument is being made? At first, it might look like Sharkey is arguing that moral agency depends on biology, but I think that is a bit of a red herring. What she is arguing is that moral agency depends on emotions (particularly second personal emotions such as empathy, sympathy, shame, regret, anger, resentment etc). She then adds to this the assumption that you cannot have emotions without having a biological substrate. This suggests that Sharkey is making something like the following argument:

(1) You cannot have explicit moral agency without having second personal emotions.

(2) You cannot have second personal emotions without being constituted by a living biological substrate.

(3) Robots cannot be constituted by a living biological substrate.

(4) Therefore, robots cannot have explicit moral agency.

Assuming this is a fair reconstruction of the reasoning, I have some questions about it. First, taking premises (2) and (3) as a pair, I would query whether having a biological substrate really is essential for having second personal emotions. What is the necessary connection between biology and emotionality? This smacks of biological mysterianism or dualism to me, almost a throwback to the time when biologists thought that living creatures possessed some élan vital that separated them from the inanimate world. Modern biology and biochemistry casts all that into doubt. Living creatures are — admittedly extremely complicated — evolved biochemical machines. There is no essential and unbridgeable chasm between the living and the inanimate.


Monday, September 2, 2019

The Robotic Disruption of Morality

John Danaher
Philosophical Disquisitions
Originally published August 2, 2019

Here is an excerpt:

2. The Robotic Disruption of Human Morality

From my perspective, the most interesting aspect of Tomasello’s theory is the importance he places on the second personal psychology (an idea he takes from the philosopher Stephen Darwall). In essence, what he is arguing is that all of human morality — particularly the institutional superstructure that reinforces it — is premised on how we understand those with whom we interact. It is because we see them as intentional agents, who experience and understand the world in much the same way as we do, that we start to sympathise with them and develop complex beliefs about what we owe each other. This, in turn, was made possible by the fact that humans rely so much on each other to get things done.

This raises the intriguing question: what happens if we no longer rely on each other to get things done? What if our primary collaborative and cooperative partners are machines and not our fellow human beings? Will this have some disruptive impact on our moral systems?

The answer to this depends on what these machines are or, more accurately, what we perceive them to be. Do we perceive them to be intentional agents just like other human beings or are they perceived as something else — something different from what we are used to? There are several possibilities worth considering. I like to think of these possibilities as being arranged along a spectrum that classifies robots/AIs according to how autonomous or tool-like they are perceived to be.

At one extreme end of the spectrum we have the perception of robots/AIs as tools, i.e. as essentially equivalent to hammers and wheelbarrows. If we perceive them to be tools, then the disruption to human morality is minimal, perhaps non-existent. After all, if they are tools then they are not really our collaborative partners; they are just things we use. Human actors remain in control and they are still our primary collaborative partners. We can sustain our second personal morality by focusing on the tool users and not the tools.


Monday, August 26, 2019

Psychological reactions to human versus robotic job replacement

Armin Granulo, Christoph Fuchs & Stefano Puntoni
Nature.com
Originally posted August 5, 2019

Abstract

Advances in robotics and artificial intelligence are increasingly enabling organizations to replace humans with intelligent machines and algorithms. Forecasts predict that, in the coming years, these new technologies will affect millions of workers in a wide range of occupations, replacing human workers in numerous tasks, but potentially also in whole occupations. Despite the intense debate about these developments in economics, sociology and other social sciences, research has not examined how people react to the technological replacement of human labour. We begin to address this gap by examining the psychology of technological replacement. Our investigation reveals that people tend to prefer workers to be replaced by other human workers (versus robots); however, paradoxically, this preference reverses when people consider the prospect of their own job loss. We further demonstrate that this preference reversal occurs because being replaced by machines, robots or software (versus other humans) is associated with reduced self-threat. In contrast, being replaced by robots is associated with a greater perceived threat to one’s economic future. These findings suggest that technological replacement of human labour has unique psychological consequences that should be taken into account by policy measures (for example, appropriately tailoring support programmes for the unemployed).


Sunday, August 4, 2019

First Steps Towards an Ethics of Robots and Artificial Intelligence

John Tasioulas
King's College London

Abstract

This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognize that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

From the section: Ethical Questions: Frames and Levels

Difficult questions arise as to how best to integrate these three modes of regulating RAIs, and there is a serious worry about the tendency of industry-based codes of ethics to upstage democratically enacted law in this domain, especially given the considerable political clout wielded by the small number of technology companies that are driving RAI-related developments. However, this very clout creates the ever-present danger that powerful corporations may be able to shape any resulting laws in ways favourable to their interests rather than the common good (Nemitz 2018, 7). Part of the difficulty here stems from the fact that three levels of ethical regulation inter-relate in complex ways. For example, it may be that there are strong moral reasons against adults creating or using a robot as a sexual partner (third level). But, out of respect for their individual autonomy, they should be legally free to do so (first level). However, there may also be good reasons to cultivate a social morality that generally frowns upon such activities (second level), so that the sale and public display of sex robots is legally constrained in various ways (through zoning laws, taxation, age and advertising restrictions, etc.) akin to the legal restrictions on cigarettes or gambling (first level, again). Given this complexity, there is no a priori assurance of a single best way of integrating the three levels of regulation, although there will nonetheless be an imperative to converge on some universal standards at the first and second levels where the matter being addressed demands a uniform solution across different national jurisdictional boundaries.


Saturday, August 3, 2019

When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Eddy Nahmias, Corey Allen, & Bradley Loveall
Georgia State University

From the Conclusion:

If future research bolsters our initial findings, then it would appear that when people consider whether agents are free and responsible, they are considering whether the agents have capacities to feel emotions more than whether they have conscious sensations or even capacities to deliberate or reason. It’s difficult to know whether people assume that phenomenal consciousness is required for or enhances capacities to deliberate and reason. And of course, we do not deny that cognitive capacities for self-reflection, imagination, and reasoning are crucial for free and responsible agency (see, e.g., Nahmias 2018). For instance, once considering agents that are assumed to have phenomenal consciousness, such as humans, it is likely that people’s attributions of free will and responsibility decrease in response to information that an agent has severely diminished reasoning capacities. But people seem to have intuitions that support the idea that an essential condition for free will is the capacity to experience conscious emotions.  And we find it plausible that these intuitions indicate that people take it to be essential to being a free agent that one can feel the emotions involved in reactive attitudes and in genuinely caring about one’s choices and their outcomes.

(cut)

Perhaps, fiction points us towards the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions.  No matter how intelligent or complex their behavior, the robots do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own or others’ deaths, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation about how humans treat them, or our feeling such attitudes towards them, for instance when they harm humans.


Monday, July 1, 2019

How do you teach a machine right from wrong? Addressing the morality within Artificial Intelligence

Joseph Brean
The Kingston Whig Standard
Originally published May 30, 2019

Here is an excerpt:

AI “will touch or transform every sector and industry in Canada,” the government of Canada said in a news release in mid-May, as it named 15 experts to a new advisory council on artificial intelligence, focused on ethical concerns. Their goal will be to “increase trust and accountability in AI while protecting our democratic values, processes and institutions,” and to ensure Canada has a “human-centric approach to AI, grounded in human rights, transparency and openness.”

It is a curious project, helping computers be more accountable and trustworthy. But here we are. Artificial intelligence has disrupted the basic moral question of how to assign responsibility after decisions are made, according to David Gunkel, a philosopher of robotics and ethics at Northern Illinois University. He calls this the “responsibility gap” of artificial intelligence.

“Who is able to answer for something going right or wrong?” Gunkel said. The answer, increasingly, is no one.

It is a familiar problem that is finding new expressions. One example was the 2008 financial crisis, which reflected the disastrous scope of automated decisions. Gunkel also points to the success of Google’s AlphaGo, a computer program that has beaten the world’s best players at the famously complex board game Go. Go has too many possible moves for a computer to calculate and evaluate them all, so the program uses a strategy of “deep learning” to reinforce promising moves, thereby approximating human intuition. So when it won against the world’s top players, such as top-ranked Ke Jie in 2017, there was confusion about who deserved the credit. Even the programmers could not account for the victory. They had not taught AlphaGo to play Go. They had taught it to learn Go, which it did all by itself.


Friday, May 24, 2019

Holding Robots Responsible: The Elements of Machine Morality

Y. E. Bigman, A. Waytz, R. Alterovitz, and K. Gray
Trends in Cognitive Sciences

Abstract


As robots become more autonomous, people will see them as more responsible for wrongdoing. Moral psychology suggests that judgments of robot responsibility will hinge on perceived situational awareness, intentionality, and free will—plus anthropomorphism and the robot’s capacity for harm. We also consider questions of robot rights and moral decision-making.

Here is an excerpt:

Philosophy, law, and modern cognitive science all reveal that judgments of human moral responsibility hinge on autonomy. This explains why children, who seem to have less autonomy than adults, are held less responsible for wrongdoing. Autonomy is also likely crucial in judgments of robot moral responsibility. The reason people ponder and debate the ethical implications of drones and self-driving cars (but not tractors or blenders) is because these machines can act autonomously.

Admittedly, today’s robots have limited autonomy, but it is an expressed goal of roboticists to develop fully autonomous robots—machine systems that can act without human input. As robots become more autonomous, their potential for moral responsibility will only grow. Even as roboticists create robots with more “objective” autonomy, we note that “subjective” autonomy may be more important: work in cognitive science suggests that autonomy and moral responsibility are more matters of perception than objective truths.


Tuesday, April 30, 2019

Should animals, plants, and robots have the same rights as you?

Sigal Samuel
www.vox.com
Originally posted April 4, 2019

Here is an excerpt:

The moral circle is a fundamental concept among philosophers, psychologists, activists, and others who think seriously about what motivates people to do good. It was introduced by historian William Lecky in the 1860s and popularized by philosopher Peter Singer in the 1980s.

Now it’s cropping up more often in activist circles as new social movements use it to make the case for granting rights to more and more entities. Animals. Nature. Robots. Should they all get rights similar to the ones you enjoy? For example, you have the right not to be unjustly imprisoned (liberty) and the right not to be experimented on (bodily integrity). Maybe animals should too.

If you’re tempted to dismiss that notion as absurd, ask yourself: How do you decide whether an entity deserves rights?

Many people think that sentience, the ability to feel sensations like pain and pleasure, is the deciding factor. If that’s the case, what degree of sentience is required to make the cut? Maybe you think we should secure legal rights for chimpanzees and elephants — as the Nonhuman Rights Project is aiming to do — but not for, say, shrimp.

Some people think sentience is the wrong litmus test; they argue we should include anything that’s alive or that supports living things. Maybe you think we should secure rights for natural ecosystems, as the Community Environmental Legal Defense Fund is doing. Lake Erie won legal personhood status in February, and recent years have seen rights granted to rivers and forests in New Zealand, India, and Colombia.


Saturday, April 27, 2019

When Would a Robot Have Free Will?

Eddy Nahmias
The NeuroEthics Blog
Originally posted April 1, 2019

Here are two excerpts:

Joshua Shepherd (2015) had found evidence that people judge humanoid robots that behave like humans and are described as conscious to be free and responsible more than robots that carry out these behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described as either having or as lacking: conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than attributions of conscious sensory experiences or intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute to them free will.

(cut)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions.


Monday, March 25, 2019

U.S. companies put record number of robots to work in 2018

Reuters
Originally published February 28, 2019


U.S. companies installed more robots last year than ever before, as cheaper and more flexible machines put them within reach of businesses of all sizes and in more corners of the economy beyond their traditional foothold in car plants.

Shipments hit 28,478, nearly 16 percent more than in 2017, according to data seen by Reuters that was set for release on Thursday by the Association for Advancing Automation, an industry group based in Ann Arbor, Michigan.

Shipments increased in every sector the group tracks, except automotive, where carmakers cut back after finishing a major round of tooling up for new truck models.


Sunday, September 9, 2018

People Are Averse to Machines Making Moral Decisions

Yochanan E. Bigman and Kurt Gray
In press, Cognition

Abstract

Do people want autonomous machines making moral decisions? Nine studies suggest that the answer is ‘no’—in part because machines lack a complete mind. Studies 1-6 find that people are averse to machines making morally-relevant driving, legal, medical, and military decisions, and that this aversion is mediated by the perception that machines can neither fully think nor feel. Studies 5-6 find that this aversion exists even when moral decisions have positive outcomes. Studies 7-9 briefly investigate three potential routes to increasing the acceptability of machine moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’ perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9). Although some of these routes show promise, the aversion to machine moral decision-making is difficult to eliminate. This aversion may prove challenging for the integration of autonomous technology in moral domains including medicine, the law, the military, and self-driving vehicles.
