Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Autonomous Systems.

Wednesday, December 1, 2021

‘Yeah, we’re spooked’: AI starting to have big real-world impact

Nicola K. Davis
The Guardian
Originally posted October 29, 2021

Here is an excerpt:

One concern is that a machine would not need to be more intelligent than humans in all things to pose a serious risk. “It’s something that’s unfolding now,” he said. “If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

The upshot, he said, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable when it comes to what they choose to engage with, boosting click-based revenue.

Have AI researchers become spooked by their own success? “Yeah, I think we are increasingly spooked,” Russell said.

“It reminds me a little bit of what happened in physics where the physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms,” he said, noting that the experts always stressed the idea was theoretical. “And then it happened and they weren’t ready for it.”

The use of AI in military applications – such as small anti-personnel weapons – is of particular concern, he said. “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck and you could open the back and off they go and wipe out a whole city,” said Russell.

Russell believes the future for AI lies in developing machines that know the true objective is uncertain, as are our preferences, meaning they must check in with humans – rather like a butler – on any decision. But the idea is complex, not least because different people have different – and sometimes conflicting – preferences, and those preferences are not fixed.

Russell called for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training of researchers to ensure AI is not susceptible to problems such as racial bias. He said EU legislation that would ban impersonation of humans by machines should be adopted around the world.

Saturday, May 15, 2021

Moral zombies: why algorithms are not moral agents

Véliz, C.
AI & Soc (2021). 

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.
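Véliz's point about weighted 'values' is easy to make concrete. A minimal sketch (the value names and weights below are invented for illustration):

```python
# A sketch of how 'values' look inside an algorithm: labeled numbers,
# prioritised by weight, with no experience behind any of them.
values = [
    ("avoid harm", 0.9),
    ("respect privacy", 0.7),
    ("maximise engagement", 0.4),
]

# 'Prioritising' a value is nothing more than sorting by its weight.
ranked = sorted(values, key=lambda item: item[1], reverse=True)

for name, weight in ranked:
    print(f"{name}: {weight}")
```

To the program, "avoid harm" is just a string paired with a float; nothing in the sorting step involves understanding what harm is.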

Conclusion

This paper has argued that moral zombies—creatures that behave like moral agents but lack sentience—are incoherent as moral agents. Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents. What I have dubbed ‘moral zombies’ are relevant because they are similar to algorithms in that they make moral decisions as human beings would—determining who gets which benefits and penalties—without having any concomitant sentience.

There might come a time when AI becomes so sophisticated that robots might possess desires and values of their own. It will not, however, be on account of their computational prowess, but on account of their sentience, which may in turn require some kind of embodiment. At present, we are far from creating sentient algorithms.

When algorithms cause moral havoc, as they often do, we must look to the human beings who designed, programmed, commissioned, implemented, and were supposed to supervise them to assign the appropriate blame. For all their complexity and flair, algorithms are nothing but tools, and moral agents are fully responsible for the tools they create and use.

Thursday, March 12, 2020

Artificial Intelligence in Health Care

M. Matheny, D. Whicher, & S. Israni
JAMA. 2020;323(6):509-510.
doi:10.1001/jama.2019.21579

The promise of artificial intelligence (AI) in health care offers substantial opportunities to improve patient and clinical team outcomes, reduce costs, and influence population health. Current data generation greatly exceeds human cognitive capacity to effectively manage information, and AI is likely to have an important and complementary role to human cognition to support delivery of personalized health care.  For example, recent innovations in AI have shown high levels of accuracy in imaging and signal detection tasks and are considered among the most mature tools in this domain.

However, there are challenges in realizing the potential for AI in health care. Disconnects between reality and expectations have led to prior precipitous declines in use of the technology, termed AI winters, and another such event is possible, especially in health care.  Today, AI has outsized market expectations and technology sector investments. Current challenges include using biased data for AI model development, applying AI outside of populations represented in the training and validation data sets, disregarding the effects of possible unintended consequences on care or the patient-clinician relationship, and limited data about actual effects on patient outcomes and cost of care.

AI in Healthcare: The Hope, The Hype, The Promise, The Peril, a publication by the National Academy of Medicine (NAM), synthesizes current knowledge and offers a reference document for the responsible development, implementation, and maintenance of AI in the clinical enterprise.  The publication outlines current and near-term AI solutions; highlights the challenges, limitations, and best practices for AI development, adoption, and maintenance; presents an overview of the legal and regulatory landscape for health care AI; urges the prioritization of equity, inclusion, and a human rights lens for this work; and outlines considerations for moving forward. This Viewpoint shares highlights from the NAM publication.


Friday, January 3, 2020

Robotics researchers have a duty to prevent autonomous weapons

Christoffer Heckman
theconversation.com
Originally posted December 4, 2019

Here is an excerpt:

As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing. Take, for example, the ability for a computer to identify objects in an image: in 2010, the state of the art was successful only about half of the time, and it was stuck there for years. Today, though, the best algorithms as shown in published papers are now at 86% accuracy. That advance alone allows autonomous robots to understand what they are seeing through the camera lenses. It also shows the rapid pace of progress over the past decade due to developments in AI.

This kind of improvement is a true milestone from a technical perspective. Whereas in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be rapidly and accurately parsed by a computer program.

But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin the decisions related to privacy and security have been fundamentally altered. For example, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.


Wednesday, September 11, 2019

How The Software Industry Must Marry Ethics With Artificial Intelligence

Christian Pedersen
Forbes.com
Originally posted July 15, 2019

Here is an excerpt:

Companies developing software used to automate business decisions and processes, military operations or other serious work need to address explainability and human control over AI as they weave it into their products. Some have started to do this.

As AI is introduced into existing software environments, those application environments can help. Many will have established preventive and detective controls and role-based security. They can track who made what changes to processes or to the data that feeds through those processes. Some of these same pathways can be used to document changes made to goals, priorities or data given to AI.

But software vendors have a greater opportunity. They can develop products that prevent bad use of AI, but they can also use AI to actively protect and aid people, business and society. AI can be configured to solve for anything from overall equipment effectiveness or inventory reorder point to yield on capital. Why not have it solve for nonfinancial, corporate social responsibility metrics like your environmental footprint or your environmental or economic impact? Even a common management practice like using a balanced scorecard could help AI strive toward broader business goals that consider the well-being of customers, employees, suppliers and other stakeholders.
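The balanced-scorecard idea in the excerpt can be sketched as a weighted objective. A hypothetical illustration (the metric names, weights, and scores are invented, not from the article):

```python
# Sketch: a balanced-scorecard-style objective for an optimiser, combining
# financial and nonfinancial (CSR) metrics. All metrics and weights are
# hypothetical; a real system would define and normalise these carefully.
def scorecard_objective(metrics, weights):
    """Weighted sum of normalised metric scores in [0, 1]."""
    return sum(weights[name] * score for name, score in metrics.items())

weights = {
    "yield_on_capital": 0.4,
    "environmental_footprint": 0.3,  # higher score = smaller footprint
    "employee_wellbeing": 0.3,
}

candidate_plan = {
    "yield_on_capital": 0.8,
    "environmental_footprint": 0.5,
    "employee_wellbeing": 0.6,
}

print(round(scorecard_objective(candidate_plan, weights), 2))  # -> 0.65
```

The design choice is in the weights: deciding that employee well-being counts for 0.3 against yield's 0.4 is exactly the kind of values judgment the article says vendors must make explicitly.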


Monday, July 1, 2019

How do you teach a machine right from wrong? Addressing the morality within Artificial Intelligence

Joseph Brean
The Kingston Whig Standard
Originally published May 30, 2019

Here is an excerpt:

AI “will touch or transform every sector and industry in Canada,” the government of Canada said in a news release in mid-May, as it named 15 experts to a new advisory council on artificial intelligence, focused on ethical concerns. Their goal will be to “increase trust and accountability in AI while protecting our democratic values, processes and institutions,” and to ensure Canada has a “human-centric approach to AI, grounded in human rights, transparency and openness.”

It is a curious project, helping computers be more accountable and trustworthy. But here we are. Artificial intelligence has disrupted the basic moral question of how to assign responsibility after decisions are made, according to David Gunkel, a philosopher of robotics and ethics at Northern Illinois University. He calls this the “responsibility gap” of artificial intelligence.

“Who is able to answer for something going right or wrong?” Gunkel said. The answer, increasingly, is no one.

It is a familiar problem that is finding new expressions. One example was the 2008 financial crisis, which reflected the disastrous scope of automated decisions. Gunkel also points to the success of Google’s AlphaGo, a computer program that has beaten the world’s best players at the famously complex board game Go. Go has too many possible moves for a computer to calculate and evaluate them all, so the program uses a strategy of “deep learning” to reinforce promising moves, thereby approximating human intuition. So when it won against the world’s top players, such as top-ranked Ke Jie in 2017, there was confusion about who deserved the credit. Even the programmers could not account for the victory. They had not taught AlphaGo to play Go. They had taught it to learn Go, which it did all by itself.


Thursday, January 31, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

Caroline Lester
The New Yorker
Originally posted January 24, 2019

Here is an excerpt:

The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”


Sunday, September 9, 2018

People Are Averse to Machines Making Moral Decisions

Yochanan E. Bigman and Kurt Gray
In press, Cognition

Abstract

Do people want autonomous machines making moral decisions? Nine studies suggest that the answer is ‘no’—in part because machines lack a complete mind. Studies 1-6 find that people are averse to machines making morally-relevant driving, legal, medical, and military decisions, and that this aversion is mediated by the perception that machines can neither fully think nor feel. Studies 5-6 find that this aversion exists even when moral decisions have positive outcomes. Studies 7-9 briefly investigate three potential routes to increasing the acceptability of machine moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’ perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9). Although some of these routes show promise, the aversion to machine moral decision-making is difficult to eliminate. This aversion may prove challenging for the integration of autonomous technology in moral domains including medicine, the law, the military, and self-driving vehicles.


Thursday, May 17, 2018

Ethics must be at heart of Artificial Intelligence technology

The Irish Times
Originally posted April 16, 2018

Artificial Intelligence (AI) must never be given autonomous power to hurt, destroy or deceive humans, a parliamentary report has said.

Ethics need to be put at the centre of the development of the emerging technology, according to the House of Lords Artificial Intelligence Committee.

With Britain poised to become a world leader in the controversial technological field, international safeguards need to be set in place, the study said.

Peers state that AI needs to be developed for the common good and that the “autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence”.


Friday, April 13, 2018

Computer Says "No": Part 1- Algorithm Bias

Jasmine Leonard
www.thersa.org
Originally published March 14, 2018

From the court room to the workplace, important decisions are increasingly being made by so-called "automated decision systems". Critics claim that these decisions are less scrutable than those made by humans alone, but is this really the case? In the first of a three-part series, Jasmine Leonard considers the issue of algorithmic bias and how it might be avoided.

Recent advances in AI have a lot of people worried about the impact of automation.  One automatable task that’s received a lot of attention of late is decision-making.  So-called “automated decision systems” are already being used to decide whether or not individuals are given jobs, loans or even bail.  But there’s a lack of understanding about how these systems work, and as a result, a lot of unwarranted concerns.  In this three-part series I attempt to allay three of the most widely discussed fears surrounding automated decision systems: that they’re prone to bias, impossible to explain, and that they diminish accountability.

Before we begin, it’s important to be clear just what we’re talking about, as the term “automated decision” is incredibly misleading.  It suggests that a computer is making a decision, when in reality this is rarely the case.  What actually happens in most examples of “automated decisions” is that a human makes a decision based on information generated by a computer.  In the case of AI systems, the information generated is typically a prediction about the likelihood of something happening; for instance, the likelihood that a defendant will reoffend, or the likelihood that an individual will default on a loan.  A human will then use this prediction to make a decision about whether or not to grant a defendant bail or give an individual a credit card.  When described like this, it seems somewhat absurd to say that these systems are making decisions.  I therefore suggest that we call them what they actually are: prediction engines.
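Leonard's "prediction engine" framing can be sketched in a few lines: the computer produces a likelihood, and a human policy turns it into the decision. A toy illustration (the scoring weights and field names are invented, not a real credit model):

```python
# Sketch of the 'prediction engine' framing: the computer outputs a
# likelihood; a separate human policy turns it into a decision.
# The model here is a stand-in (hypothetical weights, not a trained system).
def default_likelihood(applicant):
    """Toy score in [0, 1]: the 'prediction' half of the process."""
    score = (0.3 * applicant["missed_payments"] / 10
             + 0.7 * applicant["debt_ratio"])
    return min(max(score, 0.0), 1.0)

def human_decision(likelihood, threshold=0.5):
    """The 'decision' half: a person chooses the threshold and owns the call."""
    return "decline" if likelihood > threshold else "approve"

applicant = {"missed_payments": 2, "debt_ratio": 0.4}
p = default_likelihood(applicant)  # 0.3 * 0.2 + 0.7 * 0.4 = 0.34
print(human_decision(p))           # threshold 0.5 -> "approve"
```

Splitting the code this way mirrors Leonard's point: the model never decides anything; the threshold, and therefore the decision, belongs to a person.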

Thursday, March 22, 2018

The Ethical Design of Intelligent Robots

Sunidhi Ramesh
The Neuroethics Blog
Originally published February 27, 2018

Here is an excerpt:

In a 2016 study, a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.

The investigators proceeded to label this tendency as a concerning and alarming display of overtrust of robots—an overtrust that applied even to robots that showed indications of not being trustworthy.

Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.

The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.

“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.

Friday, January 19, 2018

Why banning autonomous killer robots wouldn’t solve anything

Susanne Burri and Michael Robillard
aeon.com
Originally published December 19, 2017

Here is an excerpt:

For another thing, it is naive to assume that we can enjoy the benefits of the recent advances in artificial intelligence (AI) without being exposed to at least some downsides as well. Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.

To put the point more generally, AI technology is tremendously useful, and it already permeates our lives in ways we don’t always notice, and aren’t always able to comprehend fully. Given its pervasive presence, it is shortsighted to think that the technology’s abuse can be prevented if only the further development of autonomous weapons is halted. In fact, it might well take the sophisticated and discriminate autonomous-weapons systems that armies around the world are currently in the process of developing if we are to effectively counter the much cruder autonomous weapons that are quite easily constructed through the reprogramming of seemingly benign AI technology such as the self-driving car.


Thursday, January 11, 2018

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2.
IEEE, 2017.

Introduction

As the use and impact of autonomous and intelligent systems (A/IS) become pervasive, we need to establish societal and policy guidelines in order for such systems to remain human-centric, serving humanity’s values and ethical principles. These systems have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between people and technology that is needed for its fruitful, pervasive use in our daily lives.

To be able to contribute in a positive, non-dogmatic way, we, the techno-scientific communities, need to enhance our self-reflection; we need to have an open and honest debate around our imaginary, our sets of explicit or implicit values, our institutions, symbols and representations.

Eudaimonia, as elucidated by Aristotle, is a practice that defines human well-being as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin with conscious contemplation, where ethical considerations help us define how we wish to live.

Whether our ethical practices are Western (Aristotelian, Kantian), Eastern (Shinto, Confucian), African (Ubuntu), or from a different tradition, by creating autonomous and intelligent systems that explicitly honor inalienable human rights and the beneficial values of their users, we can prioritize the increase of human well-being as our metric for progress in the algorithmic age. Measuring and honoring the potential of holistic economic prosperity should become more important than pursuing one-dimensional goals like productivity increase or GDP growth.


Friday, December 8, 2017

Autonomous future could question legal ethics

Becky Raspe
Cleveland Jewish News
Originally published November 21, 2017

Here is an excerpt:

Northman said he finds the ethical implications of an autonomous future interesting, but completely contradictory to what he learned in law school in the 1990s.

“People were expected to be responsible for their activities,” he said. “And as long as it was within their means to stop something or, more tellingly, anticipate a problem before it occurs, they have an obligation to do so. When you blend software with this level of autonomy over the top of that, we are left with some difficult boundaries as we try to assess where a driver’s responsibility stops and the software programmer’s continues.”

When considering the ethics surrounding autonomous living, Paris referenced the “trolley problem,” which goes like this: an automated vehicle is operating on an open road, and ahead there are five people in the road and one person off to the side. The question, Paris said, is whether the vehicle should continue on and hit the five people, or swerve and hit just the one.

“When humans are driving vehicles, they are the moral decision makers that make those choices behind the wheel,” she said. “Can engineers program automated vehicles to replace that moral thought with an algorithm? Will they prioritize the five lives or that one person? There are a lot of questions and not too many solutions at this point. With these ethical dilemmas, you have to be careful about what is being implemented.”
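Paris's question can be made concrete with a deliberately naive rule. The sketch below is not a proposal; it shows that even the simplest "algorithmic" answer, counting lives and ignoring personal features, is itself a moral choice made by an engineer, not by the vehicle:

```python
# A deliberately naive 'trolley' rule: minimise the number of people struck.
# Writing it down exposes the problem rather than solving it: the decision
# to count lives (and to ignore personal features, as the German guidelines
# require) is a moral judgment baked in by the programmer.
def choose_path(people_ahead, people_aside):
    return "swerve" if people_aside < people_ahead else "continue"

print(choose_path(people_ahead=5, people_aside=1))  # -> "swerve"
```

Every alternative rule, including refusing to swerve at all, encodes a different ethical stance, which is precisely why Paris says there are many questions and few solutions.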


Friday, September 15, 2017

Robots and morality

The Big Read (a podcast)
The Financial Times
Originally posted August 2017

Now that our mechanical creations can act independently, what happens when AI goes wrong? Where does moral, ethical and legal responsibility for robots lie — with the manufacturers, the programmers, the users or the robots themselves, asks John Thornhill. And who owns their rights?


Monday, September 4, 2017

Teaching A.I. Systems to Behave Themselves

Cade Metz
The New York Times
Originally published August 13, 2017

Here is an excerpt:

Many specialists in the A.I. field believe a technique called reinforcement learning — a way for machines to learn specific tasks through extreme trial and error — could be a primary path to artificial intelligence. Researchers specify a particular reward the machine should strive for, and as it navigates a task at random, the machine keeps close track of what brings the reward and what doesn’t. When OpenAI trained its bot to play Coast Runners, the reward was more points.
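The reinforcement-learning loop Metz describes can be sketched as a toy two-action bandit: the designer specifies the reward, and the agent tracks by trial and error which action earns it. (A simplification for illustration, not OpenAI's actual setup.)

```python
import random

# Minimal sketch of reinforcement learning: specify a reward, let the
# agent learn by trial and error which action brings it.
random.seed(0)

true_reward = {"a": 0.2, "b": 0.8}   # hidden from the agent
estimates = {"a": 0.0, "b": 0.0}     # the agent's running reward estimates
counts = {"a": 0, "b": 0}

for step in range(1000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Incremental average: keep close track of what brings the reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(estimates, key=estimates.get)
print(best)  # the agent converges on the higher-reward action
```

The safety concern in the article lives in the `true_reward` line: the machine optimises exactly the reward it is given, which is why a bot rewarded for points, rather than for finishing the race, learns to chase points.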

This video game training has real-world implications.

If a machine can learn to navigate a racing game like Grand Theft Auto, researchers believe, it can learn to drive a real car. If it can learn to use a web browser and other common software apps, it can learn to understand natural language and maybe even carry on a conversation. At places like Google and the University of California, Berkeley, robots have already used the technique to learn simple tasks like picking things up or opening a door.

All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure systems don’t stray from the task at hand.

Together with others at the London-based DeepMind, a lab owned by Google, the two OpenAI researchers recently published some of their research in this area. Spanning two of the world’s top A.I. labs — and two that hadn’t really worked together in the past — these algorithms are considered a notable step forward in A.I. safety research.
