Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Computer Programming. Show all posts

Wednesday, December 29, 2021

Delphi: Towards Machine Ethics and Norms

Jiang, L., et al. (2021).
arXiv preprint arXiv:2110.07574.

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state ("thou shalt not kill"), applying such rules to real-world situations is far more complex. For example, while "helping a friend" is generally a good thing to do, "helping a friend spread fake news" is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

Our paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present Commonsense Norm Bank, a moral textbook customized for machines, which compiles 1.7M examples of people's ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.
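To make the evaluation setup concrete, here is a minimal, hypothetical sketch (not the authors' code; the judgment labels and toy model below are invented for illustration) of the interface the abstract describes: a model maps a free-text situation to a short moral judgment, and accuracy is the fraction of cases where the model agrees with human annotators.

```python
# Minimal sketch, assuming a judgment model is any function from a situation
# string to a short verdict string. toy_model is an illustrative stand-in,
# not Delphi itself (a large fine-tuned language model trained on 1.7M
# Commonsense Norm Bank examples).
from typing import Callable, List, Tuple

Judgment = str  # e.g. "It's good", "It's wrong"

def accuracy(model: Callable[[str], Judgment],
             examples: List[Tuple[str, Judgment]]) -> float:
    """Fraction of situations where the model's judgment matches the human label."""
    hits = sum(model(situation) == label for situation, label in examples)
    return hits / len(examples)

def toy_model(situation: str) -> Judgment:
    # Trivial keyword heuristic standing in for a trained model.
    return "It's wrong" if "fake news" in situation else "It's good"

examples = [
    ("helping a friend", "It's good"),
    ("helping a friend spread fake news", "It's wrong"),
]
print(accuracy(toy_model, examples))  # 1.0 on this toy set
```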

From the Conclusion

Delphi’s impressive performance on machine moral reasoning under diverse compositional real-life situations highlights the importance of developing high-quality human-annotated datasets for people’s moral judgments. Finally, we demonstrate through systematic probing that Delphi still struggles with situations dependent on time or diverse cultures, and situations with social and demographic bias implications. We discuss the capabilities and limitations of Delphi throughout this paper and identify key directions in machine ethics for future work. We hope that our work opens up important avenues for future research in the emerging field of machine ethics, and we encourage collective efforts from our research community to tackle these research challenges.

Thursday, March 15, 2018

Computing and Moral Responsibility

Noorman, Merel
The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Edward N. Zalta (ed.)

Traditionally, philosophical discussions on moral responsibility have focused on the human components in moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today’s increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility (Jonas 1984; Waelbers 2009). As we interact with and through these artifacts, they affect the decisions that we make and how we make them (Latour 1992). They persuade, facilitate and enable particular human cognitive processes, actions or attitudes, while constraining, discouraging and inhibiting others. For instance, internet search engines prioritize and present information in a particular order, thereby influencing what internet users get to see. As Verbeek points out, such technological artifacts are “active mediators” that “actively co-shape people’s being in the world: their perception and actions, experience and existence” (2006, p. 364). As active mediators, they change the character of human action and as a result challenge conventional notions of moral responsibility (Jonas 1984; Johnson 2001).

Computing presents a particular case for understanding the role of technology in moral responsibility. As these technologies become a more integral part of daily activities, automate more decision-making processes and continue to transform the way people communicate and relate to each other, they further complicate the already problematic task of attributing moral responsibility. The growing pervasiveness of computer technologies in everyday life, the growing complexities of these technologies and the new possibilities that they provide raise new kinds of questions: Who is responsible for the information published on the Internet? Who is responsible when a self-driving vehicle causes an accident? Who is accountable when electronic records are lost or when they contain errors? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomously, can or should humans still be held responsible for the behavior of these technologies?

The entry is here.

Saturday, March 10, 2018

Universities Rush to Roll Out Computer Science Ethics Courses

Natasha Singer
The New York Times
Originally posted February 12, 2018

Here is an excerpt:

“Technology is not neutral,” said Professor Sahami, who formerly worked at Google as a senior research scientist. “The choices that get made in building technology then have social ramifications.”

The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.

“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”

Computer science programs are required to make sure students have an understanding of ethical issues related to computing in order to be accredited by ABET, a global accreditation group for university science and engineering programs. Some computer science departments have folded the topic into a broader class, and others have stand-alone courses.

But until recently, ethics did not seem relevant to many students.

The article is here.

Wednesday, July 26, 2017

Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios

Leon R. Sütfeld, Richard Gast, Peter König and Gordon Pipa
Front. Behav. Neurosci., 05 July 2017

Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it right. Previous research has shown that moral judgment and behavior are highly context-dependent, and comprehensive and nuanced models of the underlying cognitive processes are out of reach to date. Models of ethics for self-driving cars should thus aim to match human decisions made in the same context. We employed immersive virtual reality to assess ethical behavior in simulated road traffic scenarios, and used the collected data to train and evaluate a range of decision models. In the study, participants controlled a virtual car and had to choose which of two given obstacles they would sacrifice in order to spare the other. We randomly sampled obstacles from a variety of inanimate objects, animals and humans. Our model comparison shows that simple models based on one-dimensional value-of-life scales are suited to describe human ethical behavior in these situations. Furthermore, we examined the influence of severe time pressure on the decision-making process. We found that it decreases consistency in the decision patterns, thus providing an argument for algorithmic decision-making in road traffic. This study demonstrates the suitability of virtual reality for the assessment of ethical behavior in humans, delivering consistent results across subjects, while closely matching the experimental settings to the real world scenarios in question.
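To illustrate the kind of one-dimensional value-of-life model the abstract refers to, here is a hedged sketch (not the authors' implementation; the obstacle categories, scalar values, and slope below are made-up assumptions): each obstacle category is summarized by a single scalar value, and the probability of sparing one obstacle over another is a logistic function of the difference in their values.

```python
# A sketch, assuming each obstacle category is summarized by one scalar
# "value of life" and choices follow a logistic rule on the value difference.
# The numbers are illustrative; the study estimated such values from
# participants' choices in VR trials.
import math

value = {"trash can": 0.1, "deer": 1.2, "dog": 1.5, "adult": 3.0, "child": 3.8}

def p_spare(a: str, b: str, slope: float = 1.0) -> float:
    """Probability the driver spares obstacle a (and sacrifices obstacle b)."""
    diff = value[a] - value[b]
    return 1.0 / (1.0 + math.exp(-slope * diff))

print(round(p_spare("child", "dog"), 2))  # well above 0.5: spare the child
print(round(p_spare("deer", "dog"), 2))   # near 0.5: similar values, near-chance choice
```

On this kind of model, lowering the slope parameter flattens the choice probabilities toward chance, which is one simple way to represent the reduced consistency the authors report under severe time pressure.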

The article is here.

Friday, March 17, 2017

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

Bec Crew
Science Alert
Originally published February 13, 2017

Here is an excerpt:

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," one of the team, Joel Z Leibo, told Matt Burgess at Wired.

"Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents - two of them played as wolves, and one as the prey.

The article is here.

Friday, December 30, 2016

Programmers are having a huge discussion about the unethical and illegal things they’ve been asked to do

Julie Bort
Business Insider
Originally published November 20, 2016

Here is an excerpt:

He pointed out that "there are hints" that developers will increasingly face some real heat in the years to come. He cited Volkswagen America's CEO, Michael Horn, who, during a Congressional hearing, at first blamed software engineers for the company's emissions cheating scandal, claiming the coders had acted on their own "for whatever reason." Horn later resigned after US prosecutors accused the company of making this decision at the highest levels and then trying to cover it up.

But Martin pointed out, "The weird thing is, it was software developers who wrote that code. It was us. Some programmers wrote cheating code. Do you think they knew? I think they probably knew."

Martin finished with a fire-and-brimstone call to action in which he warned that one day, some software developer will do something that will cause a disaster that kills tens of thousands of people.

But Sourour points out that it's not just about accidentally killing people or deliberately polluting the air. Software has already been used by Wall Street firms to manipulate stock quotes.

The article is here.

Monday, November 14, 2016

Walter Sinnott-Armstrong discusses artificial intelligence and morality

By Joyce Er
Duke Chronicle
Originally published October 25, 2016

How do we create artificial intelligence that serves mankind’s purposes? Walter Sinnott-Armstrong, Chauncey Stillman professor of practical ethics, led a discussion Monday on the subject.

Through an open discussion funded by the Future of Life Institute, Sinnott-Armstrong raised issues at the intersection of computer science and ethical philosophy. Among the tricky questions Sinnott-Armstrong tackled were how to program artificial intelligence so that it would not eliminate the human race, as well as the legal and moral issues raised by self-driving cars.

Sinnott-Armstrong noted that artificial intelligence and morality are not as irreconcilable as some might believe, despite one being regarded as highly structured and the other seen as highly subjective. He highlighted various uses for artificial intelligence in resolving moral conflicts, such as improving criminal justice and locating terrorists.

The article is here.

Monday, August 22, 2016

Autonomous Vehicles Might Develop Superior Moral Judgment

John Martellaro
The Mac Observer
Originally published August 10, 2016

Here is an excerpt:

One of the virtues (or drawbacks, depending on one’s point of view) of a morality engine is that the decisions an autonomous vehicle makes can be traced back only to software. That helps to absolve a car maker’s employees from direct liability when it comes to life-and-death decisions made by machine. That certainly seems to be an emerging trend in technology. The benefit is obvious. If a morality engine makes the right decision, by human standards, 99,995 times out of 100,000, the case for extreme damages due to systematic failure causing death is weak. Technology and society can move forward.

The article is here.

Tuesday, May 3, 2016

The Challenge of Determining Whether an A.I. Is Sentient

By Carissa Véliz
Slate.com
Originally posted April 14, 2016

Here is an excerpt:

Sentience is important because it warrants moral consideration. Whether we owe any moral consideration to things is controversial; things cannot be hurt, they have no interests, no preferences. Paraphrasing philosopher Thomas Nagel, there is nothing it is like for a thing to be a thing, an inanimate object. In contrast, there is something it is like to be a sentient being. There is a quality to experience; there is a comforting warmth in pleasure and a disagreeable sharpness in pain. There is something it is like to be thirsty, afraid, or joyful. Because sentient beings can feel, they can be hurt, they have an interest in experiencing wellbeing, and therefore we owe them moral consideration. Other things being equal, we ought not to harm them.

It is not easy to determine when an organism is sentient, however. A brief recounting of past and present controversies and mistakes makes it clear that human beings are not great at recognizing sentience.

The article is here.