Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, November 4, 2019

Ethical Algorithms: Promise, Pitfalls and a Path Forward

Jay Van Bavel, Tessa West, Enrico Bertini, and Julia Stoyanovich
PsyArXiv Preprints
Originally posted October 21, 2019

Abstract

Fairness in machine-assisted decision making is critical to consider, since a lack of fairness can harm individuals or groups, erode trust in institutions and systems, and reinforce structural discrimination. To avoid making ethical mistakes, or amplifying them, it is important to ensure that the algorithms we develop are fair and promote trust. We argue that the marriage of techniques from behavioral science and computer science is essential to develop algorithms that make ethical decisions and ensure the welfare of society. Specifically, we focus on the role of procedural justice, moral cognition, and social identity in promoting trust in algorithms and offer a road map for future research on the topic.

--

The increasing role of machine-learning and algorithms in decision making has revolutionized areas ranging from the media to medicine to education to industry. As the recent One Hundred Year Study on Artificial Intelligence (Stone et al., 2016) reported: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Therefore, the effective development and widespread adoption of algorithms will hinge not only on the sophistication of engineers and computer scientists, but also on the expertise of behavioural scientists.

These algorithms hold enormous promise for solving complex problems, increasing efficiency, reducing bias, and even making decision-making transparent. However, the last few decades of behavioral science have established that humans hold a number of biases and shortcomings that impact virtually every sphere of human life (Banaji & Greenwald, 2013) and discrimination can become entrenched, amplified, or even obscured when decisions are implemented by algorithms (Kleinberg, Ludwig, Mullainathan, & Sunstein, 2018). While there has been a growing awareness that programmers and organizations should pay greater attention to discrimination and other ethical considerations (Dignum, 2018), very little behavioral research has directly examined these issues. In this paper, we describe how behavioural science will play a critical role in the development of ethical algorithms and outline a roadmap for behavioural scientists and computer scientists to ensure that these algorithms are as ethical as possible.
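To make the excerpt's point concrete, here is a minimal, purely illustrative sketch (not from the paper) of how an algorithm's decisions might be audited for the kind of group-level disparity the authors warn can become entrenched or obscured. The group labels, the hypothetical audit data, and the 0.8 cutoff (the common "four-fifths" rule of thumb) are all assumptions for the example, not anything the authors propose.

```python
# Illustrative sketch: compare selection rates across groups in an
# algorithm's output and flag large disparities. All data here is made up.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected is True/False."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit data: (group label, algorithm's yes/no decision)
    audit = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 30 + [("B", False)] * 70
    rates = selection_rates(audit)
    print(rates)                                   # {'A': 0.6, 'B': 0.3}
    print(round(disparate_impact_ratio(rates), 2)) # 0.5 -- below the 0.8 rule of thumb
```

A check like this only surfaces one narrow, statistical notion of unfairness; the paper's argument is precisely that behavioral science is needed alongside such technical measures to address procedural justice, moral cognition, and trust.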

The paper is here.