Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, July 10, 2018

The Artificial Intelligence Ethics Committee

Zara Stone
Forbes.com
Originally published June 11, 2018

Here is an excerpt:

Back to the ethics problem: Some sort of bias is sadly inevitable in programming. “We humans all have a bias,” said computer scientist Ehsan Hoque, who leads the Human-Computer Interaction Lab at the University of Rochester. “There’s a study where judges make more favorable decisions after a lunch break. Machines have an inherent bias (as they are built by humans) so we need to empower users in ways to make decisions.”

For instance, Walworth empowers his own choices by staying conscious of what AI algorithms show him. “I recommend you do things that are counterintuitive,” he said. “For instance, read a spectrum of news, everything from Fox to CNN and The New York Times to combat the algorithm that decides what you see.” Take the Cambridge Analytica election scandal as an example: algorithms dictated what you saw, how you saw it, and whether more of the same was shown to you, and Cambridge Analytica manipulated them to sway voters.

The move toward a consciousness of ethical AI is both a top-down and a bottom-up approach. “There’s a rising field of impact investing,” explained Walworth. “Investors and shareholders are demanding something higher than the bottom line, some accountability with the way they spend and invest money.”

The article is here.

Google to disclose ethical framework on use of AI

Richard Waters
The Financial Times
Originally published June 3, 2018

Here is an excerpt:

However, Google already uses AI in other ways that have drawn criticism, leading experts in the field and consumer activists to call on it to set far more stringent ethical guidelines that go well beyond not working with the military.

Stuart Russell, a professor of AI at the University of California, Berkeley, pointed to the company’s image search feature as an example of a widely used service that perpetuates preconceptions about the world based on the data in Google’s search index. For instance, a search for “CEOs” returns almost all white faces, he said.

“Google has a particular responsibility in this area because the output of its algorithms is so pervasive in the online world,” he said. “They have to think about the output of their algorithms as a kind of ‘speech act’ that has an effect on the world, and to work out how to make that effect beneficial.”

The information is here.

Monday, July 9, 2018

Technology and culture: Differences between the APA and ACA ethical codes

Firmin, M.W., DeWitt, K., Shell, A.L. et al.
Curr Psychol (2018). https://doi.org/10.1007/s12144-018-9874-y

Abstract

We conducted a section-by-section and line-by-line comparison of the ethical codes published by the American Psychological Association (APA) and the American Counseling Association (ACA). Overall, 144 differences exist between the two codes; here we focus on two constructs where 36 significant differences exist: technology and culture. Of this number, three differences were direct conflicts between the APA and ACA ethical codes’ expectations for technology and cultural behavior. The other 33 differences were omissions in the APA code, meaning that specific elements in the ACA code were explicitly absent from the APA code altogether. Of the 36 total differences pertaining to technology and culture in the two codes, 27 differences relate to technology, and APA does not address 25 of these 27 technology differences. Of the 36 total differences, nine relate to culture, and APA does not address eight of these issues.

The information is here.

Learning from moral failure

Matthew Cashman & Fiery Cushman
In press: Becoming Someone New: Essays on Transformative Experience, Choice, and Change

Introduction

Pedagogical environments are often designed to minimize the chance of people acting wrongly; surely this is a sensible approach. But could it ever be useful to design pedagogical environments to permit, or even encourage, moral failure? If so, what are the circumstances where moral failure can be beneficial?  What types of moral failure are helpful for learning, and by what mechanisms? We consider the possibility that moral failure can be an especially effective tool in fostering learning. We also consider the obvious costs and potential risks of allowing or fostering moral failure. We conclude by suggesting research directions that would help to establish whether, when and how moral pedagogy might be facilitated by letting students learn from moral failure.

(cut)

Conclusion

Errors are an important source of learning, and educators often exploit this fact.  Failing helps to tune our sense of balance; Newtonian mechanics sticks better when we witness the failure of our folk physics. We consider the possibility that moral failure may also prompt especially strong or distinctive forms of learning.  First, and with greatest certainty, humans are designed to learn from moral failure through the feeling of guilt.  Second, and more speculatively, humans may be designed to experience moral failures by “testing limits” in a way that ultimately fosters an adaptive moral character.  Third—and highly speculatively—there may be ways to harness learning by moral failure in pedagogical contexts. Minimally, this might occur by imagination, observational learning, or the exploitation of spontaneous wrongful acts as “teachable moments”.

The book chapter is here.

Sunday, July 8, 2018

A Son’s Race to Give His Dying Father Artificial Immortality

James Vlahos
wired.com
Originally posted July 18, 2017

Here is an excerpt:

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.

The article is here.

Yes, I saw the Black Mirror episode with a similar theme.
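
For readers curious about the mechanics, here is a minimal sketch of the corpus-driven idea described in the excerpt. The Google researchers trained a neural sequence-to-sequence model on 26 million lines of movie dialog; this toy Python stand-in skips the neural network entirely and simply retrieves the stored response whose prompt best overlaps the question. The handful of (prompt, response) pairs below is invented for illustration.

```python
# A toy, corpus-driven "Dadbot" sketch. This is NOT the neural model from the
# Google paper; it is a much simpler retrieval stand-in that answers by drawing
# on a tiny, hand-made corpus of recorded speech.

# Hypothetical (prompt, response) pairs standing in for transcribed interviews.
CORPUS = [
    ("what is the purpose of living", "To live forever."),
    ("where did you grow up", "In a small town, like everybody else I knew."),
    ("what do you remember about your father", "Mostly his laugh, and the garden."),
]

def tokens(text):
    """Lowercase a sentence and split it into a set of words."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def reply(question):
    """Return the stored response whose prompt best overlaps the question."""
    q = tokens(question)
    best = max(CORPUS, key=lambda pair: len(q & tokens(pair[0])))
    return best[1]

if __name__ == "__main__":
    print(reply("What is the purpose of living?"))  # -> "To live forever."
```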

Saturday, July 7, 2018

Making better decisions in groups

Dan Bang, Chris D. Frith
Royal Society Open Science
Published 16 August 2017.
DOI: 10.1098/rsos.170193

Abstract

We review the literature to identify common problems of decision-making in individuals and groups. We are guided by a Bayesian framework to explain the interplay between past experience and new evidence, and the problem of exploring the space of hypotheses about all the possible states that the world could be in and all the possible actions that one could take. There are strong biases, hidden from awareness, that enter into these psychological processes. While biases increase the efficiency of information processing, they often do not lead to the most appropriate action. We highlight the advantages of group decision-making in overcoming biases and searching the hypothesis space for good models of the world and good solutions to problems. Diversity of group members can facilitate these achievements, but diverse groups also face their own problems. We discuss means of managing these pitfalls and make some recommendations on how to make better group decisions.

The article is here.
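
For the technically inclined, the Bayesian interplay the abstract describes (past experience as a prior, new evidence as a likelihood, and group members pooling their evidence) can be sketched in a few lines of Python. The hypotheses and probabilities below are made up purely for illustration.

```python
# A minimal sketch of the Bayesian picture in the abstract: a prior belief
# (past experience) is combined with new evidence (a likelihood) to give a
# posterior, and two group members' independent observations can be pooled.
# The states and numbers are invented for illustration only.

def normalise(dist):
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

def update(prior, likelihood):
    """Bayes' rule: posterior is proportional to prior * likelihood."""
    return normalise({h: prior[h] * likelihood[h] for h in prior})

# Past experience: the world is probably in state A rather than state B.
prior = {"A": 0.7, "B": 0.3}

# New evidence as seen by two group members (likelihood of the data under each state).
member_1 = {"A": 0.2, "B": 0.8}
member_2 = {"A": 0.4, "B": 0.6}

# Pooling: update on each member's evidence in turn (assuming independence).
posterior = update(update(prior, member_1), member_2)
print(posterior)  # roughly {'A': 0.28, 'B': 0.72}: the pooled evidence now favours B
```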

Friday, July 6, 2018

Can we collaborate with robots or will they take our place at work?

TU/e Research Project
ethicsandtechnology.eu

Here is an excerpt:

Finding ways to collaborate with robots

In this project, the aim is to understand how robotisation in logistics can be advanced whilst maintaining workers’ sense of meaning in work and general well-being, thereby preventing or undoing resistance towards robotisation. Sven Nyholm says: “People typically find work meaningful if they work within a well-functioning team or if they view their work as serving some larger purpose beyond themselves. Could human-robot collaborations be experienced as team-work? Would it be any kind of mistake to view a robot as a colleague? The thought of having a robot as a collaborator can seem a little weird. And yes, the increasingly robotized work environment is scary, but it is exciting at the same time. The further robotisation at work could give workers new important responsibilities and skills, which can in turn strengthen the feeling of doing meaningful work”.

The information is here.

People who think their opinions are superior to others are most prone to overestimating their relevant knowledge and ignoring chances to learn more

Tom Stafford
Blog Post: Research Digest
Originally posted May 31, 2018

Here is an excerpt:

Finally and more promisingly, the researchers found some evidence that belief superiority can be dented by feedback. If participants were told that people with beliefs like theirs tended to score poorly on topic knowledge, or if they were directly told that their score on the topic knowledge quiz was low, this not only reduced their belief superiority, it also caused them to seek out the kind of challenging information they had previously neglected in the headlines task (though the evidence for this behavioural effect was mixed).

The studies all involved participants accessed via Amazon’s Mechanical Turk, allowing the researchers to work with large samples of Americans for each experiment. Their findings mirror the well-known Dunning-Kruger effect – Kruger and Dunning showed that for domains such as judgments of grammar, humour or logic, the most skilled tend to underestimate their ability, while the least skilled overestimate it. Hall and Raimi’s research extends this to the realm of political opinions (where objective assessment of correctness is not available), showing that the belief that your opinion is better than other people’s tends to be associated with overestimation of your relevant knowledge.

The article is here.

Thursday, July 5, 2018

Crispr Fans Fight for Egalitarian Access to Gene Editing

Megan Molteni
Wired.com
Originally posted June 6, 2018

Here is an excerpt:

Like any technology, the applications of gene editing tech will be shaped by the values of the societies that wield it. Which is why a conversation about equitable access to Crispr quickly becomes a conversation about redistributing some of the wealth and education that has been increasingly concentrated in smaller and smaller swaths of the population over the past three decades. Today the richest 1 percent of US families control a record-high 38.6 percent of the country’s wealth. The fear is that Crispr won’t disrupt current inequalities; it’ll just perpetuate them.

(cut)

CrisprCon excels at providing a platform to raise these kinds of big picture problems and moral quagmires. But in its second year, it was still light on solutions. The most concrete examples came from a panel of people pursuing ecotechnologies—genetic methods for changing, controlling, or even exterminating species in the wild (disclosure: I moderated the panel).

The information is here.