Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 9, 2018

Technology and culture: Differences between the APA and ACA ethical codes

Firmin, M.W., DeWitt, K., Shell, A.L. et al.
Curr Psychol (2018). https://doi.org/10.1007/s12144-018-9874-y

Abstract

We conducted a section-by-section, line-by-line comparison of the ethical codes published by the American Psychological Association (APA) and the American Counseling Association (ACA). Overall, 144 differences exist between the two codes; here we focus on two constructs where 36 significant differences exist: technology and culture. Of this number, three differences were direct conflicts between the APA and ACA ethical codes’ expectations for technology and cultural behavior. The other 33 differences were omissions in the APA code, meaning that specific elements in the ACA code were absent from the APA code altogether. Of the 36 total differences, 27 relate to technology, and the APA code does not address 25 of these 27. The remaining nine differences relate to culture, and the APA code does not address eight of these issues.

The information is here.

Learning from moral failure

Matthew Cashman & Fiery Cushman
In press: Becoming Someone New: Essays on Transformative Experience, Choice, and Change

Introduction

Pedagogical environments are often designed to minimize the chance of people acting wrongly; surely this is a sensible approach. But could it ever be useful to design pedagogical environments to permit, or even encourage, moral failure? If so, what are the circumstances where moral failure can be beneficial?  What types of moral failure are helpful for learning, and by what mechanisms? We consider the possibility that moral failure can be an especially effective tool in fostering learning. We also consider the obvious costs and potential risks of allowing or fostering moral failure. We conclude by suggesting research directions that would help to establish whether, when and how moral pedagogy might be facilitated by letting students learn from moral failure.

(cut)

Conclusion

Errors are an important source of learning, and educators often exploit this fact.  Failing helps to tune our sense of balance; Newtonian mechanics sticks better when we witness the failure of our folk physics. We consider the possibility that moral failure may also prompt especially strong or distinctive forms of learning.  First, and with greatest certainty, humans are designed to learn from moral failure through the feeling of guilt.  Second, and more speculatively, humans may be designed to experience moral failures by “testing limits” in a way that ultimately fosters an adaptive moral character.  Third—and highly speculatively—there may be ways to harness learning by moral failure in pedagogical contexts. Minimally, this might occur by imagination, observational learning, or the exploitation of spontaneous wrongful acts as “teachable moments”.

The book chapter is here.

Sunday, July 8, 2018

A Son’s Race to Give His Dying Father Artificial Immortality

James Vlahos
wired.com
Originally posted July 18, 2017

Here is an excerpt:

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.

The article is here.

Yes, I saw the Black Mirror episode with a similar theme.

Saturday, July 7, 2018

Making better decisions in groups

Dan Bang, Chris D. Frith
Royal Society Open Science
Published 16 August 2017.
DOI: 10.1098/rsos.170193

Abstract

We review the literature to identify common problems of decision-making in individuals and groups. We are guided by a Bayesian framework to explain the interplay between past experience and new evidence, and the problem of exploring the space of hypotheses about all the possible states that the world could be in and all the possible actions that one could take. There are strong biases, hidden from awareness, that enter into these psychological processes. While biases increase the efficiency of information processing, they often do not lead to the most appropriate action. We highlight the advantages of group decision-making in overcoming biases and searching the hypothesis space for good models of the world and good solutions to problems. Diversity of group members can facilitate these achievements, but diverse groups also face their own problems. We discuss means of managing these pitfalls and make some recommendations on how to make better group decisions.

The article is here.

Friday, July 6, 2018

Can we collaborate with robots or will they take our place at work?

TU/e Research Project
ethicsandtechnology.eu

Here is an excerpt:

Finding ways to collaborate with robots

In this project, the aim is to understand how robotisation in logistics can be advanced whilst maintaining workers’ sense of meaning in work and general well-being, thereby preventing or undoing resistance towards robotisation. Sven Nyholm says: “People typically find work meaningful if they work within a well-functioning team or if they view their work as serving some larger purpose beyond themselves. Could human-robot collaborations be experienced as team-work? Would it be any kind of mistake to view a robot as a colleague? The thought of having a robot as a collaborator can seem a little weird. And yes, the increasingly robotised work environment is scary, but it is exciting at the same time. Further robotisation at work could give workers important new responsibilities and skills, which can in turn strengthen the feeling of doing meaningful work”.

The information is here.

People who think their opinions are superior to others are most prone to overestimating their relevant knowledge and ignoring chances to learn more

Tom Stafford
Blog Post: Research Digest
Originally posted May 31, 2018

Here is an excerpt:

Finally and more promisingly, the researchers found some evidence that belief superiority can be dented by feedback. If participants were told that people with beliefs like theirs tended to score poorly on topic knowledge, or if they were directly told that their score on the topic knowledge quiz was low, this not only reduced their belief superiority, it also caused them to seek out the kind of challenging information they had previously neglected in the headlines task (though the evidence for this behavioural effect was mixed).

The studies all involved participants accessed via Amazon’s Mechanical Turk, allowing the researchers to work with large samples of Americans for each experiment. Their findings mirror the well-known Dunning-Kruger effect – Kruger and Dunning showed that for domains such as judgments of grammar, humour or logic, the most skilled tend to underestimate their ability, while the least skilled overestimate it. Hall and Raimi’s research extends this to the realm of political opinions (where objective assessment of correctness is not available), showing that the belief your opinion is better than other people’s tends to be associated with overestimation of your relevant knowledge.

The article is here.

Thursday, July 5, 2018

Crispr Fans Fight for Egalitarian Access to Gene Editing

Megan Molteni
Wired.com
Originally posted June 6, 2018

Here is an excerpt:

Like any technology, the applications of gene editing tech will be shaped by the values of the societies that wield it. Which is why a conversation about equitable access to Crispr quickly becomes a conversation about redistributing some of the wealth and education that has been increasingly concentrated in smaller and smaller swaths of the population over the past three decades. Today the richest 1 percent of US families control a record-high 38.6 percent of the country’s wealth. The fear is that Crispr won’t disrupt current inequalities; it’ll just perpetuate them.

(cut)

CrisprCon excels at providing a platform to raise these kinds of big picture problems and moral quagmires. But in its second year, it was still light on solutions. The most concrete examples came from a panel of people pursuing ecotechnologies—genetic methods for changing, controlling, or even exterminating species in the wild (disclosure: I moderated the panel).

The information is here.

On the role of descriptive norms and subjectivism in moral judgment

Andrew E. Monroe, Kyle D. Dillon, Steve Guglielmo, Roy F. Baumeister
Journal of Experimental Social Psychology
Volume 77, July 2018, Pages 1-10.

Abstract

How do people evaluate moral actions, by referencing objective rules or by appealing to subjective, descriptive norms of behavior? Five studies examined whether and how people incorporate subjective, descriptive norms of behavior into their moral evaluations and mental state inferences of an agent's actions. We used experimental norm manipulations (Studies 1–2, 4), cultural differences in tipping norms (Study 3), and behavioral economic games (Study 5). Across studies, people increased the magnitude of their moral judgments when an agent exceeded a descriptive norm and decreased the magnitude when an agent fell below a norm (Studies 1–4). Moreover, this differentiation was partially explained via perceptions of agents' desires (Studies 1–2); it emerged only when the agent was aware of the norm (Study 4); and it generalized to explain decisions of trust for real monetary stakes (Study 5). Together, these findings indicate that moral actions are evaluated in relation to what most other people do rather than solely in relation to morally objective rules.

Highlights

• Five studies tested the impact of descriptive norms on judgments of blame and praise.

• What is usual, not just what is objectively permissible, drives moral judgments.

• Effects replicate even when holding behavior constant and varying descriptive norms.

• Agents had to be aware of a norm for it to impact perceivers' moral judgments.

• Effects generalize to explain decisions of trust for real monetary stakes.

The research is here.

Wednesday, July 4, 2018

Curiosity and What Equality Really Means

Atul Gawande
The New Yorker
Originally published June 2, 2018

Here is an excerpt:

We’ve divided the world into us versus them—an ever-shrinking population of good people against bad ones. But it’s not a dichotomy. People can be doers of good in many circumstances. And they can be doers of bad in others. It’s true of all of us. We are not sufficiently described by the best thing we have ever done, nor are we sufficiently described by the worst thing we have ever done. We are all of it.

Regarding people as having lives of equal worth means recognizing each as having a common core of humanity. Without being open to their humanity, it is impossible to provide good care to people—to insure, for instance, that you’ve given them enough anesthetic before doing a procedure. To see their humanity, you must put yourself in their shoes. That requires a willingness to ask people what it’s like in those shoes. It requires curiosity about others and the world beyond your boarding zone.

We are in a dangerous moment because every kind of curiosity is under attack—scientific curiosity, journalistic curiosity, artistic curiosity, cultural curiosity. This is what happens when the abiding emotions have become anger and fear. Underneath that anger and fear are often legitimate feelings of being ignored and unheard—a sense, for many, that others don’t care what it’s like in their shoes. So why offer curiosity to anyone else?

Once we lose the desire to understand—to be surprised, to listen and bear witness—we lose our humanity. Among the most important capacities that you take with you today is your curiosity. You must guard it, for curiosity is the beginning of empathy. When others say that someone is evil or crazy, or even a hero or an angel, they are usually trying to shut off curiosity. Don’t let them. We are all capable of heroic and of evil things. No one and nothing that you encounter in your life and career will be simply heroic or evil. Virtue is a capacity. It can always be lost or gained. That potential is why all of our lives are of equal worth.

The article is here.