Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label System 1. Show all posts

Thursday, October 6, 2016

How Morality Changes in a Foreign Language

By Julie Sedivy
Scientific American
Originally published September 14, 2016

Here is an excerpt:

Why does it matter whether we judge morality in our native language or a foreign one? According to one explanation, such judgments involve two separate and competing modes of thinking—one of these, a quick, gut-level “feeling,” and the other, careful deliberation about the greatest good for the greatest number. When we use a foreign language, we unconsciously sink into the more deliberate mode simply because the effort of operating in our non-native language cues our cognitive system to prepare for strenuous activity. This may seem paradoxical, but is in line with findings that reading math problems in a hard-to-read font makes people less likely to make careless mistakes (although these results have proven difficult to replicate).

An alternative explanation is that differences arise between native and foreign tongues because our childhood languages vibrate with greater emotional intensity than do those learned in more academic settings. As a result, moral judgments made in a foreign language are less laden with the emotional reactions that surface when we use a language learned in childhood.

Tuesday, August 16, 2016

When It Comes to Empathy, Your Gut May Be Failing You

By Jesse Singal
The Science of Us
Originally posted July 26, 2016

Here is an excerpt:

If you want to understand what someone else is feeling, you don’t sit down and think rationally about it. Rather, you feel what they’re feeling; you infer it from the tone of their voice and the arch of their eyebrows and their body language. That’s the folk wisdom, at least. And this sort of logic, well, feels right. After all, we are constantly attempting to intuit the thoughts and feelings of those around us, and the process usually feels pretty automatic.

(cut)

But what if this common sense is wrong? What if the way to better understand what someone else is feeling — to enhance your empathic accuracy, to use the term researchers use — is to sit down and think about it in a more rational, logical way?

The article is here.

Monday, July 18, 2016

How Language ‘Framing’ Influences Decision-Making

Observations
Association for Psychological Science
Published in 2016

The way information is presented, or “framed,” when people are confronted with a situation can influence decision-making. To study framing, researchers often use the “Asian Disease Problem.” In this problem, people are faced with an imaginary outbreak of an exotic disease and asked to choose how they will address the issue. When the problem is framed in terms of lives saved (or “gains”), people are given the choice of selecting:
Medicine A, where 200 out of 600 people will be saved
or
Medicine B, where there is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved.
When the problem is framed in terms of lives lost (or “losses”), people are given the option of selecting:
Medicine A, where 400 out of 600 people will die
or
Medicine B, where there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.
Although in both problems Medicine A and Medicine B lead to the same outcomes, people are more likely to choose Medicine A when the problem is presented in terms of gains and to choose Medicine B when the problem is presented in terms of losses. This difference occurs because people tend to be risk averse when the problem is presented in terms of gains, but risk tolerant when it is presented in terms of losses.
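The equivalence of the two frames can be checked with a quick expected-value calculation. This is a minimal sketch using the numbers from the problem above; the variable names are illustrative only:

```python
# Asian Disease Problem: 600 people at risk.
TOTAL = 600

# Gain frame (lives saved)
medicine_a_saved = 200                        # certain: 200 of 600 saved
medicine_b_saved = (1/3) * 600 + (2/3) * 0    # risky: expected lives saved

# Loss frame (lives lost)
medicine_a_lost = 400                         # certain: 400 of 600 die
medicine_b_lost = (1/3) * 0 + (2/3) * 600     # risky: expected lives lost

# In expectation, both medicines save 200 people in both frames.
assert medicine_a_saved == TOTAL - medicine_a_lost == 200
assert medicine_b_saved == TOTAL - medicine_b_lost == 200
```

The arithmetic confirms that only the description changes between frames, not the outcomes, which is why the shift in preferences reflects risk attitudes rather than the options themselves.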

The article is here.

Saturday, May 14, 2016

On the Source of Human Irrationality

Oaksford, Mike et al.
Trends in Cognitive Sciences, Volume 20, Issue 5, 336-344

Summary

Reasoning and decision making are error prone. This is often attributed to a fast, phylogenetically old System 1. It is striking, however, that perceptuo-motor decision making in humans and animals is rational. These results are consistent with perceptuo-motor strategies emerging in Bayesian brain theory that also appear in human data selection. People seem to have access, although limited, to unconscious generative models that can generalise to explain other verbal reasoning results. Error does not emerge predominantly from System 1, but rather seems to emerge from the later evolved System 2 that involves working memory and language. However, language also sows the seeds of error correction by moving reasoning into the social domain. This reversal of roles suggests key areas of theoretical integration and new empirical directions.

Trends

System 1 is supposedly the main cause of human irrationality. However, recent work on animal decision making, human perceptuo-motor decision making, and logical intuitions shows that this phylogenetically older system is rational.

Bayesian brain theory has recently proposed perceptuo-motor strategies identical to strategies proposed in Bayesian approaches to conscious verbal reasoning, suggesting that similar generative models are available at both levels.

Recent approaches to conditional inference using causal Bayes nets confirm this account, which can also generalise to logical intuitions.

People have only imperfect access to System 1. Errors arise from inadequate interrogation of System 1, working memory limitations, and mis-description of our records of these interrogations. However, there is evidence that such errors may be corrected by moving reasoning to the social domain facilitated by language.

The article is here.

Monday, November 16, 2015

Believing What You Don’t Believe

By Jane L. Risen and David Nussbaum
The New York Times - Gray Matter
Originally published October 30, 2015

Here is an excerpt:

But as one of us, Professor Risen, discusses in a paper just published in Psychological Review, many instances of superstition and magical thinking indicate that the slow system doesn’t always behave this way. When people pause to reflect on the fact that their superstitious intuitions are irrational, the slow system, which is supposed to fix things, very often doesn’t do so. People can simultaneously recognize that, rationally, their superstitious belief is impossible, but persist in their belief, and their behavior, regardless. Detecting an error does not necessarily lead people to correct it.

This cognitive quirk is particularly easy to identify in the context of superstition, but it isn’t restricted to it. If, for example, the manager of a baseball team calls for an ill-advised sacrifice bunt, it is easy to assume that he doesn’t know that the odds indicate his strategy is likely to cost his team runs. But the manager may have all the right information; he may just choose not to use it, based on his intuition in that specific situation.

The entire article is here.

Believing What We Do Not Believe: Acquiescence to Superstitious Beliefs and Other Powerful Intuitions

By Risen, Jane L.
Psychological Review, Oct 19, 2015

Abstract

Traditionally, research on superstition and magical thinking has focused on people’s cognitive shortcomings, but superstitions are not limited to individuals with mental deficits. Even smart, educated, emotionally stable adults have superstitions that are not rational. Dual process models—such as the corrective model advocated by Kahneman and Frederick (2002, 2005), which suggests that System 1 generates intuitive answers that may or may not be corrected by System 2—are useful for illustrating why superstitious thinking is widespread, why particular beliefs arise, and why they are maintained even though they are not true. However, to understand why superstitious beliefs are maintained even when people know they are not true requires that the model be refined. It must allow for the possibility that people can recognize—in the moment—that their belief does not make sense, but act on it nevertheless. People can detect an error, but choose not to correct it, a process I refer to as acquiescence. The first part of the article will use a dual process model to understand the psychology underlying magical thinking, highlighting features of System 1 that generate magical intuitions and features of the person or situation that prompt System 2 to correct them. The second part of the article will suggest that we can improve the model by decoupling the detection of errors from their correction and recognizing acquiescence as a possible System 2 response. I suggest that refining the theory will prove useful for understanding phenomena outside of the context of magical thinking.

The article is here.