Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Automaticity.

Saturday, June 4, 2016

Scientists show how we start stereotyping the moment we see a face

Sarah Kaplan
The Independent
Originally posted May 2, 2016

Scientists have known for a while that stereotypes warp our perceptions of things. Implicit biases — those unconscious assumptions that worm their way into our brains, without our full awareness and sometimes against our better judgment — can influence grading choices from teachers, split-second decisions by police officers and outcomes in online dating.

We can't even see the world without filtering it through the lens of our assumptions, scientists say. In a study published Monday in the journal Nature Neuroscience, psychologists report that the neurons that respond to things such as sex, race and emotion are linked by stereotypes, distorting the way we perceive people's faces before that visual information even reaches our conscious brains.

The article is here.

Friday, May 6, 2016

Complex ideas can enter consciousness automatically

Science Daily
Originally posted April 18, 2016

Summary

New research provides further evidence for 'passive frame theory,' the groundbreaking idea that suggests human consciousness is less in control than previously believed. The study shows that even complex concepts, such as translating a word into Pig Latin, can enter your consciousness automatically, even when someone tells you to avoid thinking about it. The research provides the first evidence that even a small amount of training can cause unintentional, high-level symbol manipulation.

Here is an excerpt:

This surprising effect offers further evidence that the contents of our consciousness -- the state of being awake and aware of our surroundings -- are often generated involuntarily, said Morsella, an assistant professor of psychology. In fact, the study published in the journal Acta Psychologica provides the first demonstration that even a small amount of training can cause unintentional, high-level symbol manipulation.

The article is here.

Monday, November 30, 2015

Moral cleansing and moral licenses: experimental evidence

Pablo Brañas-Garza, Marisa Bucheli, María Paz Espinosa and Teresa García-Muñoz
Economics and Philosophy, Volume 29, Special Issue 02, July 2013, pp. 199-212

ABSTRACT

Research on moral cleansing and moral self-licensing has introduced dynamic considerations in the theory of moral behavior. Past bad actions trigger negative feelings that make people more likely to engage in future moral behavior to offset them. Symmetrically, past good deeds favor a positive self-perception that creates licensing effects, leading people to engage in behavior that is less likely to be moral. In short, a deviation from a “normal state of being” is balanced with a subsequent action that compensates the prior behavior. We model the decision of an individual trying to reach the optimal level of moral self-worth over time and show that under certain conditions the optimal sequence of actions follows a regular pattern which combines good and bad actions. We conduct an economic experiment where subjects play a sequence of giving decisions (dictator games) to explore this phenomenon. We find that donation in the previous period affects present decisions and the sign is negative: participants’ behavior in every round is negatively correlated to what they did in the past. Hence donations over time seem to be the result of a regular pattern of self-regulation: moral licensing (being selfish after altruist) and cleansing (altruistic after selfish).
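The compensatory dynamic the authors describe can be illustrated with a toy simulation. This is a hypothetical sketch, not the authors' model: an agent regulates a moral self-image toward a target, giving raises the self-image, and a high self-image licenses lower giving in the next round, which produces the negative correlation between successive donations reported in the abstract. All parameter names and values here are illustrative assumptions.

```python
import random

def simulate(rounds=1000, target=0.5, gain=0.8, decay=0.5, seed=1):
    """Toy self-regulation model: cleansing after selfishness,
    licensing after generosity."""
    rng = random.Random(seed)
    image, donations = 0.0, []
    for _ in range(rounds):
        # Give more when self-image is below the target (cleansing),
        # less when above it (licensing), plus noise; clip to [0, 1].
        give = max(0.0, min(1.0, target - image + rng.gauss(0, 0.1)))
        image = decay * image + gain * give  # giving lifts self-image
        donations.append(give)
    return donations

def lag1_corr(xs):
    """Correlation between each donation and the previous one."""
    n = len(xs) - 1
    mx = sum(xs[:-1]) / n
    my = sum(xs[1:]) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs[:-1], xs[1:]))
    vx = sum((a - mx) ** 2 for a in xs[:-1]) ** 0.5
    vy = sum((b - my) ** 2 for b in xs[1:]) ** 0.5
    return cov / (vx * vy)
```

Running `lag1_corr(simulate())` yields a negative value, matching the paper's finding that behavior in each round is negatively correlated with behavior in the past.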

The entire article is here.

Thursday, October 29, 2015

Choosing Empathy

A Conversation with Jamil Zaki
The Edge
Originally published October 19, 2015

Here are some excerpts:

The first narrative is that empathy is automatic. This goes all the way back to Adam Smith, who, to me, generated the first modern account of empathy in his beautiful book, The Theory of Moral Sentiments. Smith described what he called the "fellow-feeling," through which people take on each other's states—very similar to what I would call experience sharing.              

(cut)

That's one narrative, that empathy is automatic, and again, it’s compelling—backed by lots of evidence. But if you believe that empathy always occurs automatically, you run into a freight train of evidence to the contrary. As many of us know, there are lots of instances in which people could feel empathy, but don't. The prototype case here is intergroup settings. People who are divided by a war, or a political issue, or even a sports rivalry, often experience a collapse of their empathy. In many cases, these folks feel apathy for others on the other side of a group boundary. They fail to share, or think about, or feel concern for those other people's emotions.              

In other cases, it gets even worse: people feel overt antipathy towards others, for instance, taking pleasure when some misfortune befalls someone on the other side of a group boundary. What's interesting to me is that this occurs not only for group boundaries that are meaningful, like ethnicity or religion, but totally arbitrary groups. If I were to divide us into a red and blue team, without that taking on any more significance, you would be more likely to experience empathy for fellow red team members than for me (apparently I'm on team blue today).  

The entire post and video are here.

Friday, June 5, 2015

The thought father: Psychologist Daniel Kahneman on luck

By Richard Godwin
The London Evening Standard
Originally published March 18, 2014

Here are two excerpts:

Through a series of zany experiments involving roulette wheels and loaded dice, Tversky and Kahneman showed just how easily we can be led into making irrational decisions — even judges sentencing criminals were influenced by being shown completely random numbers. They also showed the sinister effects of priming (how, when people are “primed” with images of money, they behave in a more selfish way). Many such mental illusions still have an effect when subjects are explicitly warned to look out for them. “If it feels right, we go along with it,” as Kahneman says. It is usually only afterwards that we engage our System 2s, if at all, to provide reasons for acting as we did.

(cut)

Do teach yourself to think long-term. The “focusing illusion” makes the here and now appear the most pressing concern but that can lead to skewed results.

Do be fair. Research shows that employers who are unjust are punished by reduced productivity, and unfair prices lead to a loss in sales.

Do co-operate. What Kahneman calls “bias blindness” means it’s easier to recognise the errors of others than our own, so ask for constructive criticism and be prepared to call out others on what they could improve.

The entire article is here.


Tuesday, March 3, 2015

Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics*

By Joshua Greene
Forthcoming in Ethics

Abstract:

In this article I explain why cognitive science (including some neuroscience) matters for normative ethics. First, I describe the dual-process theory of moral judgment and briefly summarize the evidence supporting it. Next I describe related experimental research examining influences on intuitive moral judgment. I then describe two ways in which research along these lines can have implications for ethics. I argue that a deeper understanding of moral psychology favors certain forms of consequentialism over other classes of normative moral theory. I close with some brief remarks concerning the bright future of ethics as an interdisciplinary enterprise.

Here is an excerpt:

Likewise, it would be a cognitive miracle if we had reliably good moral instincts about unfamiliar* moral problems. This suggests the following more general principle:

The No Cognitive Miracles Principle: When we are dealing with unfamiliar* moral problems, we ought to rely less on automatic settings (automatic emotional responses) and more on manual mode (conscious, controlled reasoning), lest we bank on cognitive miracles.

This principle is powerful because, when combined with empirical knowledge of moral psychology, it offers moral guidance while presupposing nothing about what is morally good or bad. A corollary of the NCMP is that we should expect certain pathological individuals—VMPFC patients? Psychopaths? Alexithymics?—to make better decisions than healthy people in some cases. (This is why such individuals are no embarrassment to the view I will defend in the next section.)

The author's copy is here.

Thursday, November 27, 2014

How Your Brain Decides Without You

In a world full of ambiguity, we see what we want to see.

By Tom Vanderbilt
Nautilus
Originally published on November 6, 2014

Here is an excerpt:

The structure of the brain, she notes, is such that there are many more intrinsic connections between neurons than there are connections that bring sensory information from the world. From that incomplete picture, she says, the brain is “filling in the details, making sense out of ambiguous sensory input.” The brain, she says, is an “inference generating organ.” She describes an increasingly well-supported working hypothesis called predictive coding, according to which perceptions are driven by your own brain and corrected by input from the world. There would otherwise simply be too much sensory input to take in. “It’s not efficient,” she says. “The brain has to find other ways to work.” So it constantly predicts. When “the sensory information that comes in does not match your prediction,” she says, “you either change your prediction—or you change the sensory information that you receive.”
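The predict-then-correct loop described in the excerpt can be sketched in a few lines. This is an illustrative toy, not the researcher's model: the percept is a precision-weighted compromise between the brain's prior prediction and the sensory input, so when sensory precision is low, the prediction dominates and the incoming signal is effectively "changed" to fit it. The function name and parameters are assumptions for illustration.

```python
def perceive(prediction, sensory_input, sensory_precision, prior_precision):
    """One predictive-coding style update: correct the prediction by a
    precision-weighted prediction error."""
    error = sensory_input - prediction
    # The more we trust the senses relative to the prior, the more the
    # prediction error moves the percept toward the input.
    gain = sensory_precision / (sensory_precision + prior_precision)
    return prediction + gain * error
```

With equal precisions, `perceive(10.0, 20.0, 1.0, 1.0)` lands halfway at 15.0; with zero sensory precision the percept stays at the prediction, 10.0.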

Friday, November 14, 2014

Empathy: A motivated account

Jamil Zaki
Department of Psychology, Stanford University
IN PRESS at Psychological Bulletin

ABSTRACT

Empathy features a tension between automaticity and context dependency. On the one hand, people often take on each other’s states reflexively and outside of awareness. On the other hand, empathy exhibits deep context dependence, shifting with characteristics of empathizers and situations. These two characteristics of empathy can be reconciled by acknowledging the key role of motivation in driving people to avoid or approach engagement with others’ emotions. In particular, at least three motives—suffering, material costs, and interference with competition—drive people to avoid empathy, and at least three motives—positive affect, affiliation, and social desirability—drive them to approach empathy. Would-be empathizers carry out these motives through regulatory strategies including situation selection, attentional modulation, and appraisal, which alter the course of empathic episodes. Interdisciplinary evidence highlights the motivated nature of empathy, and a motivated model holds wide-ranging implications for basic theory, models of psychiatric illness, and intervention efforts to maximize empathy.

The entire article is here.

Friday, September 19, 2014

Using metacognitive cues to infer others’ thinking

André Mata and Tiago Almeida
Judgment and Decision Making 9.4 (Jul 2014): 349-359.

Abstract

Three studies tested whether people use cues about the way other people think--for example, whether others respond fast vs. slow--to infer what responses other people might give to reasoning problems. People who solve reasoning problems using deliberative thinking have better insight than intuitive problem-solvers into the responses that other people might give to the same problems. Presumably because deliberative responders think of intuitive responses before they think of deliberative responses, they are aware that others might respond intuitively, particularly in circumstances that hinder deliberative thinking (e.g., fast responding). Intuitive responders, on the other hand, are less aware of alternatives to their own responses, so they infer that other people respond as they do, regardless of how others respond.

The entire article is here.

This article is important when contemplating ethical decision-making.

Sunday, June 1, 2014

The Ethics of Automated Cars

By Patrick Lin
Wired Magazine
Originally published May 6, 2014

Here is an excerpt:

Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best.

The entire story is here.

Friday, May 23, 2014

Cognitive science and threats to free will

By Joshua Shepherd
Practical Ethics
Originally published on May 6, 2014

It is often asserted that emerging cognitive science – especially work in psychology (e.g., that associated with work on automaticity, along with work on the power of situations to drive behavior) and cognitive neuroscience (e.g., that associated with unconscious influences on decision-making) – threatens free will in some way or other. What is not always clear is how this work threatens free will. As a result, it is a matter of some controversy whether this work actually threatens free will, as opposed to simply appearing to threaten free will. And it is a matter of some controversy how big the purported threat might be. Could work in cognitive science convince us that there is no free will? Or simply that we have less free will? And if it is the latter, how much less, and how important is this for our practices of holding one another morally responsible for our behavior?

The entire article is here.

Tuesday, April 15, 2014

Automated ethics

When is it ethical to hand our decisions over to machines? And when is external automation a step too far?

by Tom Chatfield
Aeon Magazine
Originally published March 31, 2014

Here is an excerpt:

Automation, in this context, is a force pushing old principles towards breaking point. If I can build a car that will automatically avoid killing a bus full of children, albeit at great risk to its driver’s life, should any driver be given the option of disabling this setting? And why stop there: in a world that we can increasingly automate beyond our reaction times and instinctual reasoning, should we trust ourselves even to conduct an assessment in the first place?

Beyond the philosophical friction, this last question suggests another reason why many people find the trolley disturbing: because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us.

The entire article is here.

Saturday, February 1, 2014

Intuitive Prosociality

By Jamil Zaki and Jason P. Mitchell
Current Directions in Psychological Science 22(6) 466–470
DOI: 10.1177/0963721413492764

Abstract

Prosocial behavior is a central feature of human life and a major focus of research across the natural and social sciences. Most theoretical models of prosociality share a common assumption: Humans are instinctively selfish, and prosocial behavior requires exerting reflective control over these basic instincts. However, findings from several scientific disciplines have recently contradicted this view. Rather than requiring control over instinctive selfishness, prosocial behavior appears to stem from processes that are intuitive, reflexive, and even automatic. These observations suggest that our understanding of prosociality should be revised to include the possibility that, in many cases, prosocial behavior—instead of requiring active control over our impulses—represents an impulse of its own.

The article is here, behind a paywall.

Friday, August 23, 2013

Empathy as a choice

By Jamil Zaki
Scientific American
July 29, 2013

Here is an excerpt:

Evidence from across the social and natural sciences suggests that we take on others’ facial expressions, postures, moods, and even patterns of brain activity.  This type of empathy is largely automatic.  For instance, people imitate others’ facial expressions after just a fraction of a second, often without realizing they’re doing so. Mood contagion likewise operates under the surface.  Therapists often report that, despite their best efforts, they take on patients’ moods, consistent with evidence from a number of studies.

(cut)

Together, these studies suggest that instead of automatically taking on others’ emotions, people make choices about whether and how much to engage in empathy.

The entire story is here.

Friday, May 4, 2012

Bounded Ethicality: The Perils of Loss Framing

By Mary C. Kern and Dolly Chugh
Psychological Science
(2009) Volume 20, Number 3, pp 378-384

Abstract

Ethical decision making is vulnerable to the forces of automaticity. People behave differently in the face of a potential loss versus a potential gain, even when the two situations are transparently identical. Across three experiments, decision makers engaged in more unethical behavior if a decision was presented in a loss frame than if the decision was presented in a gain frame. In Experiment 1, participants in the loss-frame condition were more likely to favor gathering “insider information” than were participants in the gain-frame condition. In Experiment 2, negotiators in the loss-frame condition lied more than negotiators in the gain-frame condition. In Experiment 3, the tendency to be less ethical in the loss-frame condition occurred under time pressure and was eliminated through the removal of time pressure.

(cut)

Framing

In the studies reported here, we explored the effect of automaticity on the cognitions and behaviors of decision makers in the moment of ethical choice. What are the roles of the decision maker’s cognitive framing of the situation and the decision maker’s available cognitive resources? We turned to framing effects (Tversky & Kahneman, 1981) as the foundation of our inquiry. The transformative effects of framing are well established (for reviews, see Camerer, 2000; Kuhberger, 1998). A framing effect occurs when transparently and objectively identical situations generate dramatically different decisions depending on whether the situations are presented, or perceived, as potential losses or gains (Tversky & Kahneman, 1981). Framing effects are integral to prospect theory (Kahneman & Tversky, 1979; Tversky & Kahneman, 1981), a model of choice that describes an “S-shaped value function” to illustrate the differences in how gains and losses, relative to a reference point, are valued. A critical feature of this curve is that it has a steeper slope in the loss domain than in the gain domain. As a result, people are loss averse; that is, they are willing to go to greater lengths to avoid a loss than to obtain a gain of a similar size (Kahneman, Knetsch, & Thaler, 1990; Tversky & Kahneman, 1991).
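The S-shaped value function described above can be written down directly. This sketch uses the functional form and median parameter estimates from Tversky and Kahneman's later cumulative prospect theory work (1992), not values from this paper, so treat the numbers as assumptions: concave for gains, convex for losses, with the loss-aversion coefficient making the loss limb steeper.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of an outcome x relative to the reference
    point (x = 0): concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** alpha
    # Losses are scaled up by lam > 1, producing loss aversion.
    return -lam * ((-x) ** beta)
```

For example, `abs(value(-50))` exceeds `value(50)`, which is the asymmetry the authors lean on: a loss frame motivates people to go to greater lengths — including less ethical ones — than an equivalent gain frame.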

We considered the implications of framing effects for ethics.  When making decisions, individuals often choose from an array of possible responses, with some choices being more, or less, ethical than others. Given the previous work on framing effects, we reasoned that individuals who perceive a potential outcome as a loss will go to greater lengths, and engage in more unethical behavior, to avert that loss than will individuals who perceive a similarly sized gain. This logic formed the initial basis for the present research.