Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Intuition.

Wednesday, February 1, 2017

Why It’s So Hard to Train Someone to Make an Ethical Decision

Eugene Soltes
Harvard Business Review
Originally posted January 11, 2017

Here is an excerpt:

The second factor distinguishing training exercises from real-life decision making is that training inevitably exposes different points of view and judgments. Although many organizations outwardly express a desire for a diversity of opinions, in practice those differing viewpoints are often stifled by the desire to agree with or appease others. Even at the most senior levels of the organization, independent directors struggle to dissent. For instance, Dennis Kozlowski, the former CEO of Tyco who grew the firm from obscurity into a global conglomerate but later faced criminal charges for embezzlement, recalled how rarely board members genuinely disagreed with or pushed back on him as CEO when the firm was performing well. “When the CEO is in the room, directors — even independent directors — tend to want to try to please him,” Kozlowski explained. “The board would give me anything I wanted. Anything.”

Finally, unlike in training, where a single decision might be given an hour of careful analysis, most actual decisions are made quickly and rely on intuition rather than careful, reflective reasoning. This can be especially problematic for moral decisions, which often rely on routine and intuitions that produce mindless judgments that don’t match how we would want to respond if we considered the decision with more time.

The article is here.

Editor's note: While I agree that it can be difficult to teach someone to make an ethical decision, we can develop better ways to teach ethical decision-making. Ethics education requires attention to how personal values blend with work responsibilities, emotional reactions to ethical dilemmas, and the biases and heuristics that shape decision-making in general, and ethical decisions in particular. If an individual feels pressure to make a decision, there are typically ways to slow down the process. Finally, ethics education can include quality-enhancement strategies, such as redundant protections and consultation, that improve the chances of better outcomes.

Thursday, January 12, 2017

The Psychology of White-Collar Criminals

Eugene Soltes
The Atlantic
Originally posted December 14, 2016

Here is an excerpt:

Usually, a gut feeling that something will be harmful is enough of a deterrent. But when the harm is distant or abstract, this internal alarm doesn’t always go off. This absence of intuition about the harm creates a particular challenge for executives. Today, managerial decisions impact ever-greater numbers of people, and the distance between executives and the people their decisions affect continues to grow. In fact, many of the people most harmed or helped by executives’ decisions are those they will never identify or meet. In this less intimate world, age-old intuitions are not always well suited to sense the kinds of potential harms that people can cause in the business world.

Reflecting on these limits to human intuition, I came to a conclusion that I found humbling. Most people like to think that they have the right values to make it through difficult times without falling prey to the same failures as the convicted executives I got to know. But those who believe they would face the same situations with their current values and viewpoints tend to underestimate the influence of the pressures, cultures, and norms that surround executive decision making. Perhaps a little humility is in order, given that people seem to have some difficulty predicting how they’d act in that environment. “What we all think is, ‘When the big moral challenge comes, I will rise to the occasion,’ [but] there’s not actually that many of us that will actually rise to the occasion,” as one former CFO put it. “I didn’t realize I would be a felon.”

The article is here.

Saturday, September 10, 2016

Rational and Emotional Sources of Moral Decision-Making: An Evolutionary-Developmental Account

Denton, K. K., & Krebs, D. L.
Evolutionary Psychological Science (2016), pp. 1-14.

Abstract

Some scholars have contended that moral decision-making is primarily rational, mediated by controlled, deliberative, and reflective forms of moral reasoning. Others have contended that moral decision-making is primarily emotional, mediated by automatic, affective, and intuitive forms of decision-making. Evidence from several lines of research suggests that people make moral decisions in both of these ways. In this paper, we review psychological and neurological evidence supporting dual-process models of moral decision-making and discuss research that has attempted to identify triggers for rational-reflective and emotional-intuitive processes. We argue that attending to the ways in which brain mechanisms evolved and develop throughout the life span supplies a basis for explaining why people possess the capacity to engage in two forms of moral decision-making, as well as accounting for the attributes that define each type and predicting when the mental mechanisms that mediate each of them will be activated and when one will override the other. We close by acknowledging that neurological research on moral decision-making mechanisms is in its infancy and suggesting that future research should be directed at distinguishing among different types of emotional, intuitive, rational, and reflective processes; refining our knowledge of the brain mechanisms implicated in different forms of moral judgment; and investigating the ways in which these mechanisms interact to produce moral decisions.

The article is here.

Tuesday, August 16, 2016

Trust Your Gut or Think Carefully? Empathy Research

Ma-Kellams, C., & Lerner, J.
Journal of Personality and Social Psychology
Online First Publication, July 21, 2016.
http://dx.doi.org/10.1037/pspi0000063


Abstract

Cultivating successful personal and professional relationships requires the ability to accurately infer the feelings of others — i.e., to be empathically accurate. Some are better than others at this, which may be explained by mode of thought, among other factors. Specifically, it may be that empathically-accurate people tend to rely more on intuitive rather than systematic thought when perceiving others. Alternatively, it may be the reverse — that systematic thought increases accuracy. In order to determine which view receives empirical support, we conducted four studies examining relations between mode of thought (intuitive versus systematic) and empathic accuracy. Study 1 revealed a lay belief that empathic accuracy arises from intuitive modes of thought. Studies 2-4, each using executive-level professionals as participants, demonstrated that (contrary to lay beliefs) people who tend to rely on intuitive thinking also tend to exhibit lower empathic accuracy. This pattern held when participants inferred others’ emotional states based on (a) in-person face-to-face interactions with partners (Study 2) as well as on (b) pictures with limited facial cues (Study 3). Study 4 confirmed that the relationship is causal: experimentally inducing systematic (as opposed to intuitive) thought led to improved empathic accuracy. In sum, evidence regarding personal and social processes in these four samples of working professionals converges on the conclusion that — contrary to lay beliefs — empathic accuracy arises more from systematic thought than from gut intuition.

The article is here.

Editor's Note: This article has profound implications for psychotherapy.

Tuesday, April 26, 2016

Inference of Trustworthiness from Intuitive Moral Judgments

Jim A. C. Everett, David A. Pizarro, & M. J. Crockett
Journal of Experimental Psychology: General, 2016
DOI: 10.1037/xge0000165

Abstract

Moral judgments play a critical role in motivating and enforcing human cooperation. Research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across five studies we show that people who make characteristically deontological judgments (as opposed to judgments that align with other ethical traditions) are preferred as social partners, perceived as more moral and trustworthy, and trusted more in economic games. These findings provide empirical support for a partner choice account for why intuitive moral judgments often align with deontological theories.

The article can be downloaded here.

Monday, February 15, 2016

When Deliberation Isn’t Smart

By Adam Bear and David Rand
Evonomics
Originally published January 25, 2016

Cooperation is essential for successful organizations. But cooperating often requires people to put others’ welfare ahead of their own. In this post, we discuss recent research on cooperation that applies the “Thinking, Fast and Slow” logic of intuition versus deliberation. We explain why people sometimes (but not always) cooperate in situations where it’s not in their self-interest to do so, and show how properly designed policies can build “habits of virtue” that create a culture of cooperation. TL;DR summary: intuition favors behaviors that are typically optimal, so institutions that make cooperation typically advantageous lead people to adopt cooperation as their intuitive default; this default then “spills over” into settings where it’s not actually individually advantageous to cooperate.

Life is full of opportunities to make personal sacrifices on behalf of others, and we often rise to the occasion. We do favors for co-workers and friends, give money to charity, donate blood, and engage in a host of other cooperative endeavors. Sometimes, these nice deeds are reciprocated (like when we help out a friend, and she helps us with something in return). Other times, however, we pay a cost and get little in return (like when we give money to a homeless person whom we’ll never encounter again).

Although you might not realize it, nowhere is the importance of cooperation more apparent than in the workplace. If your boss is watching you, you’d probably be wise to be a team player and cooperate with your co-workers, since doing so will enhance your reputation and might even get you a promotion down the road. In other instances, though, you might get no recognition for, say, helping out a fellow employee who needs assistance meeting a deadline, or who calls out sick.

The article is here.

Thursday, January 21, 2016

Intuition, deliberation, and the evolution of cooperation

Adam Bear and David G. Rand
PNAS (2016). doi: 10.1073/pnas.1517780113

Abstract

Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.

The article is here.
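
Editor's note: For readers who want to see the mechanics, here is a minimal simulation sketch of the dual-process model described in the abstract. It is written in Python; the parameter names and payoff values are our own assumptions, and the fitness calculation is stylized (each agent is scored on its own rather than against a paired partner), so treat it as an illustration of the idea, not the authors' implementation.

import random

# Agents carry two heritable traits:
#   intuitive_coop : whether their intuition says "cooperate"
#   threshold      : the maximum deliberation cost they will pay
# Each interaction is reciprocal with probability P_REPEATED, otherwise
# one-shot. Deliberators tailor their play to the game type; intuitive
# responders play the same way in both. (All values below are assumed.)

B, C = 4.0, 1.0          # benefit and cost of cooperation
P_REPEATED = 0.7         # chance an interaction involves reciprocity
MAX_COST = 1.0           # deliberation cost ~ Uniform(0, MAX_COST)

def play_once(agent, rng):
    """Stylized payoff for one interaction."""
    repeated = rng.random() < P_REPEATED
    cost = rng.uniform(0.0, MAX_COST)
    if cost < agent["threshold"]:
        cooperate = repeated      # deliberation: defect in one-shot games
        spent = cost
    else:
        cooperate = agent["intuitive_coop"]
        spent = 0.0
    # In reciprocal games cooperation pays off; in one-shot games it is
    # a pure cost.
    if repeated:
        gain = B - C if cooperate else 0.0
    else:
        gain = -C if cooperate else 0.0
    return gain - spent

def evolve(n_agents=200, generations=500, rounds=20, mutation=0.05, seed=1):
    rng = random.Random(seed)
    pop = [{"intuitive_coop": rng.random() < 0.5,
            "threshold": rng.uniform(0.0, MAX_COST)} for _ in range(n_agents)]
    for _ in range(generations):
        fitness = [sum(play_once(a, rng) for _ in range(rounds)) for a in pop]
        floor = min(fitness)
        weights = [f - floor + 1e-9 for f in fitness]  # keep weights positive
        parents = rng.choices(pop, weights=weights, k=n_agents)
        pop = []
        for parent in parents:
            child = dict(parent)
            if rng.random() < mutation:                # occasional mutation
                child["intuitive_coop"] = rng.random() < 0.5
                child["threshold"] = rng.uniform(0.0, MAX_COST)
            pop.append(child)
    return pop

final = evolve()
share = sum(a["intuitive_coop"] for a in final) / len(final)
print(f"intuitive cooperators: {share:.0%}")

With mostly reciprocal interactions, as above, runs of this sketch tend to settle on intuitive cooperators who deliberate only occasionally, using deliberation to defect in one-shot games, which mirrors the paper's central result.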

Saturday, January 9, 2016

Moral judgment as information processing: an integrative review

Steve Guglielmo
Front Psychol. 2015; 6: 1637.
Published online 2015 Oct 30. doi: 10.3389/fpsyg.2015.01637

Abstract

How do humans make moral judgments about others’ behavior? This article reviews dominant models of moral judgment, organizing them within an overarching framework of information processing. This framework poses two distinct questions: (1) What input information guides moral judgments? and (2) What psychological processes generate these judgments? Information Models address the first question, identifying critical information elements (including causality, intentionality, and mental states) that shape moral judgments. A subclass of Biased Information Models holds that perceptions of these information elements are themselves driven by prior moral judgments. Processing Models address the second question, and existing models have focused on the relative contribution of intuitive versus deliberative processes. This review organizes existing moral judgment models within this framework and critically evaluates them on empirical and theoretical grounds; it then outlines a general integrative model grounded in information processing, and concludes with conceptual and methodological suggestions for future research. The information-processing framework provides a useful theoretical lens through which to organize extant and future work in the rapidly growing field of moral judgment.

The entire article is here.

Monday, December 14, 2015

Professional Intuition Is Under Assault, Wachter Says

By Marcia Frellick
Medscape.com
Originally published November 24, 2015

Professional intuition — the gut feeling doctors get with experience and instinct that something just isn't right — is under assault, Robert Wachter, MD, professor of clinical medicine at the University of California, San Francisco, told the audience at TEDMED 2015.

"It's suspicious, it's soft, it's squishy," said Dr Wachter, the physician who, along with Lee Goldman, MD, coined the word "hospitalist" in 1996 (N Engl J Med. 1996;335:514-517).

"There's not an algorithm for it, it's not evidence-based," he explained. And "it's antidemocratic, it's paternalistic."

The entire article is here.

Sunday, July 12, 2015

Please, Corporations, Experiment on Us

By Michelle N. Meyer and Christopher Chabris
The New York Times - Sunday Review
Originally posted June 19, 2015

Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent?

The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise.

But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.

The entire article is here.

Tuesday, March 24, 2015

How stress influences our morality

By Lucius Caviola and Nadira Faulmüller
Academia.edu

Abstract

Several studies show that stress can influence moral judgment and behavior. In personal moral dilemmas—scenarios where someone has to be harmed by physical contact in order to save several others—participants under stress tend to make more deontological judgments than non-stressed participants, i.e. they agree less with harming someone for the greater good. Other studies demonstrate that stress can increase pro-social behavior for in-group members but decrease it for out-group members. The dual-process theory of moral judgment in combination with an evolutionary perspective on emotional reactions seems to explain these results: stress might inhibit controlled reasoning and trigger people’s automatic emotional intuitions. In other words, when it comes to morality, stress seems to make us prone to follow our gut reactions instead of our elaborate reasoning.

Tuesday, February 24, 2015

The Importance of Moral Construal

Moral versus Non-Moral Construal Elicits Faster, More Extreme, Universal Evaluations of the Same Actions

By Jay J. Van Bavel, Dominic J. Packer, Ingrid J. Haas, and William A. Cunningham
PLoS ONE 7(11): e48693. doi:10.1371/journal.pone.0048693

Abstract

Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral versus non-moral construal. We had participants make moral evaluations (rating whether actions were morally good or bad) or non-moral evaluations (rating whether actions were pragmatically or hedonically good or bad) of a wide variety of actions. As predicted, moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions—the belief that absolutely nobody or everybody should engage in an action—than non-moral (pragmatic or hedonic) evaluations of the same actions. Further, we show that people are capable of flexibly shifting from moral to non-moral evaluations on a trial-by-trial basis. Taken together, these experiments provide evidence that moral versus non-moral construal has an important influence on evaluation and suggest that the effects of construal are highly flexible. We discuss the implications of these experiments for models of moral judgment and decision-making.

The entire article is here.

Sunday, August 10, 2014

How to Be Good

An Oxford philosopher thinks he can distill all morality into a formula. Is he right?

By Larissa MacFarquhar
The New Yorker
Originally published September 5, 2011

Here are two excerpts:

Parfit is thought by many to be the most original moral philosopher in the English-speaking world. He has written two books, both of which have been called the most important works to be written in the field in more than a century—since 1874, when Henry Sidgwick’s “The Methods of Ethics,” the apogee of classical utilitarianism, was published. Parfit’s first book, “Reasons and Persons,” was published in 1984, when he was forty-one, and caused a sensation. The book was dense with science-fictional thought experiments, all urging a shift toward a more impersonal, non-physical, and selfless view of human life.

(cut)

Parfit believes that there are true answers to moral questions, just as there are to mathematical ones. Humans can perceive these truths, through a combination of intuition and critical reasoning, but they remain true whether humans perceive them or not. He believes that there is nothing more urgent for him to do in his brief time on earth than discover what these truths are and persuade others of their reality. He believes that without moral truth the world would be a bleak place in which nothing mattered. This thought horrifies him.

The entire article is here.

Sunday, March 23, 2014

Experimental Approaches to Free Will: Knobe and Nahmias

Joshua Knobe and Eddy Nahmias

Knobe and Nahmias begin with an overview of the early history and aims of experimental philosophy. Then they discuss experiments on the contrast between bypassing and throughpassing intuitions about free will (8:57); Nahmias’s “theory lite view,” according to which ordinary people have no strong views about the relation between mind and brain (17:34); whether the folk have a causal or an interventionist view of agency (24:17); the effect of descriptions of determinism on folk intuitions (32:52); and Nahmias’s work on “willusionism,” inspired by his critical view of certain popularized versions of free-will skepticism (41:47). Finally, Knobe and Nahmias consider future results that could resolve some of their disagreements (48:49) and forecast the next big steps in experimental philosophy of free will (57:00).


Sunday, March 16, 2014

The Failure of Social and Moral Intuitions

Edge Videos
HeadCon '13: Part IX
David Pizarro

Today I want to talk a little about our social and moral intuitions and I want to present a case that they're rapidly failing, more so than ever. Let me start with an example. Recently, I collaborated with economist Rob Frank, roboticist Cynthia Breazeal, and social psychologist David DeSteno. The experiment that we did was interested in looking at how we detect trustworthiness in others.

We had people interact—strangers interact in the lab—and we filmed them, and we got the cues that seemed to indicate that somebody's going to be either more cooperative or less cooperative. But the fun part of this study was that for the second part we got those cues and we programmed a robot—Nexi the robot, from the lab of Cynthia Breazeal at MIT—to emulate, in one condition, those non-verbal gestures. So what I'm talking about today is not about the results of that study, but rather what was interesting about looking at people interacting with the robot.



The entire page is here.

Saturday, February 22, 2014

Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism

By Jesse Graham, Jonathan Haidt, S. Koleva, M. Motyl, R. Iyer, S. P. Wojcik, & P. H. Ditto
in press, Advances in Experimental Social Psychology

Abstract: 

Where does morality come from? Why are moral judgments often so similar across cultures, yet sometimes so variable? Is morality one thing, or many? Moral Foundations Theory (MFT) was created to answer these questions. In this chapter we describe the origins, assumptions, and current conceptualization of the theory, and detail the empirical findings that MFT has made possible, both within social psychology and beyond. Looking toward the future, we embrace several critiques of the theory, and specify five criteria for determining what should be considered a foundation of human morality. Finally, we suggest a variety of future directions for MFT and for moral psychology. 

Here is an excerpt:

But what if, in some cultures, even the most advanced moral thinkers value groups, institutions, traditions, and gods? What should we say about local rules for how to be a good group member, or how to worship? If these rules are not closely linked to concerns about justice or care, then should we distinguish them from true moral rules, as Turiel did when he labeled such rules as “social conventions”? Shweder (1990) argued that the cognitive-developmental tradition was studying only a subset of moral concerns, the ones that are most highly elaborated in secular Western societies. Shweder argued for a much more extensive form of pluralism based on his research in Bhubaneswar, India (Shweder, Much, Mahapatra, & Park, 1997). He proposed that around the world, people talk in one or more of three moral languages: the ethic of autonomy (relying on concepts such as harm, rights, and justice, which protect autonomous individuals), the ethic of community (relying on concepts such as duty, respect, and loyalty, which preserve institutions and social order), and the ethic of divinity (relying on concepts such as purity, sanctity, and sin, which protect the divinity inherent in each person against the degradation of hedonistic selfishness).

Saturday, February 1, 2014

Intuitive Prosociality

By Jamil Zaki and Jason P. Mitchell
Current Directions in Psychological Science 22(6) 466–470
DOI: 10.1177/0963721413492764

Abstract

Prosocial behavior is a central feature of human life and a major focus of research across the natural and social sciences. Most theoretical models of prosociality share a common assumption: Humans are instinctively selfish, and prosocial behavior requires exerting reflective control over these basic instincts. However, findings from several scientific disciplines have recently contradicted this view. Rather than requiring control over instinctive selfishness, prosocial behavior appears to stem from processes that are intuitive, reflexive, and even automatic. These observations suggest that our understanding of prosociality should be revised to include the possibility that, in many cases, prosocial behavior—instead of requiring active control over our impulses—represents an impulse of its own.

Click here to access the article (behind a paywall).

Thursday, January 16, 2014

The Tragedy of Common-Sense Morality

Evolution didn’t equip us for modern judgments.

By Tiffany O'Callaghan
The New Scientist
Originally published December 14, 2013

Our instincts don't always serve us well. Moral psychologist Joshua Greene explains why, in the modern world, we need to figure out when to put our sense of right and wrong in manual mode. His new book is Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

Tiffany O’Callaghan: You say morality is more than it evolved to be. What do you mean?

Joshua Greene: Morality is essentially a suite of psychological mechanisms that enable us to cooperate. But, biologically at least, we only evolved to cooperate in a tribal way. Individuals who were more moral—more cooperative with those around them—could outcompete others who were not. However, we have the capacity to take a step back from this and ask what a more global morality would look like. Why are the lives of people on the other side of the world worth any less than those in my immediate community? Going through that reasoning process can allow our moral thinking to do something it never evolved to do.

TO: So we need to be able to switch from intuitive morality to more considered responses? When should we use which system?

JG: When it’s a matter of me versus us, my interests versus those of others, our instincts do pretty well. They don't do as well when it’s us versus them, my group’s interests and values versus another group’s. Our moral intuitions didn’t evolve to solve that problem in an even-handed way. When groups disagree about the right thing to do, we need to slow down and shift into manual mode.

The entire article is here.

Tuesday, December 17, 2013

The Deep Roots of Our Political Divide

This Is Interesting Podcast
Originally aired December 4, 2013

Matt Miller interviews Jonathan Haidt on politics, perspective, morality, and justice.



The entire podcast is here.

Thursday, October 4, 2012

Liberating Reason From the Passions: Overriding Intuitionist Moral Judgments Through Emotion Reappraisal


Matthew Feinberg, Robb Willer, Olga Antonenko and Oliver P. John
Psychological Science, 2012; 23(7): 788. doi: 10.1177/0956797611434747

Abstract

A classic problem in moral psychology concerns whether and when moral judgments are driven by intuition versus deliberate reasoning. In this investigation, we explored the role of reappraisal, an emotion-regulation strategy that involves construing an emotion-eliciting situation in a way that diminishes the intensity of the emotional experience. We hypothesized that although emotional reactions evoke initial moral intuitions, reappraisal weakens the influence of these intuitions, leading to more deliberative moral judgments. Three studies of moral judgments in emotionally evocative, disgust-eliciting moral dilemmas supported our hypothesis. A greater tendency to reappraise was related to fewer intuition-based judgments (Study 1). Content analysis of open-ended descriptions of moral-reasoning processes revealed that reappraisal was associated with longer time spent in deliberation and with fewer intuitionist moral judgments (Study 2). Finally, in comparison with participants who simply watched an emotion-inducing film, participants who had been instructed to reappraise their reactions while watching the film subsequently reported less intense emotional reactions to moral dilemmas, and these dampened reactions led, in turn, to fewer intuitionist moral judgments (Study 3).