Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Heuristics. Show all posts

Sunday, February 19, 2017

Most People Consider Themselves to Be Morally Superior

By Cindi May
Scientific American
Originally published on January 31, 2017

Here are two excerpts:

This self-enhancement effect is most profound for moral characteristics. While we generally cast ourselves in a positive light relative to our peers, above all else we believe that we are more just, more trustworthy, more moral than others. This self-righteousness can be destructive because it reduces our willingness to cooperate or compromise, creates distance between ourselves and others, and can lead to intolerance or even violence. Feelings of moral superiority may play a role in political discord, social conflict, and even terrorism.

(cut)

So we believe ourselves to be more moral than others, and we make these judgments irrationally. What are the consequences? On the plus side, feelings of moral superiority could, in theory, protect our well-being. For example, there is danger in mistakenly believing that people are more trustworthy or loyal than they really are, and approaching others with moral skepticism may reduce the likelihood that we fall prey to a liar or a cheat. On the other hand, self-enhanced moral superiority could erode our own ethical behavior. Evidence from related studies suggests that self-perceptions of morality may “license” future immoral actions.

The article is here.

Wednesday, January 11, 2017

The Empathy Trap

By Peter Singer
The Project Syndicate
Originally published December 12, 2016

Here is an excerpt:

“One death is tragedy; a million is a statistic.” If empathy makes us too favorable to individuals, large numbers numb the feelings we ought to have. The Oregon-based nonprofit Decision Research has recently established a website, ArithmeticofCompassion.org, aimed at enhancing our ability to communicate information about large-scale problems without giving rise to “numerical numbness.” In an age in which vivid personal stories go viral and influence public policy, it’s hard to think of anything more important than helping everyone to see the larger picture.

To be against empathy is not to be against compassion. In one of the most interesting sections of Against Empathy, Bloom describes how he learned about differences between empathy and compassion from Matthieu Ricard, the Buddhist monk sometimes described as “the happiest man on earth.” When the neuroscientist Tania Singer (no relation to me) asked Ricard to engage in “compassion meditation” while his brain was being scanned, she was surprised to see no activity in the areas of his brain normally active when people empathize with the pain of others. Ricard could, on request, empathize with others’ pain, but he found it unpleasant and draining; by contrast, he described compassion meditation as “a warm positive state associated with a strong pro-social motivation.”

The article is here.

Tuesday, December 20, 2016

Glitches: A Conversation With Laurie R. Santos

Edge.org
Originally posted November 27, 2016

Here is an excerpt of the article/video:

Scholars like Kahneman, Thaler, and folks who think about the glitches of the human mind have been interested in the kind of animal work that we do, in part because the animal work has this important window into where these glitches come from. We find that capuchin monkeys have the same glitches we've seen in humans. We've seen the standard classic economic biases that Kahneman and Tversky found in humans in capuchin monkeys, things like loss aversion and reference dependence. They have those biases in spades.                                

That tells us something about how those biases work. That tells us those are old biases. They're not built for current economic markets. They're not built for systems dealing with money. There's something fundamental about the way we make sense of choices in the world, and if you're going to attack them and try to override them, you have to do it in a way that's honest about the fact that those biases are going to be way too deep.                                
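The two biases named above have a standard formal rendering. As a minimal illustrative sketch (ours, not anything from the interview), the Kahneman–Tversky prospect-theory value function captures both: outcomes are coded relative to a reference point (reference dependence), and losses loom larger than equal-sized gains (loss aversion). The parameters below are the classic 1992 estimates.

```python
ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # loss aversion: losses weigh about 2.25x as much as gains

def value(outcome, reference=0.0):
    """Subjective value of an outcome, coded relative to a reference point."""
    x = outcome - reference
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# Loss aversion: a gain and an equal-sized loss are not symmetric.
print(value(+10))   # roughly +7.6
print(value(-10))   # roughly -17.1: the loss hurts more than twice as much

# Reference dependence: the same final outcome feels different
# depending on the starting point it is compared against.
print(value(100, reference=120))  # coded as a loss of 20 (negative value)
print(value(100, reference=80))   # coded as a gain of 20 (positive value)
```

The point of the sketch is only that these regularities are simple, stable functions of relative outcomes, which is consistent with finding them unchanged in capuchins.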

If you are a Bob Cialdini and you're interested in the extent to which we get messed up by the information we hear that other people are doing, and you learn that it's just us—chimpanzees don't fall prey to that—you learn something interesting about how those biases work. This is something that we have under the hood that's operating off mechanisms that are not old, which we might be able to harness in a very different way than we would have for solving something like loss aversion.                                

What I've found is that when the Kahnemans and the Cialdinis of the world hear about the animal work, both in cases where animals are similar to humans and in cases where animals are different, they get pretty excited. They get excited because it's telling them something, not because they care about capuchins or dogs. They get excited because they care about humans, and the animal work has allowed us to get some insight into how humans tick, particularly when it comes to their biases.

The text/video is here.

Saturday, December 17, 2016

Free Will and Autonomous Medical Decision-Making

Matthew A. Butkus
Journal of Cognition and Neuroethics 3 (1): 75–119.

Abstract

Modern medical ethics makes a series of assumptions about how patients and their care providers make decisions about forgoing treatment. These assumptions are based on a model of thought and cognition that does not reflect actual cognition—it has substituted an ideal moral agent for a practical one. Instead of a purely rational moral agent, current psychology and neuroscience have shown that decision-making reflects a number of different factors that must be considered when conceptualizing autonomy. Multiple classical and contemporary discussions of autonomy and decision-making are considered and synthesized into a model of cognitive autonomy. Four categories of autonomy criteria are proposed to reflect current research in cognitive psychology and common clinical issues.

The article is here.

Friday, December 16, 2016

Why moral companies do immoral things

Michael Skapinker
Financial Times
Originally published November 23, 2016

Here is an excerpt:

But I wondered about the “better than average” research cited above. Could the illusion of moral superiority apply to organisations as well as individuals? And could companies believe they were so superior morally that the occasional lapse into immorality did not matter much? The Royal Holloway researchers said they had recently conducted experiments examining just these issues and were preparing to publish the results. They had found that political groups with a sense of moral superiority felt justified in behaving aggressively towards opponents. In experiments, this meant denying them a monetary benefit.

“It isn’t difficult to imagine a similar scenario arising in a competitive organisational context. To the extent that employees may perceive their organisation to be morally superior to other organisations, they might feel licensed to ‘cut corners’ or behave somewhat unethically — for example, to give their organisation a competitive edge.

“These behaviours may be perceived as justified … or even ethical, insofar as they promote the goals of their morally superior organisation,” they told me.

The article is here.

Friday, November 18, 2016

Bayesian Brains without Probabilities

Adam N. Sanborn & Nick Chater
Trends in Cognitive Sciences
Published Online: October 26, 2016

Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy.
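The abstract's central claim — that a finite-sample Bayesian sampler systematically produces errors like the conjunction fallacy — can be sketched in a few lines of Python. This is an illustrative simulation of ours, not the authors' model, with made-up probabilities: estimate P(A) and P(A and B) from a handful of independent samples, and count how often the noisy conjunction estimate exceeds the marginal, an ordering that is impossible for the true probabilities.

```python
import random

random.seed(42)

# Toy joint distribution (numbers are ours, chosen only for illustration):
P_A = 0.30          # P(A), e.g. "is a bank teller"
P_B_GIVEN_A = 0.50  # P(B | A), so P(A and B) = 0.15 < P(A)

def sample_world():
    """Draw one joint sample (a, b) from the toy distribution."""
    a = random.random() < P_A
    b = a and random.random() < P_B_GIVEN_A
    return a, b

def estimate(event, n):
    """Monte Carlo estimate of P(event) from n samples."""
    return sum(1 for _ in range(n) if event(*sample_world())) / n

def conjunction_error_rate(n, trials=5_000):
    """How often independent n-sample estimates rank P(A and B) above P(A)."""
    errors = 0
    for _ in range(trials):
        p_a = estimate(lambda a, b: a, n)
        p_ab = estimate(lambda a, b: a and b, n)
        errors += p_ab > p_a
    return errors / trials

for n in (3, 10, 100):
    print(n, conjunction_error_rate(n))
# The error rate is substantial for small n and shrinks toward zero
# as the number of samples grows, mirroring the paper's point that
# only an infinite-sample sampler obeys the laws of probability.
```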

The article is here.

Thursday, September 29, 2016

How Curiosity Can Protect the Mind from Bias

By Tom Stafford
bbc.com
Originally published 8 September 2016

Here is an excerpt:

The team confirmed this using an experiment which gave participants a choice of science stories, either in line with their existing beliefs, or surprising to them. Those participants who were high in scientific curiosity defied the predictions and selected stories which contradicted their existing beliefs – this held true whether they were liberal or conservative.

And, in case you are wondering, the results hold for issues in which political liberalism is associated with the anti-science beliefs, such as attitudes to GMO or vaccinations.

So, curiosity might just save us from using science to confirm our identity as members of a political tribe. It also shows that to promote a greater understanding of public issues, it is as important for educators to try and convey their excitement about science and the pleasures of finding out stuff, as it is to teach people some basic curriculum of facts.

The article is here.

Thursday, August 11, 2016

Why Do People Tend to Infer “Ought” From “Is”? The Role of Biases in Explanation

Christina M. Tworek and Andrei Cimpian
Psychological Science July 8, 2016

Abstract

People tend to judge what is typical as also good and appropriate—as what ought to be. What accounts for the prevalence of these judgments, given that their validity is at best uncertain? We hypothesized that the tendency to reason from “is” to “ought” is due in part to a systematic bias in people’s (nonmoral) explanations, whereby regularities (e.g., giving roses on Valentine’s Day) are explained predominantly via inherent or intrinsic facts (e.g., roses are beautiful). In turn, these inherence-biased explanations lead to value-laden downstream conclusions (e.g., it is good to give roses). Consistent with this proposal, results from five studies (N = 629 children and adults) suggested that, from an early age, the bias toward inherence in explanations fosters inferences that imbue observed reality with value. Given that explanations fundamentally determine how people understand the world, the bias toward inherence in these judgments is likely to exert substantial influence over sociomoral understanding.

The article is here.

Monday, May 9, 2016

How Animals Think

By Alison Gopnik
The Atlantic
May 2016

Here is an excerpt:

Psychologists often assume that there is a special cognitive ability—a psychological secret sauce—that makes humans different from other animals. The list of candidates is long: tool use, cultural transmission, the ability to imagine the future or to understand other minds, and so on. But every one of these abilities shows up in at least some other species in at least some form. De Waal points out various examples, and there are many more. New Caledonian crows make elaborate tools, shaping branches into pointed, barbed termite-extraction devices. A few Japanese macaques learned to wash sweet potatoes and even to dip them in the sea to make them more salty, and passed that technique on to subsequent generations. Western scrub jays “cache”—they hide food for later use—and studies have shown that they anticipate what they will need in the future, rather than acting on what they need now.

From an evolutionary perspective, it makes sense that these human abilities also appear in other species. After all, the whole point of natural selection is that small variations among existing organisms can eventually give rise to new species. Our hands and hips and those of our primate relatives gradually diverged from the hands and hips of common ancestors. It’s not that we miraculously grew hands and hips and other animals didn’t. So why would we alone possess some distinctive cognitive skill that no other species has in any form?

The article is here.

Thursday, May 5, 2016

Why Believing in Luck Makes You a Better Person

By Jesse Singal
New York Magazine
Originally posted April 14, 2016

Here is an excerpt:

In an unexpected twist, we may even find that recognizing our luck increases our good fortune. Social scientists have been studying gratitude intensively for almost two decades, and have found that it produces a remarkable array of physical, psychological, and social changes. Robert Emmons of the University of California at Davis and Michael McCullough of the University of Miami have been among the most prolific contributors to this effort. In one of their collaborations, they asked a first group of people to keep diaries in which they noted things that had made them feel grateful, a second group to note things that had made them feel irritated, and a third group to simply record events. After 10 weeks, the researchers reported dramatic changes in those who had noted their feelings of gratitude. The newly grateful had less frequent and less severe aches and pains and improved sleep quality. They reported greater happiness and alertness. They described themselves as more outgoing and compassionate, and less likely to feel lonely and isolated.

The article is here.

We are zombies rewriting our mental history to feel in control

By Matthew Hutson
New Scientist
Originally posted April 15, 2016

Here is an excerpt:

Another possibility, one Bear prefers, is that we misperceive the order of events in the moment due to inherent limitations in perceptual processing. To put it another way, our brain isn’t trying to trick us into believing we are in control – it simply struggles to process a rapid sequence of events in the correct order.

Such findings may also imply that many of the choices we believe we make only appear to be signs of free will after the fact.

Everyday examples of this “postdictive illusion of choice” abound. You only think that you consciously decided to scratch an itch, make a deft football play, or blurt out an insult, when really you’re just taking credit for reflexive actions.

The article is here.

Thursday, April 21, 2016

The Science of Choosing Wisely — Overcoming the Therapeutic Illusion

David Casarett
New England Journal of Medicine 2016; 374:1203-1205
March 31, 2016
DOI: 10.1056/NEJMp1516803

Here are two excerpts:

The success of such efforts, however, may be limited by the tendency of human beings to overestimate the effects of their actions. Psychologists call this phenomenon, which is based on our tendency to infer causality where none exists, the “illusion of control.” In medicine, it may be called the “therapeutic illusion” (a label first applied in 1978 to “the unjustified enthusiasm for treatment on the part of both patients and doctors”). When physicians believe that their actions or tools are more effective than they actually are, the results can be unnecessary and costly care. Therefore, I think that efforts to promote more rational decision making will need to address this illusion directly.

(cut)

The outcome of virtually all medical decisions is at least partly outside the physician’s control, and random chance can encourage physicians to embrace mistaken beliefs about causality. For instance, joint lavage is overused for relief of osteoarthritis-related knee pain, despite a recommendation against it from the American Academy of Orthopaedic Surgeons. Knee pain tends to wax and wane, so many patients report improvement in symptoms after lavage, and it’s natural to conclude that the intervention was effective.

The article is here.

Monday, April 18, 2016

The Benjamin Franklin Effect

David McRaney
You Are Not So Smart Blog: A Celebration of Self Delusion
October 5, 2011

The Misconception: You do nice things for the people you like and bad things to the people you hate.

The Truth: You grow to like people for whom you do nice things and hate people you harm.

(cut)

Sometimes you can’t find a logical, moral or socially acceptable explanation for your actions. Sometimes your behavior runs counter to the expectations of your culture, your social group, your family or even the person you believe yourself to be. In those moments you ask, “Why did I do that?” and if the answer damages your self-esteem, a justification is required. You feel like a bag of sand has ruptured in your head, and you want relief. You can see the proof in an MRI scan of someone presented with political opinions which conflict with their own. The brain scans of a person shown statements which oppose their political stance show the highest areas of the cortex, the portions responsible for providing rational thought, get less blood until another statement is presented which confirms their beliefs. Your brain literally begins to shut down when you feel your ideology is threatened. Try it yourself. Watch a pundit you hate for 15 minutes. Resist the urge to change the channel. Don’t complain to the person next to you. Don’t get online and rant. Try and let it go. You will find this is excruciatingly difficult.

The blog post is here.

Note: How do you perceive complex patients or those who do not respond well to psychotherapy?

Monday, April 11, 2016

The Sunk Cost Fallacy

David McRaney
You Are Not So Smart Blog: A Celebration of Self Delusion
Originally published March 25, 2011 (and still relevant)

The Misconception: You make rational decisions based on the future value of objects, investments and experiences.

The Truth: Your decisions are tainted by the emotional investments you accumulate, and the more you invest in something the harder it becomes to abandon it.

The blog post is here.

Note: This heuristic may be one reason psychologists hang onto patients longer than required.

Sunday, February 14, 2016

Why people fall for pseudoscience

By Sian Townson
The Guardian
Originally published January 26, 2016

Pseudoscience is everywhere – on the back of your shampoo bottle, on the ads that pop up in your Facebook feed, and most of all in the Daily Mail. Bold statements in multi-syllabic scientific jargon give the false impression that they’re supported by laboratory research and hard facts.

Magnetic wristbands improve your sporting performance, carbs make you fat, and just about everything gives you cancer.

Of course, we scientists accept that sometimes people believe things we don’t agree with. That’s fine. Science is full of people who disagree with one another. If we all thought exactly the same way, we could retire and call the status quo truth.

But when people think snake oil is backed up by science, we have to challenge that. So why is it so hard?

The article is here.

Friday, December 18, 2015

The Centrality of Belief and Reflection in Knobe Effect Cases: A Unified Account of the Data

By Mark Alfano, James R. Beebe, and Brian Robinson
The Monist
April 2012

Abstract

Recent work in experimental philosophy has shown that people are more likely to attribute intentionality, knowledge, and other psychological properties to someone who causes a bad side-effect than to someone who causes a good one. We argue that all of these asymmetries can be explained in terms of a single underlying asymmetry involving belief attribution because the belief that one’s action would result in a certain side-effect is a necessary component of each of the psychological attitudes in question. We argue further that this belief-attribution asymmetry is rational because it mirrors a belief-formation asymmetry and that the belief-formation asymmetry is also rational because it is more useful to form some beliefs than others.

Tuesday, May 26, 2015

The Most Depressing Discovery About the Brain, Ever

Say goodnight to the dream that education, journalism, scientific evidence, or reason can provide the tools that people need in order to make good decisions.

By Marty Kaplan
Alternet.org
Originally posted September 16, 2013

Yale law school professor Dan Kahan’s new research paper is called “Motivated Numeracy and Enlightened Self-Government,” but for me a better title is the headline on science writer Chris Mooney’s piece about it in Grist:  “Science Confirms: Politics Wrecks Your Ability to Do Math.”

Kahan conducted some ingenious experiments about the impact of political passion on people’s ability to think clearly.  His conclusion, in Mooney’s words: partisanship “can even undermine our very basic reasoning skills…. [People] who are otherwise very good at math may totally flunk a problem that they would otherwise probably be able to solve, simply because giving the right answer goes against their political beliefs.”

In other words, say goodnight to the dream that education, journalism, scientific evidence, media literacy or reason can provide the tools and information that people need in order to make good decisions.  It turns out that in the public realm, a lack of information isn’t the real problem.  The hurdle is how our minds work, no matter how smart we think we are.  We want to believe we’re rational, but reason turns out to be the ex post facto way we rationalize what our emotions already want to believe.

The entire article is here.

Sunday, May 24, 2015

The Stubborn System of Moral Responsibility

Bruce N. Waller, The Stubborn System of Moral Responsibility, MIT Press, 2015, 294pp.
ISBN 9780262028165.

Reviewed by Seth Shabo, University of Delaware

This book is a spirited and engaging broadside against ordinary belief in moral responsibility. Specifically, Bruce Waller challenges the entrenched belief that people bear the kind of moral responsibility for their conduct that would justify punishing them on the grounds that they deserve it. What needs explaining, in Waller's view, is why so many philosophers continue to defend this orthodoxy in the face of such powerful counterevidence. His proposed explanation encompasses a range of psychological and social factors that powerfully reinforce this belief. These include the animal impulse to strike back when harmed, an impulse that often inhibits deeper reflection into the causes of the offender's conduct; the desire to justify expressions of this strike-back impulse; the broader belief in a just universe in which wrongdoers have retribution coming to them; a heuristic tendency to substitute simpler problems for hard ones (in this case, substituting the question of how we can correctly attribute bad qualities to people for the intractable problem of how people can truly deserve punishment); and the ascendancy of an individualistic, neoliberal political culture that downplays the role of societal conditions in shaping how people turn out.

The entire book review is here.

Monday, May 18, 2015

Why Many Doctors Don't Follow 'Best Practices'

By Anders Kelto
NPR News - All Things Considered
Originally published April 22, 2015

Here is an excerpt:

Imagine, for example, that a healthy, 40-year-old woman walks into your office and asks about a mammogram.

"If that woman were to develop breast cancer or to have breast cancer, you can imagine what might happen to you if you didn't order the test," Wu says. "Maybe you'd get sued."

Doctors often hear stories like this, he says, and that can affect their judgment.

"Emotion and recent events do influence our decision-making," he says. "We are not absolutely rational, decision-making machines."

The entire article is here.

Saturday, May 2, 2015

Free Will and Autonomous Medical Decision-Making

Butkus, Matthew A. 2015. “Free Will and Autonomous Medical Decision-Making.”
Journal of Cognition and Neuroethics 3 (1): 75–119.

Abstract

Modern medical ethics makes a series of assumptions about how patients and their care providers make decisions about forgoing treatment. These assumptions are based on a model of thought and cognition that does not reflect actual cognition—it has substituted an ideal moral agent for a practical one. Instead of a purely rational moral agent, current psychology and neuroscience have shown that decision-making reflects a number of different factors that must be considered when conceptualizing autonomy. Multiple classical and contemporary discussions of autonomy and decision-making are considered and synthesized into a model of cognitive autonomy. Four categories of autonomy criteria are proposed to reflect current research in cognitive psychology and common clinical issues.

The entire article is here.