Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Perception.

Tuesday, March 8, 2016

When are Do-Gooders Treated Badly? Legitimate Power, Role Expectations, and Reactions to Moral Objection in Organizations.

Wellman, Ned; Mayer, David M.; Ong, Madeline; DeRue, D. Scott
Journal of Applied Psychology, February 15, 2016

Abstract

Organization members who engage in “moral objection” by taking a principled stand against ethically questionable activities help to prevent such activities from persisting. Unfortunately, research suggests that they also may be perceived as less warm (i.e., pleasant, nice) than members who comply with ethically questionable procedures. In this article, we draw on role theory to explore how legitimate power influences observers’ responses to moral objection. We argue that individuals expect those high in legitimate power to engage in moral objection, but expect those low in legitimate power to comply with ethically questionable practices. We further propose that these contrasting role expectations influence the extent to which moral objectors are perceived as warm and subjected to social sanctions (i.e., insults, pressure, unfriendly behavior). We test our predictions with 3 experiments. Study 1, which draws on participants’ prior workplace experiences, supports the first section of our mediated moderation model in which the negative association between an actor’s moral objection (vs. compliance) and observers’ warmth perceptions is weaker when the actor is high rather than low in legitimate power and this effect is mediated by observers’ met role expectations. Study 2, an online experiment featuring a biased hiring task, reveals that the warmth perceptions fostered by the Behavior × Legitimate Power interaction influence observers’ social sanctioning intentions. Finally, Study 3, a laboratory experiment which exposes participants to unethical behavior in a virtual team task, replicates Study 2’s findings and extends the results to actual as well as intended social sanctions.

The article is here.

Tuesday, February 23, 2016

American attitudes toward nudges

Janice Y. Jung and Barbara A. Mellers
Judgment and Decision Making
Vol. 11, No. 1, January 2016, pp. 62-74

To successfully select and implement nudges, policy makers need a psychological understanding of who opposes nudges, how they are perceived, and when alternative methods (e.g., forced choice) might work better. Using two representative samples, we examined four factors that influence U.S. attitudes toward nudges – types of nudges, individual dispositions, nudge perceptions, and nudge frames. Most nudges were supported, although opt-out defaults for organ donations were opposed in both samples. “System 1” nudges (e.g., defaults and sequential orderings) were viewed less favorably than “System 2” nudges (e.g., educational opportunities or reminders). System 1 nudges were perceived as more autonomy threatening, whereas System 2 nudges were viewed as more effective for better decision making and more necessary for changing behavior. People with greater empathetic concern tended to support both types of nudges and viewed them as the “right” kind of goals to have. Individualists opposed both types of nudges, and conservatives tended to oppose both types. Reactant people and those with a strong desire for control opposed System 1 nudges. To see whether framing could influence attitudes, we varied the description of the nudge in terms of the target (Personal vs. Societal) and the reference point for the nudge (Costs vs. Benefits). Empathetic people were more supportive when framing highlighted societal costs or benefits, and reactant people were more opposed to nudges when frames highlighted the personal costs of rejection.

The article is here.

Thursday, January 28, 2016

Binocularity in bioethics—and beyond

Earp, B. D., & Hauskeller, M. (in press). Binocularity in bioethics—and beyond. American Journal of Bioethics.

Abstract

Parens (2015) defends a habit of thinking he calls “binocularity,” which involves switching between analytical lenses (much as one must switch between seeing the duck vs. the rabbit in Wittgenstein’s famous example). Applying this habit of thought to a range of debates in contemporary bioethics, Parens urges us to acknowledge the ways in which our personal intuitions and biases shape our thinking about contentious moral issues. In this review of Parens’s latest book, we reflect on our own position as participants in the so-called “enhancement” debates, where a binocular approach could be especially useful. In particular, we consider the case of “love drugs,” a subject on which we have sometimes reached very different conclusions. We finish with an analogy to William James’s (1907) distinction between “tender-minded” rationalists and “tough-minded” empiricists, and draw some general lessons for improving academic discourse.

The paper is here.

Tuesday, January 26, 2016

A Therapist’s Fib

By Jonathan Schiff
The New York Times
Originally published January

An old New Yorker cartoon features a man suspended upside down from the ceiling, like a stalactite. A psychiatrist explains to the wife that the first objective is to convince the man that he is a stalagmite.

Funny — but it invites a serious question: Is it ever justified for a clinician to help a client to believe in a fiction?

The brief article is here.

Note: Is it ever ethical to lie to a patient?

Wednesday, December 30, 2015

Why natural science needs phenomenological philosophy

Steven M. Rosen
Prog Biophys Mol Biol. 2015 Jul 2. pii: S0079-6107(15)00083-8.

Abstract

Through an exploration of theoretical physics, this paper suggests the need for regrounding natural science in phenomenological philosophy. To begin, the philosophical roots of the prevailing scientific paradigm are traced to the thinking of Plato, Descartes, and Newton. The crisis in modern science is then investigated, tracking developments in physics, science's premier discipline. Einsteinian special relativity is interpreted as a response to the threat of discontinuity implied by the Michelson-Morley experiment, a challenge to classical objectivism that Einstein sought to counteract. We see that Einstein's efforts to banish discontinuity ultimately fall into the "black hole" predicted in his general theory of relativity. The unavoidable discontinuity that haunts Einstein's theory is also central to quantum mechanics. Here too the attempt has been made to manage discontinuity, only to have this strategy thwarted in the end by the intractable problem of quantum gravity. The irrepressible discontinuity manifested in the phenomena of modern physics proves to be linked to a merging of subject and object that flies in the face of Cartesian philosophy. To accommodate these radically non-classical phenomena, a new philosophical foundation is called for: phenomenology. Phenomenological philosophy is elaborated through Merleau-Ponty's concept of depth and is then brought into focus for use in theoretical physics via qualitative work with topology and hypercomplex numbers. In the final part of this paper, a detailed summary is offered of the specific application of topological phenomenology to quantum gravity that was systematically articulated in The Self-Evolving Cosmos (Rosen, 2008a).

The article is here.

Monday, December 7, 2015

Everyone Else Could Be a Mindless Zombie

By Kurt Gray
Time Magazine
Originally posted November 17, 2015

Here is an excerpt:

Our research reveals that whether something can think or feel is mostly a matter of perception, which can lead to bizarre reversals. Objectively speaking, humans are smarter than cats, and yet people treat their pets like people and the homeless like objects. Objectively speaking, pigs are smarter than baby seals, but people will scream about seal clubbing while eating a BLT.

That minds are perceived spells trouble for political harmony. When people see minds differently in chickens, fetuses, and enemy combatants, it leads to conflicts about vegetarianism, abortion, and torture. Yet even as it fuels these debates, mind perception can make our moral opponents seem more human and less monstrous. With abortion, both liberals and conservatives agree that killing babies is immoral, and disagree only about whether a fetus is a baby or a mass of mindless cells.

The entire article is here.

Sunday, November 15, 2015

Morality takes two: Dyadic morality and mind perception.

Gray, Kurt; Wegner, Daniel M.
Mikulincer, Mario (Ed); Shaver, Phillip R. (Ed), (2012). The social psychology of morality: Exploring the causes of good and evil. Herzliya series on personality and social psychology., (pp. 109-127). Washington, DC, US: American Psychological Association

Abstract

We propose that all moral acts are (at least implicitly) dyadic, involving two different people, one as a moral agent and one as a moral patient. The idea that people cleave the moral world into agents and patients is as old as Aristotle (Freeland, 1985), but out of this simple claim—that morality takes two—grows a theory of morality with a host of implications for psychology and the real world. Dyadic morality can help explain, for instance, why victims escape blame, why people believe in God, why people harm saints, why some advocate torture, and why those who do good become more physically powerful. In this chapter, we explore the idea of dyadic morality, its extensions and implications. In particular, we examine the following four tenets of dyadic morality: 1. Morality involves a moral agent helping or harming a moral patient. 2. Morality and mind perception are linked: Agency is tied to moral agents; experience is tied to moral patients. 3. Morality requires a complete dyad: An isolated moral agent creates a moral patient; an isolated moral patient creates a moral agent. 4. Morality requires two different people as agent and patient, which means that people are perceived as either agents or patients, both in moral acts and more generally, a phenomenon called moral typecasting. We first explore the link between mind and morality, then examine dyadic help and harm, then explain how moral dyads complete themselves, and finally consider moral typecasting. Why start first with mind perception? Perceptions of mind are tightly bound to moral judgments, and as we show, the structure of mind perception is split into two complementary parts that correspond to the two parts of morality. Perceptions of mind underlie the most fundamental of moral decisions: who deserves moral rights and who deserves moral responsibility.

A copy of the chapter is here.

Friday, October 2, 2015

What Is Quantum Cognition, and How Is It Applied to Psychology?

By Jerome Busemeyer and Zheng Wang
Current Directions in Psychological Science 
June 2015 vol. 24 no. 3 163-169

Abstract

Quantum cognition is a new research program that uses mathematical principles from quantum theory as a framework to explain human cognition, including judgment and decision making, concepts, reasoning, memory, and perception. This research is not concerned with whether the brain is a quantum computer. Instead, it uses quantum theory as a fresh conceptual framework and a coherent set of formal tools for explaining puzzling empirical findings in psychology. In this introduction, we focus on two quantum principles as examples to show why quantum cognition is an appealing new theoretical direction for psychology: complementarity, which suggests that some psychological measures have to be made sequentially and that the context generated by the first measure can influence responses to the next one, producing measurement order effects, and superposition, which suggests that some psychological states cannot be defined with respect to definite values but, instead, that all possible values within the superposition have some potential for being expressed. We present evidence showing how these two principles work together to provide a coherent explanation for many divergent and puzzling phenomena in psychology.
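The order effects the authors describe fall out of simple linear algebra: if two survey questions are modeled as non-commuting projectors, the probability of answering “yes, then yes” depends on which question comes first. Here is a minimal sketch of that idea; the particular angles and the initial state are illustrative choices of ours, not values from the article.

```python
import numpy as np

def projector(theta):
    """Rank-1 projector onto the unit vector at angle theta (a 'yes' answer)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])   # initial belief state (unit vector)
P_A = projector(np.pi / 6)   # question A
P_B = projector(np.pi / 3)   # question B

# Sequential 'yes' probabilities: project, then project again (Lüders rule).
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2   # -> 0.5625
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2   # -> 0.1875

print(p_A_then_B, p_B_then_A)
```

Because `P_A` and `P_B` do not commute, the two orderings give different probabilities, which is the measurement-order effect the abstract attributes to complementarity.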

The entire article is here.

Monday, September 21, 2015

Public trust has dwindled in America with rise in income inequality

Association for Psychological Science
Originally published September 4, 2015

Here is an excerpt:

Trust in others and confidence in societal institutions are at their lowest point in over three decades, analyses of national survey data reveal. The findings are forthcoming in Psychological Science, a journal of the Association for Psychological Science.

"Compared to Americans in the 1970s-2000s, Americans in the last few years are less likely to say they can trust others, and are less likely to believe that institutions such as government, the press, religious organizations, schools, and large corporations are 'doing a good job,'" explains psychological scientist and lead researcher Jean M. Twenge of San Diego State University.

The entire article is here.

Moral Perception

By Ana P. Gantman and Jay J. Van Bavel
Trends in Cognitive Sciences. Forthcoming

Abstract

Based on emerging research, we propose that human perception is preferentially attuned to moral content. We describe how moral concerns enhance detection of morally relevant stimuli, and both command and direct attention. These perceptual processes, in turn, have important consequences for moral judgment and behavior.

The entire paper is here.

Tuesday, July 14, 2015

Consciousness has less control than believed

San Francisco State University
Press Release
Originally released June 23, 2015

Consciousness -- the internal dialogue that seems to govern one's thoughts and actions -- is far less powerful than people believe, serving as a passive conduit rather than an active force that exerts control, according to a new theory proposed by an SF State researcher.

Associate Professor of Psychology Ezequiel Morsella's "Passive Frame Theory" suggests that the conscious mind is like an interpreter helping speakers of different languages communicate.

"The interpreter presents the information but is not the one making any arguments or acting upon the knowledge that is shared," Morsella said. "Similarly, the information we perceive in our consciousness is not created by conscious processes, nor is it reacted to by conscious processes. Consciousness is the middle-man, and it doesn't do as much work as you think."

Morsella and his coauthors' groundbreaking theory, published online on June 22 by the journal Behavioral and Brain Sciences, contradicts intuitive beliefs about human consciousness and the notion of self.

The entire press release is here.

Thursday, March 12, 2015

Why We Ignore the Obvious: The Psychology of Willful Blindness

By Maria Popova
BrainPickings.org

Here is an excerpt:

The concept of “willful blindness,” Heffernan explains, comes from the law and originates from legislation passed in the 19th century — it’s the somewhat counterintuitive idea that you’re responsible “if you could have known, and should have known, something that instead you strove not to see.” What’s most uneasy-making about the concept is the implication that it doesn’t matter whether the avoidance of truth is conscious. This basic mechanism of keeping ourselves in the dark, Heffernan argues, plays out in just about every aspect of life, but there are things we can do — as individuals, organizations, and nations — to lift our blinders before we walk into perilous situations that later produce the inevitable exclamation: How could I have been so blind?

The entire blog post is here.

Thursday, March 5, 2015

The Monstrous Cruelty of a Just World

It’s easy to want to believe that everything happens for a reason, but how does that affect the way we treat the people the universe has punished?

By Nicholas Hune-Brown
Hazlitt Blog
Originally published January 22, 2015

In the 1960s, a social psychologist named Melvin Lerner noticed something troubling about his colleagues. The therapists at his hospital—generally such nice, sympathetic people—seemed to be acting heartlessly towards some of their mentally ill patients, pushing and prodding them during sessions, describing the vulnerable and disturbed as shiftless manipulators. Why were these professionals, generally so kind and compassionate, treating patients as if they somehow deserved their illness?

The entire blog post is here.

Thursday, February 12, 2015

Dimensions of Moral Emotions

By Kurt Gray and Daniel M. Wegner
Emotion Review Vol. 3, No. 3 (July 2011) 258–260

Abstract

Anger, disgust, elevation, sympathy, relief. If the subjective experience of each of these emotions is the same whether elicited by moral or nonmoral events, then what makes moral emotions unique? We suggest that the configuration of moral emotions is special—a configuration given by the underlying structure of morality. Research suggests that people divide the moral world along the two dimensions of valence (help/harm) and moral type (agent/patient). The intersection of these two dimensions gives four moral exemplars—heroes, villains, victims and beneficiaries—each of which elicits unique emotions. For example, victims (harm/patient) elicit sympathy and sadness. Dividing moral emotions into these four quadrants provides predictions about which emotions reinforce, oppose and complement each other.

The entire article is here.

Thursday, December 11, 2014

Moral Evaluations Depend Upon Mindreading Moral Occurrent Beliefs

By Clayton R. Critcher, Erik G. Helzer, David Tannenbaum, and David A. Pizarro

Abstract

People evaluate the moral character of others not merely based on what they do, but why they do it. Because an agent’s state of mind is not directly observable, people typically engage in mindreading—attempts at inferring mental states—when forming moral evaluations. The present paper identifies a heretofore unstudied focus of mindreading, moral occurrent beliefs—the cognitions (e.g., thoughts, beliefs, principles, concerns, rules) accessible in an agent’s mind while confronting a morally-relevant decision that could provide a moral justification for a particular course of action. Whereas previous mindreading research has examined how people “reason back” to make sense of why agents behaved as they did, we instead ask how mindread occurrent beliefs (MOBs) constrain moral evaluations for an agent’s subsequent actions. Our studies distinguish three accounts of how MOBs influence moral evaluations, show that people rely on MOBs spontaneously (instead of merely when experimental measures draw attention to them), and identify non-moral cues (e.g., whether the situation demands a quick decision) that guide MOBs. Implications for theory of mind, moral psychology, and social cognition are discussed.

The entire paper is here.

Thursday, November 27, 2014

How Your Brain Decides Without You

In a world full of ambiguity, we see what we want to see.

By Tom Vanderbilt
Nautilus
Originally published on November 6, 2014

Here is an excerpt:

The structure of the brain, she notes, is such that there are many more intrinsic connections between neurons than there are connections that bring sensory information from the world. From that incomplete picture, she says, the brain is “filling in the details, making sense out of ambiguous sensory input.” The brain, she says, is an “inference generating organ.” She describes an increasingly well-supported working hypothesis called predictive coding, according to which perceptions are driven by your own brain and corrected by input from the world. There would otherwise simply be too much sensory input to take in. “It’s not efficient,” she says. “The brain has to find other ways to work.” So it constantly predicts. When “the sensory information that comes in does not match your prediction,” she says, “you either change your prediction—or you change the sensory information that you receive.”
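The prediction-and-correction loop described in the excerpt can be caricatured in a few lines: the “brain” carries a running prediction and nudges it by a fraction of each prediction error. This is a toy illustration with a single scalar signal and a fixed learning rate, a simplification of ours rather than a model from the article.

```python
import random

random.seed(0)
true_signal = 5.0        # the stable feature of the world being perceived
estimate = 0.0           # the brain's running prediction
learning_rate = 0.1      # how strongly errors correct the prediction

for _ in range(500):
    observation = true_signal + random.gauss(0, 1.0)  # noisy sensory input
    prediction_error = observation - estimate         # mismatch with prediction
    estimate += learning_rate * prediction_error      # update the internal model

print(estimate)  # hovers near 5.0
```

The point of the sketch is the direction of flow: perception here is the `estimate`, generated internally and merely corrected by the error signal, rather than rebuilt from raw input on every step.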

Sunday, November 23, 2014

The Philosophical Implications of the Urge to Urinate

The state of our body affects how we think the world works

by Daniel Yudkin
Scientific American
Originally published November 4, 2014

If one thing’s for sure, it’s that I decided what breakfast cereal to eat this morning. I opened the cupboard, I perused the options, and when I ultimately chose the Honey Bunches of Oats over the Kashi Good Friends, it came from a place of considered judgment, free from external constraints and predetermined laws.

Or did it? This question—about how much people are in charge of their own actions—is among the most central to the human condition. Do we have free will? Are we in control of our destiny? Do we choose the proverbial Honey Bunches of Oats? Or does the cereal—or some other mysterious force in the vast and unknowable universe—choose us?

The entire article is here.

Saturday, November 1, 2014

Are We Really Conscious?

By Michael Graziano
The New York Times Sunday Review
Originally published October 10, 2014

Here is an excerpt:

The brain builds models (or complex bundles of information) about items in the world, and those models are often not accurate. From that realization, a new perspective on consciousness has emerged in the work of philosophers like Patricia S. Churchland and Daniel C. Dennett. Here’s my way of putting it:

How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong. The machinery is computing an elaborate story about a magical-seeming property. And there is no way for the brain to determine through introspection that the story is wrong, because introspection always accesses the same incorrect information.

The entire article is here.

Sunday, September 7, 2014

Responsibility and Punishment

Katrina Sifferd interviewed by Richard Marshall
3:AM Magazine
Originally posted

Here is an excerpt:

KS: Well, for one, we won’t be able to make responsibility assessments. When you show a jury a picture of a brain lighting up in such-and-such a way it means absolutely nothing to them until somebody translates the scientific data into folk psychological terms. Expert witnesses in a trial cannot just point to a dark spot on a PET scan and sit down: the scientific data is irrelevant to the defendant’s culpability until it is translated into folk concepts that push and pull responsibility assessments in different directions. For example, an expert might note that the dark spot is a brain tumor likely to result in a severe lack of impulse control, which the jury might feel undermines attribution of the highest levels of criminal intent.

I think it is interesting that some scientific data actually seems to push responsibility assessments in both directions, or in ways unanticipated by the side offering the evidence in a criminal trial. In one high profile capital sentencing hearing, the defense offered neuroscientific evidence of psychopathy in an attempt to prove diminished capacity (and thus a mitigating factor); but instead, the jury seemed to think the data made the defendant more culpable for his actions, and sentenced him to death. Is a person whose brain shows clear signs of psychopathy less responsible because of their abnormal brain function or more responsible because their brain is abnormal (and thus they are likely to be dangerous in the future)? I think it depends on the way in which the brain is dysfunctional, and maybe the reasons why it is dysfunctional. There is a lot of important work to be done making reliable translations of neuroscientific data into folk descriptions relevant to responsibility.

(cut)

KS: Different theories of punishment seem to emphasize different aspects of our cognitive capacities as most important to culpability. Bill and I have argued that deontological accounts which postulate emotional response or empathy as crucial to moral knowledge and decision-making might be more likely to excuse all psychopaths because of their apparent lack of relevant affective data. Some deontological theorists believe that a lack of appropriate emotional response translates into a wholesale lack of legal rationality. A consequentialist theory of punishment, however, may be more likely to hold some psychopaths responsible, because it emphasizes the need for rational capacities as a means to grasp and reflect upon the consequences of action given one’s goals and relevant social norms (a skill successful psychopaths may possess), and not the way one feels about these consequences.

The entire interview is here.

Friday, August 15, 2014

Our brains judge a face's trustworthiness, even when we can’t see it

Science Daily
Originally posted August 5, 2014

Our brains are able to judge the trustworthiness of a face even when we cannot consciously see it, a team of scientists has found. Their findings, which appear in the Journal of Neuroscience, shed new light on how we form snap judgments of others.

"Our findings suggest that the brain automatically responds to a face's trustworthiness before it is even consciously perceived," explains Jonathan Freeman, an assistant professor in New York University's Department of Psychology and the study's senior author.

The entire article is here.