Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, April 1, 2023

The effect of reward prediction errors on subjective affect depends on outcome valence and decision context

Forbes, L., & Bennett, D. (2023, January 20). 
https://doi.org/10.31234/osf.io/v86bx

Abstract

The valence of an individual’s emotional response to an event is often thought to depend on their prior expectations for the event: better-than-expected outcomes produce positive affect and worse-than-expected outcomes produce negative affect. In recent years, this hypothesis has been instantiated within influential computational models of subjective affect that assume the valence of affect is driven by reward prediction errors. However, there remain a number of open questions regarding this association. In this project, we investigated the moderating effects of outcome valence and decision context (Experiment 1: free vs. forced choices; Experiment 2: trials with versus trials without counterfactual feedback) on the effects of reward prediction errors on subjective affect. We conducted two large-scale online experiments (N = 300 in total) of general-population samples recruited via Prolific to complete a risky decision-making task with embedded high-resolution sampling of subjective affect. Hierarchical Bayesian computational modelling revealed that the effects of reward prediction errors on subjective affect were significantly moderated by both outcome valence and decision context. Specifically, after accounting for concurrent reward amounts we found evidence that only negative reward prediction errors (worse-than-expected outcomes) influenced subjective affect, with no significant effect of positive reward prediction errors (better-than-expected outcomes). Moreover, these effects were only apparent on trials in which participants made a choice freely (but not on forced-choice trials) and when counterfactual feedback was absent (but not when counterfactual feedback was present). These results deepen our understanding of the effects of reward prediction errors on subjective affect.

From the General Discussion section

Our findings were twofold: first, we found that after accounting for the effects of concurrent reward amounts (gains/losses of points) on affect, the effects of RPEs were subtler and more nuanced than has been previously appreciated. Specifically, contrary to previous research, we found that only negative RPEs influenced subjective affect within our task, with no discernible effect of positive RPEs.  Second, we found that even the effect of negative RPEs on affect was dependent on the decision context within which the RPEs occurred.  We manipulated two features of decision context (Experiment 1: free-choice versus forced-choice trials; Experiment 2: trials with counterfactual feedback versus trials without counterfactual feedback) and found that both features of decision context significantly moderated the effect of negative RPEs on subjective affect. In Experiment 1, we found that negative RPEs only influenced subjective affect in free-choice trials, with no effect of negative RPEs in forced-choice trials. In Experiment 2, we similarly found that negative RPEs only influenced subjective affect when counterfactual feedback was absent, with no effect of negative RPEs when counterfactual feedback was present. We unpack and discuss each of these results separately below.
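Editor's note: the asymmetric RPE-to-affect mapping the authors describe can be sketched as a toy model in the spirit of decaying-sum momentary-affect models. Parameter values and the function itself are illustrative assumptions, not the authors' fitted model:

```python
def rpe_affect_sketch(outcomes, expected, w_reward=0.4,
                      w_rpe_neg=0.5, w_rpe_pos=0.0, gamma=0.7):
    """Toy momentary-affect model: affect on each trial is a decaying sum of
    recent reward amounts plus RPE terms, with SEPARATE weights for positive
    and negative RPEs (the asymmetry reported above). Setting w_rpe_pos=0
    mimics the finding that only worse-than-expected outcomes moved affect.
    All parameter values are illustrative, not the paper's estimates."""
    affect = []
    for t in range(len(outcomes)):
        total = 0.0
        for j in range(t + 1):
            decay = gamma ** (t - j)                  # older trials count less
            rpe = outcomes[j] - expected[j]           # reward prediction error
            rpe_term = w_rpe_neg * rpe if rpe < 0 else w_rpe_pos * rpe
            total += decay * (w_reward * outcomes[j] + rpe_term)
        affect.append(total)
    return affect
```

On this sketch, a worse-than-expected win lowers momentary affect relative to an expected win of the same size, while a better-than-expected win adds nothing beyond the reward itself.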


Editor's synopsis: Consistent with a large body of other research, "bad" is stronger than "good" in shaping appraisals and decisions — at least in the context of free (not forced) choice with no counterfactual information available.

Important data points when working with patients who are making large life decisions.

Sunday, June 19, 2022

Anti-Black Racism as a Chronic Condition

Nneka Sederstrom and Tamika Lasege,
in A Critical Moment in Bioethics: Reckoning with Anti-Black Racism through Intergenerational Dialogue, ed. Faith E. Fletcher et al.,
Special Report, Hastings Center Report 52, no. 2 (2022): S24-S29.

Abstract

Because America has a foundation of anti-Black racism, being born Black in this nation yields an identity that breeds the consequences of a chronic condition. This article highlights several ways in which medicine and clinical ethics, despite the former's emphasis on doing no harm and the latter's emphasis on nonmaleficence, fail to address or acknowledge some of the key ways in which physicians can—and do—harm patients of color. To understand harm in a way that can provide real substance for ethical standards in the practice of medicine, physicians need to think about how treatment decisions are constrained by a patient's race. The color of one's skin can and does negatively affect the quality of a person's diagnosis, promoted care plan, and prognosis. Yet racism in medicine and bioethics persists—because a racist system serves the interests of the dominant caste, White people. As correctives to this system, the authors propose several antiracist commitments physicians or ethicists can make.

(cut)

Here are some commitments to add to a newly revised Hippocratic oath: We shall stop denying that racism exists in medicine. We shall face the reality that we fail to train and equip our clinicians with the ability to effectively make informed clinical decisions using the reality of how race impacts health outcomes. We shall address the lack of the declaration of racism as a bioethics priority and work to train ethicists on how to engage in antiracism work. We shall own the effects of racism at every level in health care and the academy. Attempting to talk about everything except racism is another form of denial, privilege, and power that sustains racism. We will not have conversations about disproportionally high rates of “minority” housing insecurity, food scarcity, noncompliance with treatment plans, “drug-seeking behavior,” complex social needs, or “disruptive behavior” or rely on any other terms that are disguised proxies for racism without explicitly naming racism. As ethicists, we will not engage in conversations around goal setting, value judgments, benefits and risks of interventions, autonomy and capacity, or any other elements around the care of patients without naming racism.

So where do we go from here? How do we address the need to decolonize medicine and bioethics? When do we stop being inactive and start being proactive? It starts upstream with improving the medical education and bioethics curricula to accurately and thoroughly inform students on the social and biological sciences of human beings who are not White in America. Then, and only then, will we breed a generation of race-conscious clinicians and ethicists who can understand and interpret the historic inequities in our system and ultimately be capable of providing medical care and ethical analysis that reflect the diversity of our country. Clinical ethics program development must include antiracism training to develop clinical ethicists who have the skills to recognize and address racism at the bedside in clinical ethics consultation. It requires changing the faces in the field and addressing the extreme lack of racial diversity in bioethics. Increasing the number of clinicians of color in all professions within medicine, but especially the numbers of physicians, advance practice providers, and clinical ethicists, is imperative to the goal of improving patient outcomes for Black and brown populations.

Thursday, July 8, 2021

Free Will and Neuroscience: Decision Times and the Point of No Return

Alfred Mele
In Free Will, Causality, & Neuroscience
Chapter 4

Here are some excerpts:

Decisions to do things, as I conceive of them, are momentary actions of forming an intention to do them. For example, to decide to flex my right wrist now is to perform a (nonovert) action of forming an intention to flex it now (Mele 2003, ch. 9). I believe that Libet understands decisions in the same way. Some of our decisions and intentions are for the nonimmediate future and others are not. I have an intention today to fly to Brussels three days from now, and I have an intention now to click my “save” button now. The former intention is aimed at action three days in the future. The latter intention is about what to do now. I call intentions of these kinds, respectively, distal and proximal intentions (Mele 1992, pp. 143–44, 158, 2009, p. 10), and I make the same distinction in the sphere of decisions to act. Libet studies proximal intentions (or decisions or urges) in particular.

(cut)

Especially in the case of the study now under discussion, readers unfamiliar with Libet-style experiments may benefit from a short description of my own experience as a participant in such an experiment (see Mele 2009, pp. 34–36). I had just three things to do: watch a Libet clock with a view to keeping track of when I first became aware of something like a proximal urge, decision, or intention to flex; flex whenever I felt like it (many times over the course of the experiment); and report, after each flex, where I believed the hand was on the clock at the moment of first awareness. (I reported this belief by moving a cursor to a point on the clock. The clock was very fast; it made a complete revolution in about 2.5 seconds.) Because I did not experience any proximal urges, decisions, or intentions to flex, I hit on the strategy of saying “now!” silently to myself just before beginning to flex. This is the mental event that I tried to keep track of with the assistance of the clock. I thought of the “now!” as shorthand for the imperative “flex now!” – something that may be understood as an expression of a proximal decision to flex.

Why did I say “now!” exactly when I did? On any given trial, I had before me a string of equally good moments for a “now!”-saying, and I arbitrarily picked one of the moments. But what led me to pick the moment I picked? The answer offered by Schurger et al. is that random noise crossed a decision threshold then. And they locate the time of the crossing very close to the onset of muscle activity – about 100 ms before it (pp. E2909, E2912). They write: “The reason we do not experience the urge to move as having happened earlier than about 200 ms before movement onset [referring to Libet’s participants’ reported W time] is simply because, at that time, the neural decision to move (crossing the decision threshold) has not yet been made” (E2910). If they are right, this is very bad news for Libet. His claim is that, in his experiments, decisions are made well before the average reported W time: −200 ms. (In a Libet-style experiment conducted by Schurger et al., average reported W time is −150 ms [p. E2905].) As I noted, if relevant proximal decisions are not made before W, Libet’s argument for the claim that they are made unconsciously fails.
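Editor's note: the Schurger et al. account — that the "decision" is simply the moment accumulated noise first crosses a threshold — can be illustrated with a minimal leaky stochastic accumulator. Parameter values here are illustrative assumptions, not the paper's fits:

```python
import random

def noise_to_threshold(threshold=1.0, leak=0.1, drift=0.05,
                       noise_sd=0.3, dt=0.001, max_t=10.0, seed=1):
    """Minimal leaky stochastic accumulator in the spirit of Schurger et al.:
    activity drifts slowly upward, leaks back toward baseline, and is buffeted
    by noise. The 'neural decision to move' is just the first moment the
    fluctuating signal crosses threshold. Returns that time in seconds, or
    None if no crossing occurs within max_t. All parameters are illustrative."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while t < max_t:
        # Euler-Maruyama step: deterministic drift/leak plus Gaussian noise
        x += (drift - leak * x) * dt + noise_sd * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
        if x >= threshold:
            return t
    return None
```

Because the crossing moment is set by noise, repeated runs with different seeds yield widely scattered "decision" times — which is the point: nothing decision-like happens until the threshold is reached.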

Tuesday, August 4, 2020

When a Patient Regrets Having Undergone a Carefully and Jointly Considered Treatment Plan, How Should Her Physician Respond?

L. V. Selby and others
AMA J Ethics. 2020;22(5):E352-357.
doi: 10.1001/amajethics.2020.352.

Abstract

Shared decision making is best utilized when a decision is preference sensitive. However, a consequence of choosing between one of several reasonable options is decisional regret: wishing a different decision had been made. In this vignette, a patient chooses mastectomy to avoid radiotherapy. However, postoperatively, she regrets the more disfiguring operation and wishes she had picked the other option: lumpectomy and radiation. Although the physician might view decisional regret as a failure of shared decision making, the physician should reflect on the process by which the decision was made. If the patient’s wishes and values were explored and the decision was made in keeping with those values, decisional regret should be viewed as a consequence of decision making, not necessarily as a failure of shared decision making.

(cut)

Commentary

This case vignette highlights decisional regret, which is one of the possible consequences of the patient decision-making process when there are multiple treatment options available. Although the process of shared decision making, which appears to have been carried out in this case, is utilized to help guide the patient and the physician to come to a mutually acceptable and optimal health care decision, it clearly does not always obviate the risk of a patient’s regretting that decision after treatment. Ironically, the patient might end up experiencing more regret after participating in a decision-making process in which more rather than fewer options are presented and in which the patient perceives the process as collaborative rather than paternalistic. For example, among men with prostate cancer, those with lower levels of decisional involvement had lower levels of decisional regret. We argue that decisional regret does not mean that shared decision making is not best practice, even though it can result in patients being reminded of their role in the decision and associated personal regret with that decision.


Thursday, May 7, 2020

What Is 'Decision Fatigue' and How Does It Affect You?

Rachel Fairbank
LifeHacker
Originally published April 14, 2020

Here is an excerpt:

Too many decisions result in emotional and mental strain

“These are legitimately difficult decisions,” Fischhoff says, adding that people shouldn’t feel bad about struggling with them. “Feeling bad is adding insult to injury,” he says.

This added complexity to our decisions is leading to decision fatigue, which is the emotional and mental strain that comes when we are forced to make too many choices. Decision fatigue is the reason why thinking through a decision is harder when we are stressed or tired.

“These are difficult decisions because the stakes are often really high, while we are required to master unfamiliar information,” Fischhoff says.

But if all of this sounds like too much, there are actions we can take to reduce decision fatigue. For starters, it’s best to minimize the number of small decisions you make in a day, such as what to eat for dinner or what to wear. The fewer small decisions you have to make, the more bandwidth you’ll have for the bigger ones.

For this particular crisis, there are a few more steps you can take, in order to reduce your decision fatigue.


Tuesday, January 2, 2018

The Neuroscience of Changing Your Mind

Bret Stetka
Scientific American
Originally published on December 7, 2017

Here are two excerpts:

Scientists have long accepted that our ability to abruptly stop or modify a planned behavior is controlled via a single region within the brain’s prefrontal cortex, an area involved in planning and other higher mental functions. By studying other parts of the brain in both humans and monkeys, however, a team from Johns Hopkins University has now concluded that last-minute decision-making is a lot more complicated than previously known, involving complex neural coordination among multiple brain areas. The revelations may help scientists unravel certain aspects of addictive behaviors and understand why accidents like falls grow increasingly common as we age, according to the Johns Hopkins team.

(cut)

Tracking these eye movements and neural action let the researchers resolve the very confusing question of what brain areas are involved in these split-second decisions, says Vanderbilt University neuroscientist Jeffrey Schall, who was not involved in the research. “By combining human functional brain imaging with nonhuman primate neurophysiology, [the investigators] weave together threads of research that have too long been separate strands,” he says. “If we can understand how the brain stops or prevents an action, we may gain ability to enhance that stopping process to afford individuals more control over their choices.”


Monday, December 18, 2017

Unconscious Patient With 'Do Not Resuscitate' Tattoo Causes Ethical Conundrum at Hospital

George Dvorsky
Gizmodo
Originally published November 30, 2017

When an unresponsive patient arrived at a Florida hospital ER, the medical staff was taken aback upon discovering the words “DO NOT RESUSCITATE” tattooed onto the man’s chest—with the word “NOT” underlined and with his signature beneath it. Confused and alarmed, the medical staff chose to ignore the apparent DNR request—but not without alerting the hospital’s ethics team, who had a different take on the matter.

But with the “DO NOT RESUSCITATE” tattoo glaring back at them, the ICU team was suddenly confronted with a serious dilemma. The patient arrived at the hospital without ID, the medical staff was unable to contact next of kin, and efforts to revive or communicate with the patient were futile. The medical staff had no way of knowing if the tattoo was representative of the man’s true end-of-life wishes, so they decided to play it safe and ignore it.


Tuesday, December 5, 2017

Turning Conservatives Into Liberals: Safety First

John Bargh
The Washington Post
Originally published November 22, 2017

Here is an excerpt:

But if they had instead just imagined being completely physically safe, the Republicans became significantly more liberal — their positions on social attitudes were much more like the Democratic respondents. And on the issue of social change in general, the Republicans’ attitudes were now indistinguishable from the Democrats. Imagining being completely safe from physical harm had done what no experiment had done before — it had turned conservatives into liberals.

In both instances, we had manipulated a deeper underlying reason for political attitudes, the strength of the basic motivation of safety and survival. The boiling water of our social and political attitudes, it seems, can be turned up or down by changing how physically safe we feel.

This is why it makes sense that liberal politicians intuitively portray danger as manageable — recall FDR’s famous Great Depression era reassurance of “nothing to fear but fear itself,” echoed decades later in Barack Obama’s final State of the Union address — and why President Trump and other Republican politicians are instead likely to emphasize the dangers of terrorism and immigration, relying on fear as a motivator to gain votes.

In fact, anti-immigration attitudes are also linked directly to the underlying basic drive for physical safety. For centuries, arch-conservative leaders have often referred to scapegoated minority groups as “germs” or “bacteria” that seek to invade and destroy their country from within. President Trump is an acknowledged germaphobe, and he has a penchant for describing people — not only immigrants but political opponents and former Miss Universe contestants — as “disgusting.”


Sunday, September 17, 2017

The behavioural ecology of irrational behaviours

Philippe Huneman and Johannes Martens
History and Philosophy of the Life Sciences
September 2017, 39:23

Abstract

Natural selection is often envisaged as the ultimate cause of the apparent rationality exhibited by organisms in their specific habitat. Given the equivalence between selection and rationality as maximizing processes, one would indeed expect organisms to implement rational decision-makers. Yet, many violations of the clauses of rationality have been witnessed in various species such as starlings, hummingbirds, amoebas and honeybees. This paper attempts to interpret such discrepancies between economic rationality (defined by the main axioms of rational choice theory) and biological rationality (defined by natural selection). After having distinguished two kinds of rationality we introduce irrationality as a negation of economic rationality by biologically rational decision-makers. Focusing mainly on those instances of irrationalities that can be understood as exhibiting inconsistency in making choices, i.e. as non-conformity of a given behaviour to axioms such as transitivity or independence of irrelevant alternatives, we propose two possible families of Darwinian explanations that may account for these apparent irrationalities. First, we consider cases where natural selection may have been an indirect cause of irrationality. Second, we consider putative cases where violations of rationality axioms may have been directly favored by natural selection. Though the latter cases (prima facie) seem to clearly contradict our intuitive representation of natural selection as a process that maximizes fitness, we argue that they are actually unproblematic; for often, they can be redescribed as cases where no rationality axiom is violated, or as situations where no adaptive solution exists in the first place.


Friday, August 25, 2017

A philosopher who studies life changes says our biggest decisions can never be rational

Olivia Goldhill
Quartz.com
Originally published August 13, 2017

At some point, everyone reaches a crossroads in life: Do you decide to take that job and move to a new country, or stay put? Should you become a parent, or continue your life unencumbered by the needs of children?

Instinctively, we try to make these decisions by projecting ourselves into the future, trying to imagine which choice will make us happier. Perhaps we seek counsel or weigh up evidence. We might write out a pro/con list. What we are doing, ultimately, is trying to figure out whether or not we will be better off working for a new boss and living in Morocco, say, or raising three beautiful children.

This is fundamentally impossible, though, says philosopher L.A. Paul at the University of North Carolina at Chapel Hill, a pioneer in the philosophical study of transformative experiences. Certain life choices are so significant that they change who we are. Before undertaking those choices, we are unable to evaluate them from the perspective and values of our future, changed selves. In other words, your present self cannot know whether your future self will enjoy being a parent or not.


Tuesday, July 11, 2017

Moral Judgments and Social Stereotypes: Do the Age and Gender of the Perpetrator and the Victim Matter?

Qiao Chu, Daniel Grühn
Social Psychological and Personality Science
First Published June 19, 2017

Abstract
We investigated how moral judgments were influenced by (a) the age and gender of the moral perpetrator and victim, (b) the moral judge’s benevolent ageism and benevolent sexism, and (c) the moral judge’s gender. By systematically manipulating the age and gender of the perpetrators and victims in moral scenarios, participants in two studies made judgments about the moral transgressions. We found that (a) people made more negative judgments when the victims were old or female rather than young or male, (b) benevolent ageism influenced people’s judgments about young versus old perpetrators, and (c) people had differential moral expectations of perpetrators who belonged to their same-gender group versus opposite-gender group. The findings suggest that age and gender stereotypes are so salient that they bias people’s moral judgments even when the transgression is undoubtedly intentional and hostile.


Friday, February 10, 2017

Dysfunction Disorder

Joaquin Sapien
Pro Publica
Originally published on January 17, 2017

Here is an excerpt:

The mental health professionals in both cases had been recruited by Montego Medical Consulting, a for-profit company under contract with New York City's child welfare agency. For more than a decade, Montego was paid hundreds of thousands of dollars a year by the city to produce thousands of evaluations in Family Court cases -- of mothers and fathers, spouses and children. Those evaluations were then shared with judges making decisions of enormous sensitivity and consequence: whether a child could stay at home or if they'd be safer in foster care; whether a parent should be enrolled in a counseling program or put on medication; whether parents should lose custody of their children altogether.

In 2012, a confidential review done at the behest of frustrated lawyers and delivered to the administrative judge of Family Court in New York City concluded that the work of the psychologists lined up by Montego was inadequate in nearly every way. The analysis matched roughly 25 Montego evaluations against 20 criteria from the American Psychological Association and other professional guidelines. None of the Montego reports met all 20 criteria. Some met as few as five. The psychologists used by Montego often didn't actually observe parents interacting with children. They used outdated or inappropriate tools for psychological assessments, including one known as a "projective drawing" exercise.

(cut)

Attorneys and psychologists who have worked in Family Court say judges lean heavily on assessments made by psychologists, often referred to as "forensic evaluators." So do judges themselves.

"In many instances, judges rely on forensic evaluators more than perhaps they should," said Jody Adams, who served as a Family Court judge in New York City for nearly 20 years before leaving the bench in 2012. "They should have more confidence in their own insight and judgment. A forensic evaluator's evidence should be a piece of the judge's decision, but not determinative. These are unbelievably difficult decisions; these are not black and white; they are filled with gray areas and they have lifelong consequences for children and their families. So it's human nature to want to look for help where you can get it."


Saturday, December 24, 2016

The Adaptive Utility of Deontology: Deontological Moral Decision-Making Fosters Perceptions of Trust and Likeability

Sacco, D.F., Brown, M., Lustgraaf, C.J.N. et al.
Evolutionary Psychological Science (2016).
doi:10.1007/s40806-016-0080-6

Abstract

Although various motives underlie moral decision-making, recent research suggests that deontological moral decision-making may have evolved, in part, to communicate trustworthiness to conspecifics, thereby facilitating cooperative relations. Specifically, social actors whose decisions are guided by deontological (relative to utilitarian) moral reasoning are judged as more trustworthy, are preferred more as social partners, and are trusted more in economic games. The current study extends this research by using an alternative manipulation of moral decision-making as well as the inclusion of target facial identities to explore the potential role of participant and target sex in reactions to moral decisions. Participants viewed a series of male and female targets, half of whom were manipulated to either have responded to five moral dilemmas consistent with an underlying deontological motive or utilitarian motive; participants indicated their liking and trust toward each target. Consistent with previous research, participants liked and trusted targets whose decisions were consistent with deontological motives more than targets whose decisions were more consistent with utilitarian motives; this effect was stronger for perceptions of trust. Additionally, women reported greater dislike for targets whose decisions were consistent with utilitarianism than men. Results suggest that deontological moral reasoning evolved, in part, to facilitate positive relations among conspecifics and aid group living and that women may be particularly sensitive to the implications of the various motives underlying moral decision-making.


Editor's Note: This research may apply to psychotherapy, leadership style, and politics.

Wednesday, December 21, 2016

Empathy, Schmempathy

By Tom Bartlett
The Chronicle of Higher Education
Originally posted November 27, 2016

No one argues in favor of empathy. That’s because no one needs to: Empathy is an unalloyed good, like sunshine or cake or free valet parking. Instead we bemoan lack of empathy and nod our heads at the notion that, if only we could feel the pain of our fellow man, then everything would be OK and humanity could, at long last, join hands together in song.

Bah, says Paul Bloom. In his new book, Against Empathy: The Case for Rational Compassion (Ecco), Bloom argues that when it comes to helping one another, our emotions too often spoil everything. Instead of leading us to make smart decisions about how best to use our limited resources altruistically, they cause us to focus on what makes us feel good in the moment. We worry about the boy stuck in the well rather than the thousands of boys dying of malnutrition every day.

Bloom, a professor of psychology at Yale University, calls on us to feel less and think more.


Monday, November 28, 2016

Studying ethics, 'Star Trek' style, at Drake

Daniel P. Finney
The Des Moines Register
Originally posted November 10, 2016

Here is an excerpt:

Sure, the discussion was about ethics of the fictional universe of “Star Trek.” But fiction, like all art, reflects the human condition.

The issue Capt. Sisko wrestled with had parallels to the real world.

Some historians hold the controversial assertion that President Franklin D. Roosevelt knew of the impending attack on Pearl Harbor in 1941 but allowed it to happen to bring the United States into World War II, a move the public opposed before the attack.

In more recent times, former President George W. Bush’s administration used faulty intelligence suggesting Iraq possessed weapons of mass destruction to justify a war that many believed would stabilize the increasingly sectarian Middle East. It did not.


Monday, October 24, 2016

Are Biases Hurting Your Health?

By Stacey Colino
US News and World Report
Originally published October 5, 2016

It's human nature to have cognitive biases. These tendencies to think in certain ways or process information by filtering it through your personal preferences, beliefs and experiences are normal, but they can offer a skewed perspective.

"We all have these biases – they are the lenses through which we process information and they are a necessary part of the information-selection process," says Mark Reinecke, professor and chief psychologist at Northwestern University and Northwestern Memorial Hospital in Chicago. Even physicians and mental health professionals have cognitive biases when making decisions for their own health and while treating patients.

Meanwhile, certain subtle mental biases can affect the health choices you make on a daily basis – often without your realizing it. This can include everything from the dietary and physical activity choices you make to the screening tests you choose to the medications you take. Sometimes these biases are harmless while other times they could be problematic.


Tuesday, October 11, 2016

When fairness matters less than we expect

Gus Cooney, Daniel T. Gilbert, and Timothy D. Wilson
PNAS 2016; published ahead of print September 16, 2016

Abstract

Do those who allocate resources know how much fairness will matter to those who receive them? Across seven studies, allocators used either a fair or unfair procedure to determine which of two receivers would receive the most money. Allocators consistently overestimated the impact that the fairness of the allocation procedure would have on the happiness of receivers (studies 1–3). This happened because the differential fairness of allocation procedures is more salient before an allocation is made than it is afterward (studies 4 and 5). Contrary to allocators’ predictions, the average receiver was happier when allocated more money by an unfair procedure than when allocated less money by a fair procedure (studies 6 and 7). These studies suggest that when allocators are unable to overcome their own preallocation perspectives and adopt the receivers’ postallocation perspectives, they may allocate resources in ways that do not maximize the net happiness of receivers.

Significance

Human beings care a great deal about the fairness of the procedures that are used to allocate resources, such as wealth, opportunity, and power. But in a series of experiments, we show that those to whom resources are allocated often care less about fairness than those who allocate the resources expect them to. This “allocator’s illusion” results from the fact that fairness seems more important before an allocation is made (when allocators are choosing a procedure) than afterward (when receivers are reacting to the procedure that allocators chose). This illusion has important consequences for policy-makers, managers, health care providers, judges, teachers, parents, and others who are charged with choosing the procedures by which things of value will be allocated.

The article is here.

Tuesday, August 16, 2016

Trust Your Gut or Think Carefully? Empathy Research

Ma-Kellams, C., & Lerner, J.
Journal of Personality and Social Psychology
Online First Publication, July 21, 2016.
http://dx.doi.org/10.1037/pspi0000063


Abstract

Cultivating successful personal and professional relationships requires the ability to accurately infer the feelings of others — i.e., to be empathically accurate. Some are better than others at this, which may be explained by mode of thought, among other factors. Specifically, it may be that empathically-accurate people tend to rely more on intuitive rather than systematic thought when perceiving others. Alternatively, it may be the reverse — that systematic thought increases accuracy. In order to determine which view receives empirical support, we conducted four studies examining relations between mode of thought (intuitive versus systematic) and empathic accuracy. Study 1 revealed a lay belief that empathic accuracy arises from intuitive modes of thought. Studies 2-4, each using executive-level professionals as participants, demonstrated that (contrary to lay beliefs) people who tend to rely on intuitive thinking also tend to exhibit lower empathic accuracy. This pattern held when participants inferred others’ emotional states based on (a) in-person face-to-face interactions with partners (Study 2) as well as on (b) pictures with limited facial cues (Study 3). Study 4 confirmed that the relationship is causal: experimentally inducing systematic (as opposed to intuitive) thought led to improved empathic accuracy. In sum, evidence regarding personal and social processes in these four samples of working professionals converges on the conclusion that — contrary to lay beliefs — empathic accuracy arises more from systematic thought than from gut intuition.

The article is here.

Editor's Note: This article has profound implications for psychotherapy.

Friday, May 6, 2016

Complex ideas can enter consciousness automatically

Science Daily
Originally posted April 18, 2016

Summary

New research provides further evidence for 'passive frame theory,' the groundbreaking idea that suggests human consciousness is less in control than previously believed. The study shows that even complex concepts, such as translating a word into Pig Latin, can enter your consciousness automatically, even when someone tells you to avoid thinking about it. The research provides the first evidence that even a small amount of training can cause unintentional, high-level symbol manipulation.

Here is an excerpt:

This surprising effect offers further evidence that the contents of our consciousness -- the state of being awake and aware of our surroundings -- are often generated involuntarily, said Morsella, an assistant professor of psychology. In fact, the study published in the journal Acta Psychologica provides the first demonstration that even a small amount of training can cause unintentional, high-level symbol manipulation.

The article is here.

Tuesday, February 23, 2016

Do Emotions and Morality Mix?

By Lauren Cassani Davis
The Atlantic
Originally published February 5, 2016

Daily life is peppered with moral decisions. Some are so automatic that they fail to register—like holding the door for a mother struggling with a stroller, or resisting a passing urge to elbow the guy who cut you in line at Starbucks. Others chafe a little more, like deciding whether or not to give money to a figure rattling a cup of coins on a darkening evening commute. A desire to help, a fear of danger, and a cost-benefit analysis of the contents of my wallet: these gut reactions and reasoned arguments all swirl beneath conscious awareness.

While society urges people towards morally commendable choices with laws and police, and religious traditions stipulate good and bad through divine commands, scriptures, and sermons, the final say lies within each of our heads. Rational thinking, of course, plays a role in how we make moral decisions. But our moral compasses are also powerfully influenced by the fleeting forces of disgust, fondness, or fear.

Should subjective feelings matter when deciding right and wrong? Philosophers have debated this question for thousands of years. Some say absolutely: Emotions, like our love for our friends and family, are a crucial part of what give life meaning, and ought to play a guiding role in morality. Some say absolutely not: Cold, impartial, rational thinking is the only proper way to make a decision. Emotion versus reason—it’s one of the oldest and most epic standoffs we know.

The article is here.