Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Motivation.

Saturday, June 22, 2019

Morality and Self-Control: How They are Intertwined, and Where They Differ

Wilhelm Hofmann, Peter Meindl, Marlon Mooijman, & Jesse Graham
PsyArXiv Preprints
Last edited November 18, 2018

Abstract

Despite sharing conceptual overlap, morality and self-control research have led largely separate lives. In this article, we highlight neglected connections between these major areas of psychology. To this end, we first note their conceptual similarities and differences. We then show how morality research, typically emphasizing aspects of moral cognition and emotion, may benefit from incorporating motivational concepts from self-control research. Similarly, self-control research may benefit from a better understanding of the moral nature of many self-control domains. We place special focus on various components of self-control and on the ways in which self-control goals may be moralized.

(cut)

Here is the Conclusion:

How do we resist temptation, prioritizing our future well-being over our present pleasure? And how do we resist acting selfishly, prioritizing the needs of others over our own self-interest? These two questions highlight the links between understanding self-control and understanding morality. We hope we have shown that morality and self-control share considerable conceptual overlap with regard to the way people regulate behavior in line with higher-order values and standards. As the psychological study of both areas becomes increasingly collaborative and integrated, insights from each subfield can better enable research and interventions to increase human health and flourishing.

The info is here.

Tuesday, June 11, 2019

Moral character: What it is and what it does

Cohen, T. R., & Morse, L. (2014).
In A. P. Brief & B. M. Staw (Eds.), Research in Organizational Behavior.

Abstract

Moral character can be conceptualized as an individual’s disposition to think, feel, and behave in an ethical versus unethical manner, or as the subset of individual differences relevant to morality. This essay provides an organizing framework for understanding moral character and its relationship to ethical and unethical work behaviors. We present a tripartite model for understanding moral character, with the idea that there are motivational, ability, and identity elements. The motivational element is consideration of others—referring to a disposition toward considering the needs and interests of others, and how one’s own actions affect other people. The ability element is self-regulation—referring to a disposition toward regulating one’s behavior effectively, specifically with reference to behaviors that have positive short-term consequences but negative long-term consequences for oneself or others. The identity element is moral identity—referring to a disposition toward valuing morality and wanting to view oneself as a moral person. After unpacking what moral character is, we turn our attention to what moral character does, with a focus on how it influences unethical behavior, situation selection, and situation creation. Our research indicates that the impact of moral character on work outcomes is significant and consequential, with important implications for research and practice in organizational behavior.

A copy can be downloaded here.

Thursday, March 14, 2019

Actions speak louder than outcomes in judgments of prosocial behavior.

Yudkin, D. A., Prosser, A. M. B., & Crockett, M. J. (2018).
Emotion. Advance online publication.
http://dx.doi.org/10.1037/emo0000514

Abstract

Recently proposed models of moral cognition suggest that people’s judgments of harmful acts are influenced by their consideration both of those acts’ consequences (“outcome value”), and of the feeling associated with their enactment (“action value”). Here we apply this framework to judgments of prosocial behavior, suggesting that people’s judgments of the praiseworthiness of good deeds are determined both by the benefit those deeds confer to others and by how good they feel to perform. Three experiments confirm this prediction. After developing a new measure to assess the extent to which praiseworthiness is influenced by action and outcome values, we show how these factors make significant and independent contributions to praiseworthiness. We also find that people are consistently more sensitive to action than to outcome value in judging the praiseworthiness of good deeds, but not harmful deeds. This observation echoes the finding that people are often insensitive to outcomes in their giving behavior. Overall, this research tests and validates a novel framework for understanding moral judgment, with implications for the motivations that underlie human altruism.

Friday, February 8, 2019

Empathy is hard work: People choose to avoid empathy because of its cognitive costs

Daryl Cameron, Cendri Hutcherson, Amanda Ferguson, and others
PsyArXiv Preprints
Last edited January 25, 2019

Abstract

Empathy is considered a virtue, yet fails in many situations, leading to a basic question: when given a choice, do people avoid empathy? And if so, why? Whereas past work has focused on material and emotional costs of empathy, here we examined whether people experience empathy as cognitively taxing and costly, leading them to avoid it. We developed the Empathy Selection Task, which uses free choices to assess desire to empathize. Participants make a series of binary choices, selecting situations that lead them to engage in empathy or an alternative course of action. In each of 11 studies (N=1,204) and a meta-analysis, we found a robust preference to avoid empathy, which was associated with perceptions of empathy as effortful, aversive, and inefficacious. Experimentally increasing empathy efficacy eliminated empathy avoidance, suggesting cognitive costs directly cause empathy choice. When given the choice to share others’ feelings, people act as if it’s not worth the effort.

The research is here.

Sunday, October 28, 2018

Moral enhancement and the good life

Hazem Zohny
Med Health Care and Philos (2018).
https://doi.org/10.1007/s11019-018-9868-4

Abstract

One approach to defining enhancement is in the form of bodily or mental changes that tend to improve a person’s well-being. Such a “welfarist account”, however, seems to conflict with moral enhancement: consider an intervention that improves someone’s moral motives but which ultimately diminishes their well-being. According to the welfarist account, this would not be an instance of enhancement—in fact, as I argue, it would count as a disability. This seems to pose a serious limitation for the account. Here, I elaborate on this limitation and argue that, despite it, there is a crucial role for such a welfarist account to play in our practical deliberations about moral enhancement. I do this by exploring four scenarios where a person’s motives are improved at the cost of their well-being. A framework emerges from these scenarios which can clarify disagreements about moral enhancement and help sharpen arguments for and against it.

The article is here.

Tuesday, September 11, 2018

Motivated misremembering: Selfish decisions are more generous in hindsight

Ryan Carlson, Michel Marechal, Bastiaan Oud, Ernst Fehr, and Molly Crockett
Created on: July 22, 2018 | Last edited: July 22, 2018

Abstract

People often prioritize their own interests, but also like to see themselves as moral. How do individuals resolve this tension? One way to both maximize self-interest and maintain a moral self-image is to misremember the extent of one’s selfishness. Here, we tested this possibility. Across three experiments, participants decided how to split money with anonymous partners, and were later asked to recall their decisions. Participants systematically recalled being more generous in the past than they actually were, even when they were incentivized to recall accurately. Crucially, this effect was driven by individuals who gave less than what they personally believed was fair, independent of how objectively selfish they were. Our findings suggest that when people’s actions fall short of their own personal standards, they may misremember the extent of their selfishness, thereby warding off negative emotions and threats to their moral self-image.

The research is here.

Tuesday, September 4, 2018

Belief in God: Why People Believe, and Why They Don’t

Brett Mercier, Stephanie R. Kramer, & Azim F. Shariff
Current Directions in Psychological Science
First Published July 31, 2018

Abstract

Belief in a god or gods is a central feature in the lives of billions of people and a topic of perennial interest within psychology. However, research over the past half decade has achieved a new level of understanding regarding both the ultimate and proximate causes of belief in God. Ultimate causes—the evolutionary influences on a trait—shed light on the adaptive value of belief in God and the reasons why a tendency toward this belief exists in humans. Proximate causes—the immediate influences on the expression of a trait—explain variation and changes in belief. We review this research and discuss remaining barriers to a fuller understanding of belief in God.

The article is here.


Monday, August 6, 2018

Why Should We Be Good?

Matt McManus
Quillette.com
Originally posted July 7, 2018

Here are two excerpts:

The negative motivation arises from moral dogmatism. There are those who wish to dogmatically assert their own values without worrying that they may not be as universal as one might suppose. For instance, this is often the case with religious fundamentalists who worry that secular society is increasingly unmoored from proper values and traditions. Ironically, the dark underside of this moral dogmatism is often a relativistic epistemology. Ethical dogmatists do not want to be confronted with the possibility that it is possible to challenge their values because they often cannot provide good reasons to back them up.

(cut)

These issues are all of considerable philosophical interest. In what follows, I want to press on just one issue that is often missed in debates between those who believe there are universal values, and those who believe that what is ethically correct is relative to either a culture or to the subjective preference of individuals. The issue I wish to explore is this: even if we know which values are universal, why should we feel compelled to adhere to them? Put more simply, even if we know what it is to be good, why should we bother to be good? This is one of the major questions addressed by what is often called meta-ethics.

The information is here.

Tuesday, February 6, 2018

Do the Right Thing: Experimental Evidence that Preferences for Moral Behavior, Rather Than Equity or Efficiency per se, Drive Human Prosociality

Capraro, Valerio and Rand, David G.
(January 11, 2018). Judgment and Decision Making.

Abstract

Decades of experimental research show that some people forgo personal gains to benefit others in unilateral anonymous interactions. To explain these results, behavioral economists typically assume that people have social preferences for minimizing inequality and/or maximizing efficiency (social welfare). Here we present data that are incompatible with these standard social preference models. We use a “Trade-Off Game” (TOG), where players unilaterally choose between an equitable option and an efficient option. We show that simply changing the labelling of the options to describe the equitable versus efficient option as morally right completely reverses the correlation between behavior in the TOG and play in a separate Dictator Game (DG) or Prisoner’s Dilemma (PD): people who take the action framed as moral in the TOG, be it equitable or efficient, are much more prosocial in the DG and PD. Rather than preferences for equity and/or efficiency per se, our results suggest that prosociality in games such as the DG and PD are driven by a generalized morality preference that motivates people to do what they think is morally right.

Download the paper here.

Monday, December 11, 2017

To think critically, you have to be both analytical and motivated

John Timmer
Ars Technica
Originally published November 15, 2017

Here is an excerpt:

One of the proposed solutions to this issue is to incorporate more critical thinking into our education system. But critical thinking is more than just a skill set; you have to recognize when to apply it, do so effectively, and then know how to respond to the results. Understanding what makes a person effective at analyzing fake news and conspiracy theories has to take all of this into account. A small step toward that understanding comes from a recently released paper, which looks at how analytical thinking and motivated skepticism interact to make someone an effective critical thinker.

Valuing rationality

The work comes courtesy of the University of Illinois at Chicago's Tomas Ståhl and Jan-Willem van Prooijen at VU Amsterdam. This isn't the first time we've heard from Ståhl; last year, he published a paper on what he termed "moralizing epistemic rationality." In it, he looked at people's thoughts on the place critical thinking should occupy in their lives. The research identified two classes of individuals: those who valued their own engagement with critical thinking, and those who viewed it as a moral imperative that everyone engage in this sort of analysis.

The information is here.

The target article is here.

Tuesday, December 5, 2017

Liberals and conservatives are similarly motivated to avoid exposure to one another's opinions

Jeremy A. Frimer, Linda J. Skitka, Matt Motyl
Journal of Experimental Social Psychology
Volume 72, September 2017, Pages 1-12

Abstract

Ideologically committed people are similarly motivated to avoid ideologically crosscutting information. Although some previous research has found that political conservatives may be more prone to selective exposure than liberals are, we find similar selective exposure motives on the political left and right across a variety of issues. The majority of people on both sides of the same-sex marriage debate willingly gave up a chance to win money to avoid hearing from the other side (Study 1). When thinking back to the 2012 U.S. Presidential election (Study 2), ahead to upcoming elections in the U.S. and Canada (Study 3), and about a range of other Culture War issues (Study 4), liberals and conservatives reported similar aversion toward learning about the views of their ideological opponents. Their lack of interest was not due to already being informed about the other side or attributable to election fatigue. Rather, people on both sides indicated that they anticipated that hearing from the other side would induce cognitive dissonance (e.g., require effort, cause frustration) and undermine a sense of shared reality with the person expressing disparate views (e.g., damage the relationship; Study 5). A high-powered meta-analysis of our data sets (N = 2417) did not detect a difference in the intensity of liberals' (d = 0.63) and conservatives' (d = 0.58) desires to remain in their respective ideological bubbles.

The research is here.

Sunday, November 26, 2017

The Wisdom in Virtue: Pursuit of Virtue Predicts Wise Reasoning About Personal Conflicts

Alex C. Huynh, Harrison Oakes, Garrett R. Shay, & Ian McGregor
Psychological Science
Article first published online: October 3, 2017

Abstract

Most people can reason relatively wisely about others’ social conflicts, but often struggle to do so about their own (i.e., Solomon’s paradox). We suggest that true wisdom should involve the ability to reason wisely about both others’ and one’s own social conflicts, and we investigated the pursuit of virtue as a construct that predicts this broader capacity for wisdom. Results across two studies support prior findings regarding Solomon’s paradox: Participants (N = 623) more strongly endorsed wise-reasoning strategies (e.g., intellectual humility, adopting an outsider’s perspective) for resolving other people’s social conflicts than for resolving their own. The pursuit of virtue (e.g., pursuing personal ideals and contributing to other people) moderated this effect of conflict type. In both studies, greater endorsement of the pursuit of virtue was associated with greater endorsement of wise-reasoning strategies for one’s own personal conflicts; as a result, participants who highly endorsed the pursuit of virtue endorsed wise-reasoning strategies at similar levels for resolving their own social conflicts and resolving other people’s social conflicts. Implications of these results and underlying mechanisms are explored and discussed.

Here is an excerpt:

We propose that the litmus test for wise character is whether one can reason wisely about one’s own social conflicts. As did the biblical King Solomon, people tend to reason more wisely about others’ social conflicts than their own (i.e., Solomon’s paradox; Grossmann & Kross, 2014, see also Mickler & Staudinger, 2008, for a discussion of personal vs. general wisdom). Personal conflicts impede wise reasoning because people are more likely to immerse themselves in their own perspective and emotions, relegating other perspectives out of awareness, and increasing certainty regarding preferred perspectives (Kross & Grossmann, 2012; McGregor, Zanna, Holmes, & Spencer, 2001). In contrast, reasoning about other people’s conflicts facilitates wise reasoning through the adoption of different viewpoints and the avoidance of sociocognitive biases (e.g., poor recognition of one’s own shortcomings—e.g., Pronin, Olivola, & Kennedy, 2008). In the present research, we investigated whether virtuous motives facilitate wisdom about one’s own conflicts, enabling one to pass the litmus test for wise character.

The article is here.

Friday, October 20, 2017

A virtue ethics approach to moral dilemmas in medicine

P Gardiner
J Med Ethics. 2003 Oct; 29(5): 297–302.

Abstract

Most moral dilemmas in medicine are analysed using the four principles with some consideration of consequentialism but these frameworks have limitations. It is not always clear how to judge which consequences are best. When principles conflict it is not always easy to decide which should dominate. They also do not take account of the importance of the emotional element of human experience. Virtue ethics is a framework that focuses on the character of the moral agent rather than the rightness of an action. In considering the relationships, emotional sensitivities, and motivations that are unique to human society it provides a fuller ethical analysis and encourages more flexible and creative solutions than principlism or consequentialism alone. Two different moral dilemmas are analysed using virtue ethics in order to illustrate how it can enhance our approach to ethics in medicine.

A pdf download of the article can be found here.

Note from John: This article is interesting for many reasons. For me, it shows how far we ethics educators have come in 14 years.

Tuesday, September 12, 2017

Personal values in human life

Lilach Sagiv, Sonia Roccas, Jan Cieciuch & Shalom H. Schwartz
Nature Human Behaviour (2017)
doi:10.1038/s41562-017-0185-3

Abstract

The construct of values is central to many fields in the social sciences and humanities. The last two decades have seen a growing body of psychological research that investigates the content, structure and consequences of personal values in many cultures. Taking a cross-cultural perspective we review, organize and integrate research on personal values, and point to some of the main findings that this research has yielded. Personal values are subjective in nature, and reflect what people think and state about themselves. Consequently, both researchers and laymen sometimes question the usefulness of personal values in influencing action. Yet, self-reported values predict a large variety of attitudes, preferences and overt behaviours. Individuals act in ways that allow them to express their important values and attain the goals underlying them. Thus, understanding personal values means understanding human behaviour.

Wednesday, August 30, 2017

Fat Shaming in the Doctor's Office Can Be Mentally and Physically Harmful

American Psychological Association
Press Release from August 3, 2017

Medical discrimination based on people’s size and negative stereotypes of overweight people can take a toll on people’s physical health and well-being, according to a review of recent research presented at the 125th Annual Convention of the American Psychological Association.

“Disrespectful treatment and medical fat shaming, in an attempt to motivate people to change their behavior, is stressful and can cause patients to delay health care seeking or avoid interacting with providers,” presenter Joan Chrisler, PhD, a professor of psychology at Connecticut College, said during a symposium titled “Weapons of Mass Distraction — Confronting Sizeism.”

Sizeism can also have an effect on how doctors medically treat patients, as overweight people are often excluded from medical research based on assumptions about their health status, Chrisler said, meaning the standard dosage for drugs may not be appropriate for larger body sizes. Recent studies have shown frequent under-dosing of overweight patients who were prescribed antibiotics and chemotherapy, she added.

“Recommending different treatments for patients with the same condition based on their weight is unethical and a form of malpractice,” Chrisler said. “Research has shown that doctors repeatedly advise weight loss for fat patients while recommending CAT scans, blood work or physical therapy for other, average weight patients.”

In some cases, providers might not take fat patients’ complaints seriously or might assume that their weight is the cause of any symptoms they experience, Chrisler added. “Thus, they could jump to conclusions or fail to run appropriate tests, which results in misdiagnosis,” she said.

The press release is here.

Monday, August 28, 2017

Death Before Dishonor: Incurring Costs to Protect Moral Reputation

Andrew J. Vonasch, Tania Reynolds, Bo M. Winegard, Roy F. Baumeister
Social Psychological and Personality Science 
First published date: July-21-2017

Abstract

Predicated on the notion that people’s survival depends greatly on participation in cooperative society, and that reputation damage may preclude such participation, four studies with diverse methods tested the hypothesis that people would make substantial sacrifices to protect their reputations. A “big data” study found that maintaining a moral reputation is one of people’s most important values. In making hypothetical choices, high percentages of “normal” people reported preferring jail time, amputation of limbs, and death to various forms of reputation damage (i.e., becoming known as a criminal, Nazi, or child molester). Two lab studies found that 30% of people fully submerged their hands in a pile of disgusting live worms, and 63% endured physical pain to prevent dissemination of information suggesting that they were racist. We discuss the implications of reputation protection for theories about altruism and motivation.

The article is here.

Wednesday, June 28, 2017

A Teachable Ethics Scandal

Mitchell Handelsman
Teaching of Psychology

Abstract

In this article, I describe a recent scandal involving collusion between officials at the American Psychological Association (APA) and the U.S. Department of Defense, which appears to have enabled the torture of detainees at the Guantanamo Bay detention facility. The scandal is a relevant, complex, and engaging case that teachers can use in a variety of courses. Details of the scandal exemplify a number of psychological concepts, including obedience, groupthink, terror management theory, group influence, and motivation. The scandal can help students understand several factors that make ethical decision-making difficult, including stress, emotions, and cognitive factors such as loss aversion, anchoring, framing, and ethical fading. I conclude by exploring some parallels between the current torture scandal and the development of APA’s ethics guidelines regarding the use of deception in research.

The article is here.

Saturday, May 6, 2017

Investigating Altruism and Selfishness Through the Hypothetical Use of Superpowers

Ahuti Das-Friebel, Nikita Wadhwa, Merin Sanil, Hansika Kapoor, Sharanya V.
Journal of Humanistic Psychology 
First published date: April-13-2017
doi:10.1177/0022167817699049

Abstract

Drawing from literature associating superheroes with altruism, this study examined whether ordinary individuals engaged in altruistic or selfish behavior when they were hypothetically given superpowers. Participants were presented with six superpowers—three positive (healing, invulnerability, and flight) and three negative (fear inducement, psychic persuasion, and poison generation). They indicated their desirability for each power, what they would use it for (social benefit, personal gain, social harm), and listed examples of such uses. Quantitative analyses (n = 285) revealed that 94% of participants wished to possess a superpower, and the majority indicated they would use the powers more for benefiting themselves than for altruistic purposes. Furthermore, while men wanted positive and negative powers more, women were more likely than men to use such powers for personal and social gain. Qualitative analyses of the uses of the powers (n = 524) resulted in 16 themes of altruistic and selfish behavior. Results were analyzed within Pearce and Amato’s model of helping behavior, which was used to classify altruistic behavior, and adapted to classify selfish behavior. In contrast to how superheroes behave, both sets of analyses revealed that participants would hypothetically use superpowers for selfish rather than altruistic purposes. Limitations and suggestions for future research are outlined.

The article is here.

Wednesday, December 7, 2016

Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue

Tomas Ståhl, Maarten P. Zaal, Linda J. Skitka
PLOS One
Published: November 16, 2016

Abstract

In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, of people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) that these individual differences do not reduce to the personal importance attached to rationality, and (3) that individual differences in moralized rationality have important motivational and interpersonal consequences.

Monday, November 21, 2016

A Theory of Hypocrisy

Eric Schwitzgebel
The Splintered Mind blog
Originally posted on October

Here is an excerpt:

Furthermore, if they are especially interested in the issue, violations of those norms might be more salient and visible to them than for the average person. The person who works in the IRS office sees how frequent and easy it is to cheat on one's taxes. The anti-homosexual preacher sees himself in a world full of gays. The environmentalist grumpily notices all the giant SUVs rolling down the road. Due to an increased salience of violations of the norms they most care about, people might tend to overestimate the frequency of the violations of those norms -- and then when they calibrate toward mediocrity, their scale might be skewed toward estimating high rates of violation. This combination of increased salience of unpunished violations plus calibration toward mediocrity might partly explain why hypocritical norm violations are more common than a purely strategic account might suggest.

But I don't think that's enough by itself to explain the phenomenon, since one might still expect people to tend to avoid conspicuous moral advocacy on issues where they know they are average-to-weak; and even if their calibration scale is skewed a bit high, they might hope to pitch their own behavior especially toward the good side on that particular issue -- maybe compensating by allowing themselves more laxity on other issues.

The blog post is here.