Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Motivation.

Monday, September 12, 2016

Heads or Tails: The Impact of a Coin Toss on Major Life Decisions and Subsequent Happiness

Steven D. Levitt
NBER Working Paper No. 22487
Issued in August 2016

Abstract

Little is known about whether people make good choices when facing important decisions. This paper reports on a large-scale randomized field experiment in which research subjects having difficulty making a decision flipped a coin to help determine their choice. For important decisions (e.g. quitting a job or ending a relationship), those who make a change (regardless of the outcome of the coin toss) report being substantially happier two months and six months later. This correlation, however, need not reflect a causal impact. To assess causality, I use the outcome of a coin toss. Individuals who are told by the coin toss to make a change are much more likely to make a change and are happier six months later than those who were told by the coin to maintain the status quo. The results of this paper suggest that people may be excessively cautious when facing life-changing choices.
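
The causal logic sketched in the abstract, a random coin toss nudging some people toward change, can be illustrated with a short sketch. The code below is not Levitt's analysis; it simulates entirely hypothetical data and computes an intention-to-treat difference plus a simple Wald (instrumental-variable) estimate, only to show how random assignment of the coin can identify the effect of actually making a change on later happiness.

# Illustrative sketch only: hypothetical simulated data, not Levitt's dataset or code.
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Coin toss is randomly assigned: 1 = "make a change", 0 = "keep the status quo".
coin = rng.integers(0, 2, size=n)

# Hypothetical compliance: people told to change are more likely to actually change.
change = rng.random(n) < np.where(coin == 1, 0.65, 0.45)

# Hypothetical outcome: happiness six months later, with an assumed true effect of changing.
true_effect = 0.5
happiness = 5.0 + true_effect * change + rng.normal(0, 1, size=n)

# Intention-to-treat (ITT): compare outcomes by what the coin said, not by what people did.
itt = happiness[coin == 1].mean() - happiness[coin == 0].mean()

# First stage: how much the coin shifts the probability of actually making a change.
first_stage = change[coin == 1].mean() - change[coin == 0].mean()

# Wald / IV estimate: effect of actually changing, scaled up from the ITT by compliance.
wald = itt / first_stage

print(f"ITT effect of the coin: {itt:.3f}")
print(f"First stage (compliance gap): {first_stage:.3f}")
print(f"Wald IV estimate of changing: {wald:.3f}  (true effect set to {true_effect})")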

The paper is here.

Monday, August 8, 2016

Why You Don’t Know Your Own Mind

By Alex Rosenberg
The New York Times
Originally published July 18, 2016

Here is an excerpt:

In fact, controlled experiments in cognitive science, neuroimaging and social psychology have repeatedly shown how wrong we can be about our real motivations, the justification of firmly held beliefs and the accuracy of our sensory equipment. This trend began even before the work of psychologists such as Benjamin Libet, who showed that the conscious feeling of willing an act actually occurs after the brain process that brings about the act — a result replicated and refined hundreds of times since his original discovery in the 1980s.

Around the same time, a physician working in Britain, Lawrence Weiskrantz, discovered “blindsight” — the ability, first of blind monkeys, and then of some blind people, to pick out objects by their color without the conscious sensation of color. The inescapable conclusion that behavior can be guided by visual information even when we cannot be aware of having it is just one striking example of how the mind is fooled and the ways it fools itself.

The entire article is here.

Wednesday, July 13, 2016

Does moral identity effectively predict moral behavior?: A meta-analysis

Steven G. Hertz and Tobias Krettenauer
Review of General Psychology, Vol 20(2), Jun 2016, 129-140.
http://dx.doi.org/10.1037/gpr0000062

Abstract

This meta-analysis examined the relationship between moral identity and moral behavior. It was based on 111 studies from a broad range of academic fields including business, developmental psychology and education, marketing, sociology, and sport sciences. Moral identity was found to be significantly associated with moral behavior (random effects model, r = .22, p < .01, 95% CI [.19, .25]). Effect sizes did not differ for behavioral outcomes (prosocial behavior, avoidance of antisocial behavior, ethical behavior). Studies that were entirely based on self-reports yielded larger effect sizes. In contrast, the smallest effect was found for studies that were based on implicit measures or used priming techniques to elicit moral identity. Moreover, a marginally significant effect of culture indicated that studies conducted in collectivistic cultures yielded lower effect sizes than studies from individualistic cultures. Overall, the meta-analysis provides support for the notion that moral identity strengthens individuals’ readiness to engage in prosocial and ethical behavior as well as to abstain from antisocial behavior. However, moral identity fares no better as a predictor of moral action than other psychological constructs.
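
For readers curious how a pooled effect such as r = .22 with a 95% CI is typically obtained under a random-effects model, here is a minimal sketch. It uses made-up study correlations and sample sizes (not the 111 studies in this meta-analysis) and a standard Fisher-z, DerSimonian-Laird procedure; the authors' actual computations may differ.

# Illustrative sketch only: made-up study results, not data from Hertz & Krettenauer (2016).
import numpy as np

# Hypothetical per-study correlations and sample sizes.
r = np.array([0.15, 0.30, 0.22, 0.10, 0.28])
n = np.array([120, 85, 200, 150, 95])

# Fisher z-transform stabilizes the variance of correlations: var(z) is roughly 1/(n - 3).
z = np.arctanh(r)
w = n - 3.0                      # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of between-study variance (tau^2).
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
k = len(r)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights fold in tau^2; pool in z, then back-transform to r.
w_re = 1.0 / (1.0 / w + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])

print(f"Pooled r (random effects): {np.tanh(z_re):.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")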

And the conclusion...

Overall, three major conclusions can be drawn from this meta-analysis. First, considering all empirical evidence available, it seems impossible to deny that moral identity positively predicts moral behavior in individuals from Western cultures. Although this finding does not refute research on moral hypocrisy, it puts the claim that people want to appear moral, rather than be moral, into perspective (Batson, 2011; Frimer et al., 2014). If this were always true, why would people who feel that morality matters to them engage more readily in moral action? Second, explicit self-report measures represent a valid and valuable approach to the moral identity construct. This is an important conclusion because many scholars feel that more effort should be invested into developing moral identity measures (e.g., Hardy & Carlo, 2011b; Jennings et al., 2015). Third, although moral identity positively predicts moral behavior, the effect is not much stronger than the effects of other constructs, notably moral judgment or moral emotions. Thus, there is no reason to prioritize the moral identity construct as a predictor of moral action at the expense of other factors. Instead, it seems more appropriate to consider moral identity in a broader conceptual framework where it interacts with other personological and situational factors to bring about moral action. This approach is well underway in studies that investigate the moderating and mediating role of moral identity as a predictor of moral action (e.g., Aquino et al., 2007; Hardy et al., 2015). As part of this endeavor, it might become necessary to give up an overly homogeneous notion of the moral identity construct in order to acknowledge that moral identities may consist of different motivations and goal orientations. Recently, Krettenauer and Casey (2015) provided evidence for two different types of moral identities, one that is primarily concerned with demonstrating morality to others, and one that is more inwardly defined by being consistent with one's values and beliefs. This differentiation has important ramifications for moral emotions and moral action and helps to explain why moral identities sometimes strengthen individuals' motivation to act morally and sometimes undermine it.

Monday, February 22, 2016

Will Your Ethics Hold Up Under Pressure?

Ron Carucci
Forbes
Originally published FEB 3, 2016

Here is an excerpt:

In an ironic appeal to self-interest, for which Haidt readily acknowledges the paradox, he says there are four important reasons “ethics pays.” First, there is the cost of reputation, which most analysts and experts acknowledge links closely to share price performance. Second, ethical organizations have lower costs of capital, as evidenced by Deutsche Bank’s commitment to focus on clients with higher ethical standards. Third, the white-hot war for talent, both recruiting and retaining top talent, takes a painful hit with an ethical scandal. Conversely, the best talent wants to associate with the best-reputed companies. And finally, the astronomical cost of cleaning up an ethical mess can soar into the billions after shareholder losses, lawsuits, fines, and PR costs are added up. Still, those aren’t the real reasons to focus on this, claims Haidt. The longer-term benefits to a world with greater ethical substance far outweigh the costs of cutting corners for short-term gains. Sadly, unethical choices have paid well for too many executives.

The article is here.

Wednesday, January 20, 2016

Toward a general theory of motivation: Problems, challenges, opportunities, and the big picture

Roy F. Baumeister
Motivation and Emotion
pp 1-10

Abstract

Motivation theories have tended to focus on specific motivations, leaving open the intellectually and scientifically challenging problem of how to construct a general theory of motivation. The requirements for such a theory are presented here. The primacy of motivation emphasizes that cognition, emotion, agency, and other psychological processes exist to serve motivation. Both state (impulses) and trait (basic drives) forms of motivation must be explained, and their relationship must be illuminated. Not all motivations are the same, and indeed it is necessary to explain how motivation evolved from the simple desires of simple animals into the complex, multifaceted forms of human motivation. Motivation responds to the local environment but may also adapt to it, such as when desires increase after satiation or diminish when satisfaction is chronically unavailable. Addiction may be a special case of motivation—but perhaps it is much less special or different than prevailing cultural stereotypes suggest. The relationship between liking and wanting, and the self-regulatory management of motivational conflict, also require explanation by an integrative theory.

The paper is here. 

Friday, August 7, 2015

How Evolution Illuminates the Human Condition

The Wright Show - Meaning TV
Robert Wright and David Sloan Wilson
Originally posted July 19, 2015

Robert Wright and David Sloan Wilson discuss evolution, biology, psychology, religion, culture, science, values, beliefs, meaning, altruism, motivation, groupishness, and group strength.

Wednesday, July 15, 2015

Approach and avoidance in moral psychology: Evidence for three distinct motivational levels

James F.M. Cornwell and E. Tory Higgins
Personality and Individual Differences
Volume 86, November 2015, Pages 139–149

Abstract

During the past two decades, the science of motivation has made major advances by going beyond just the traditional division of motivation into approaching pleasure and avoiding pain. Recently, motivation has been applied to the study of human morality, distinguishing between prescriptive (approach) morality on the one hand, and proscriptive (avoidance) morality on the other, representing a significant advance in the field. There has been some tendency, however, to subsume all moral motives under those corresponding to approach and avoidance within morality, as if one could proceed with a “one size fits all” perspective. In this paper, we argue for the unique importance of each of three different moral motive distinctions, and provide empirical evidence to support their distinctiveness. The usefulness of making these distinctions for the case of moral and ethical motivation is discussed.

Highlights

• We investigate the relations among three motivational constructs.
• We find that the three constructs are statistically independent.
• We find independent relations between the constructs and moral emotions.
• We find independent relations between the constructs and personal values.

The entire article is here.

Monday, June 15, 2015

Understanding ordinary unethical behavior: why people who value morality act immorally

by Francesca Gino
Current Opinion in Behavioral Sciences
Volume 3, June 2015, Pages 107–111

Cheating, deception, organizational misconduct, and many other forms of unethical behavior are among the greatest challenges in today's society. As regularly highlighted by the media, extreme cases and costly scams (e.g., Enron, Bernard Madoff) are common. Yet, even more frequent and pervasive are cases of ‘ordinary’ unethical behavior — unethical actions committed by people who value morality but behave unethically when faced with an opportunity to cheat. A growing body of research in behavioral ethics and moral psychology shows that even good people (i.e., people who care about being moral) can and often do bad things. Examples include cheating on taxes, deceiving in interpersonal relationships, overstating performance and contributions to teamwork, inflating business expense reports, and lying in negotiations.

When considered cumulatively, ordinary unethical behavior causes considerable societal damage. For instance, employee theft causes U.S. companies to lose approximately $52 billion per year [4]. This empirical evidence is striking in light of social–psychological research that, for decades, has robustly shown that people typically value honesty, believe strongly in their own morality, and strive to maintain a positive self-image as moral individuals.

The entire article is here.

Friday, May 15, 2015

Why Be Good? Well, Why Not?

By Jay L. Garfield
Big Ideas at Slate.com

Here is an excerpt:

The central problem of ethics is to provide reasons to override rational self-interest—acting for the sake of others, perhaps, or for the sake of duty, or in accordance with divine commandment, or for the sake of some other transcendent value. Sometimes the argument for doing so involves showing that it is really in our own self-interest to do so (everlasting life in heaven, for instance). Sometimes it involves arguing that there are more important things than our own rational interest (duty, for instance). In any case, the burden of proof is taken to rest squarely on the moralist to convince the immoralist to do what is, at least at first glance, irrational.

But why take acting in one’s own narrow self-interest to be rational in the first place? It is not self-evident that it is. And why take our own interests to be either independent of those of others or in competition with them? That is not self-evident, either. If we can offer a more compelling account of rational choice than that offered by the economists and decision theorists, we might find that care for others is the default rational basis for action, not a value in competition with it.

The entire article is here.

Friday, May 8, 2015

How Goodness Arises from Evolutionary Competition

By Martin A. Nowak
Big Ideas from Slate.com

Here is an excerpt:

In the human sphere, cooperation means helping each other. In some contexts cooperation can imply “being good.” And suddenly the conundrum disappears. The moral imperative of world religions and philosophical systems seems to make sense.  It simply asks us to be true to our cooperative heritage, to cooperate and not only to compete.

The evolutionary process among humans is not only genetic but also cultural. We have language. We write books, articles, and emails, come up with ideas, replicate knowledge. A group of humans learning from each other instantiates a cultural evolutionary process with mutation and selection. And cooperation.

What makes cooperation a possible strategy among humans? The answer is repetition and reputation. Most of our crucial social interactions occur repeatedly with the same people or in situations where we are known, where actions can be observed by others, and thus affect our reputation.

The entire piece is here.

Friday, March 20, 2015

Can violence be moral?

Intuitively, we might think that any sort of violent act is immoral.

By David Nussbaum and Séamus A Power
The Guardian
Originally posted February 28, 2015

Here is an excerpt:

Generally speaking, we think of most interpersonal violence, not just terrorist attacks, as immoral. It’s very rare that you’ll see anybody claim that hurting someone else is an inherently moral thing to do. When people are violent, explanations for their behavior tend to invoke some sort of breakdown: a lack of self-control, the dehumanization of an “outgroup,” or perhaps sadistic psychological tendencies.

This is a comforting notion – one that draws a clear boundary between acceptable and unacceptable behavior. But according to the authors of a new book, it simply isn’t an accurate reflection of how people actually behave: morality, as understood and practiced by real-world human beings, doesn’t always prohibit violence. In fact they make the case that most violence is motivated by morality.

The entire article is here.

Monday, March 2, 2015

Moral Realism

Sayre-McCord, Geoff, "Moral Realism"
The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.)

Here is an excerpt:

Nonetheless, realists and anti-realists alike are usually inclined to hold that Moore’s Open Question Argument is getting at something important—some feature of moral claims that makes them not well captured by nonmoral claims.

According to some, that ‘something important’ is that moral claims are essentially bound up with motivation in a way that nonmoral claims are not (Ayer 1936, Stevenson 1937, Gibbard 1990, Blackburn 1993). Exactly what the connection to motivation is supposed to be is itself controversial, but one common proposal (motivation internalism) is that a person counts as sincerely making a moral claim only if she is motivated appropriately. To think of something that it is good, for instance, goes with being, other things equal, in favor of it in ways that would provide some motivation (not necessarily decisive) to promote, produce, preserve or in other ways support it. If someone utterly lacks such motivations and yet claims nonetheless that she thinks the thing in question is good, there is reason, people note, to suspect either that she is being disingenuous or that she does not understand what she is saying. This marks a real contrast with nonmoral claims since the fact that a person makes some such claim sincerely seems never to entail anything in particular about her motivations. Whether she is attracted by, repelled by, or simply indifferent to some color is irrelevant to whether her claim that things have that color is sincere and well understood by her.

The entire entry is here.

Editor's Note: This article is for those psychologists more inclined to read philosophy.

Wednesday, December 10, 2014

"How Do You Change People's Minds About What Is Right And Wrong?"

By David Rand
Edge Video
Originally posted November 18, 2014

I'm a professor of psychology, economics and management at Yale. The thing that I'm interested in, and that I spend pretty much all of my time thinking about, is cooperation—situations where people have the chance to help others at a cost to themselves. The questions that I'm interested in are how do we explain the fact that, by and large, people are quite cooperative, and even more importantly, what can we do to get people to be more cooperative, to be more willing to make sacrifices for the collective good?

There's been a lot of work on cooperation in different fields, and certain basic themes have emerged, what you might call mechanisms for promoting cooperation: ways that you can structure interactions so that people learn to cooperate. In general, if you imagine that most people in a group are doing the cooperative thing, paying costs to help the group as a whole, but there's some subset that's decided "Oh, we don't feel like it; we're just going to look out for ourselves," the selfish people will be better off. Then, either through an evolutionary process or an imitation process, that selfish behavior will spread.

The entire video and transcript is here.

Friday, November 14, 2014

Empathy: A motivated account

Jamil Zaki
Department of Psychology, Stanford University
IN PRESS at Psychological Bulletin

ABSTRACT

Empathy features a tension between automaticity and context dependency. On the one hand, people often take on each other’s states reflexively and outside of awareness. On the other hand, empathy exhibits deep context dependence, shifting with characteristics of empathizers and situations. These two characteristics of empathy can be reconciled by acknowledging the key role of motivation in driving people to avoid or approach engagement with others’ emotions. In particular, at least three motives—suffering, material costs, and interference with competition—drive people to avoid empathy, and at least three motives—positive affect, affiliation, and social desirability—drive them to approach empathy. Would-be empathizers carry out these motives through regulatory strategies including situation selection, attentional modulation, and appraisal, which alter the course of empathic episodes. Interdisciplinary evidence highlights the motivated nature of empathy, and a motivated model holds wide-ranging implications for basic theory, models of psychiatric illness, and intervention efforts to maximize empathy.

The entire article is here.

Monday, November 3, 2014

The Value of Vengeance and the Demand for Deterrence.

Molly J. Crockett, Yagiz Özdemir, and Ernst Fehr
Journal of Experimental Psychology: General, Online First Publication, October 6, 2014

Abstract

Humans will incur costs to punish others who violate social norms. Theories of justice highlight 2 motives for punishment: a forward-looking deterrence of future norm violations and a backward-looking retributive desire to harm. Previous studies of costly punishment have not isolated how much people are willing to pay for retribution alone, because typically punishment both inflicts damage (satisfying the retributive motive) and communicates a norm violation (satisfying the deterrence motive). Here, we isolated retributive motives by examining how much people will invest in punishment when the punished individual will never learn about the punishment. Such “hidden” punishment cannot deter future norm violations but was nevertheless frequently used by both 2nd-party victims and 3rd-party observers of norm violations, indicating that retributive motives drive punishment decisions independently from deterrence goals. While self-reports of deterrence motives correlated with deterrence-related punishment behavior, self-reports of retributive motives did not correlate with retributive punishment behavior. Our findings reveal a preference for pure retribution that can lead to punishment without any social benefits.

The entire article is here, behind a paywall.

Friday, October 24, 2014

When do people cooperate? The neuroeconomics of prosocial decision making.

Declerck CH, Boone C, Emonds G. When do people cooperate? The neuroeconomics of prosocial decision making. Brain Cogn. 2013 Feb;81(1):95-117. doi: 10.1016/j.bandc.2012.09.009.

Abstract

Understanding the roots of prosocial behavior is an interdisciplinary research endeavor that has generated an abundance of empirical data across many disciplines. This review integrates research findings from different fields into a novel theoretical framework that can account for when prosocial behavior is likely to occur. Specifically, we propose that the motivation to cooperate (or not), generated by the reward system in the brain (extending from the striatum to the ventromedial prefrontal cortex), is modulated by two neural networks: a cognitive control system (centered on the lateral prefrontal cortex) that processes extrinsic cooperative incentives, and/or a social cognition system (including the temporo-parietal junction, the medial prefrontal cortex and the amygdala) that processes trust and/or threat signals. The independent modulatory influence of incentives and trust on the decision to cooperate is substantiated by a growing body of neuroimaging data and reconciles the apparent paradox between economic versus social rationality in the literature, suggesting that we are in fact wired for both. Furthermore, the theoretical framework can account for substantial behavioral heterogeneity in prosocial behavior. Based on the existing data, we postulate that self-regarding individuals (who are more likely to adopt an economically rational strategy) are more responsive to extrinsic cooperative incentives and therefore rely relatively more on cognitive control to make (un)cooperative decisions, whereas other-regarding individuals (who are more likely to adopt a socially rational strategy) are more sensitive to trust signals to avoid betrayal and recruit relatively more brain activity in the social cognition system. Several additional hypotheses with respect to the neural roots of social preferences are derived from the model and suggested for future research.

(cut)

6. Concluding remarks and directions for future research
Prosociality includes a wide array of behavior, including mutual cooperation, pure altruism, and the costly act of punishing norm violators. Neurologically, these behaviors are all motivated by neural networks dedicated to reward, indicating that prosocial acts (such as cooperating in a social dilemma) are carried out because they were desired and feel good. However, the underlying reasons for the pleasant feelings associated with cooperative behavior may differ. First, cooperation may be valued because of accruing benefits, making it economically rational. This route to cooperation is made possible through brain regions in the lateral frontal cortex that generate cognitive control and process the presence or absence of extrinsic cooperative incentives. Second, consistent with proponents of social rationality, cooperation can also occur when people expect to experience reward through a “warm glow of giving.” Such intrinsically motivated cooperation yields collective benefits from which all group members may eventually benefit, but it can only be sustained when it exists in concert with a mechanism to detect and deter free-riding. Hence socially rational cooperation is facilitated by a neural network dedicated to social cognition that processes trust signals.

Wednesday, September 10, 2014

Morality and the Religious Mind: Why Theists and Nontheists Differ

By Azim Shariff, Jared Piazza, and Stephanie R. Kramer
Science and Society

Religions have come to be intimately tied to morality, and much recent research has shown that theists and nontheists differ in their moral behavior and decision making along several dimensions. Here we discuss how these empirical trends can be explained by fundamental differences in group commitment, motivations for prosociality, cognitive styles, and meta-ethics. We conclude by elucidating key areas of moral congruence.

The entire article is here.

Thursday, August 14, 2014

Why Can’t the Banking Industry Solve Its Ethics Problems?

By Neil Irwin
The New York Times
Originally published July 29, 2014

The financial crisis that nearly brought down the global economy was triggered in no small part by the aggressive culture and spotty ethics within the world’s biggest banks. But after six years and countless efforts to reform finance, the banking scandals never seem to end.

The important question that doesn’t yet have a satisfying answer is why.

Why are the ethical breaches at megabanks so routine that it is hard to keep them straight? Why do banks seem to have so many scandals — and ensuing multimillion dollar legal settlements — compared with other large companies like retailers, airlines or manufacturers?

The entire story is here.

Monday, June 16, 2014

Good for god? Religious motivation reduces perceived responsibility for and morality of good deeds

By Will M. Gervais
Journal of Experimental Psychology: General, Apr 28, 2014
doi: 10.1037/a0036678

Abstract

Many people view religion as a crucial source of morality. However, 6 experiments (total N = 1,078) revealed that good deeds are perceived as less moral if they are performed for religious reasons. Religiously motivated acts were seen as less moral than the exact same acts performed for other reasons (Experiments 1–2 and 6). Religious motivations also reduced attributions of intention and responsibility (Experiments 3–6), an effect that fully mediated the effect of religious motivations on perceived morality (Experiment 6). The effects were not explained by different perceptions of motivation orientation (i.e., intrinsic vs. extrinsic) across conditions (Experiment 4) and also were evident when religious upbringing led to an intuitive moral response (Experiment 5). Effects generalized across religious and nonreligious participants. When viewing a religiously motivated good deed, people infer that actually helping others is, in part, a side effect of other motivations rather than an end in itself. Thus, religiously motivated actors are seen as less responsible than secular actors for their good deeds, and their helping behavior is viewed as less moral than identical good deeds performed for either unclear or secular motivations.

The research article is here, behind a paywall.

Thursday, April 24, 2014

How We Hope: A Moral Psychology

Adrienne M. Martin, How We Hope: A Moral Psychology, Princeton University Press, 2014
ISBN 9780691151526.

Reviewed by Erica Lucast Stonestreet, College of St. Benedict/St. John’s University

Adrienne Martin’s book is a detailed analysis of an ordinary phenomenon that has not had much attention in recent moral psychology. The account extends the “orthodox” view of hope (as a desire for an outcome together with a belief in the outcome’s possibility) by adding what Martin calls an “incorporation” element: what distinguishes hope from other attitudes is the hopeful person’s incorporating the desire into her agency as a reason for hopeful activities. Her treatment seriously engages many historical and contemporary views of hope, ultimately aligning most closely with Kantian ideas of moral psychology.

The entire book review is here.