Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Attribution.

Thursday, September 21, 2023

The Myth of the Secret Genius

Brian Klaas
The Garden of Forking Paths
Originally posted 30 Nov 22

Here are two excerpts: 

A recent research study, involving a collaboration between physicists who model complex systems and an economist, however, has revealed why billionaires are so often mediocre people masquerading as geniuses. Using computer modelling, they developed a fake society in which there is a realistic distribution of talent among competing agents in the simulation. They then applied some pretty simple rules for their model: talent helps, but luck also plays a role.

Then, they tried to see what would happen if they ran and re-ran the simulation over and over.

What did they find? The most talented people in society almost never became extremely rich. As they put it, “the most successful individuals are not the most talented ones and, on the other hand, the most talented individuals are not the most successful ones.”

Why? The answer is simple. If you’ve got a society of, say, 8 billion people, there are literally billions of humans who are in the middle distribution of talent, the largest area of the Bell curve. That means that in a world that is partly defined by random chance, or luck, the odds that someone from the middle levels of talent will end up as the richest person in the society are extremely high.

Look at this first plot, in which the researchers show capital/success (being rich) on the vertical/Y-axis, and talent on the horizontal/X-axis. What’s clear is that society’s richest person is only marginally more talented than average, and there are a lot of people who are extremely talented that are not rich.

Then, they tried to figure out why this was happening. In their simulated world, lucky and unlucky events would affect agents every so often, in a largely random pattern. When they measured the frequency of luck or misfortune for any individual in the simulation, and then plotted it against becoming rich or poor, they found a strong relationship.

(cut)

The authors conclude by stating, “Our results highlight the risks of the paradigm that we call ‘naive meritocracy’, which fails to give honors and rewards to the most competent people, because it underestimates the role of randomness among the determinants of success.”

Indeed.
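To make the mechanism in the excerpt concrete, here is a minimal sketch of a talent-versus-luck simulation in the spirit of the study Klaas describes. It is not the researchers' actual model: the population size, event probabilities, and the rule that a lucky event doubles an agent's capital (when talent converts it) while an unlucky event halves it are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed for this sketch, not taken from the paper).
N_AGENTS = 1000   # simulated people
N_STEPS = 80      # time steps in one simulated career
P_EVENT = 0.10    # chance an agent is hit by a random event each step
P_LUCKY = 0.50    # if an event occurs, chance it is lucky rather than unlucky

# Talent follows a bell curve: most agents sit near the middle of the distribution.
talent = np.clip(rng.normal(0.6, 0.1, N_AGENTS), 0.0, 1.0)
capital = np.full(N_AGENTS, 10.0)  # everyone starts with the same capital

for _ in range(N_STEPS):
    hit = rng.random(N_AGENTS) < P_EVENT       # which agents experience an event
    lucky = rng.random(N_AGENTS) < P_LUCKY     # lucky or unlucky event?
    converts = rng.random(N_AGENTS) < talent   # talent helps exploit lucky breaks
    capital = np.where(hit & lucky & converts, capital * 2, capital)
    capital = np.where(hit & ~lucky, capital / 2, capital)

richest = int(np.argmax(capital))
print(f"Richest agent's talent: {talent[richest]:.2f} "
      f"(population mean {talent.mean():.2f}, max {talent.max():.2f})")
```

Re-running this with different random seeds almost never crowns the most talented agent as the richest; moderately talented agents who happen to catch several lucky breaks dominate, which is the pattern the excerpt reports.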


Here is my summary:

The myth of the secret genius: The belief that some people are just born with natural talent and that there is nothing we can do to achieve the same level of success.

The importance of hard work: The vast majority of successful people are not geniuses. They are simply people who have worked hard and persevered in the face of setbacks.

The power of luck: Luck plays a role in everyone's success. Some people are luckier than others, yet most people fail to factor luck and other external variables into their assessments of success. This bias is another form of the Fundamental Attribution Error.

The importance of networks: Our networks play a big role in our success. We need to be proactive in building relationships with people who can help us achieve our goals.

Saturday, May 27, 2023

Costly Distractions: Focusing on Individual Behavior Undermines Support for Systemic Reforms

Hagmann, D., Liao, Y., Chater, N., & 
Loewenstein, G. (2023, April 22). 

Abstract

Policy challenges can typically be addressed both through systemic changes (e.g., taxes and mandates) and by encouraging individual behavior change. In this paper, we propose that, while in principle complementary, systemic and individual perspectives can compete for the limited attention of people and policymakers. Thus, directing policies in one of these two ways can distract the public’s attention from the other—an “attentional opportunity cost.” In two pre-registered experiments (n = 1,800) covering three high-stakes domains (climate change, retirement savings, and public health), we show that when people learn about policies targeting individual behavior (such as awareness campaigns), they are more likely to themselves propose policies that target individual behavior, and to hold individuals rather than organizational actors responsible for solving the problem, than are people who learned about systemic policies (such as taxes and mandates, Study 1). This shift in attribution of responsibility has behavioral consequences: people exposed to individual interventions are more likely to donate to an organization that educates individuals rather than one seeking to effect systemic reforms (Study 2). Policies targeting individual behavior may, therefore, have the unintended consequence of redirecting attention and attributions of responsibility away from systemic change to individual behavior.

Discussion

Major policy problems likely require a realignment of systemic incentives and regulations, as well as measures aimed at individual behavior change. In practice, systemic reforms have been difficult to implement, in part due to political polarization and in part because concentrated interest groups have lobbied against changes that threaten their profits. This has shifted the focus to individual behavior. The past two decades, in particular, have seen increasing popularity of ‘nudges’: interventions that can influence individual behavior without substantially changing economic incentives (Thaler & Sunstein, 2008). For example, people may be defaulted into green energy plans (Sunstein & Reisch, 2013) or 401(k) contributions (Madrian & Shea, 2001), and restaurants may vary whether they place calorie labels on the left or the right side of the menu (Dallas, Liu, & Ubel, 2019). These interventions have enjoyed tremendous popularity, because they can often be implemented even when opposition to systemic reforms is too large to change economic incentives. Moreover, it has been argued that nudges incur low economic costs, making them extremely cost effective even when the gains are small on an absolute scale (Tor & Klick, 2022).

In this paper, we document an important and so far unacknowledged cost of such interventions targeting individual behavior, first postulated by Chater and Loewenstein (2022). We show that when people learn about interventions that target individual behavior, they shift their attention away from systemic reforms compared to those who learn about systemic reforms. Across two experiments, we find that this subsequently affects their attitudes and behaviors. Specifically, they become less likely to propose systemic policy reforms, hold governments less responsible for solving the policy problem, and are less likely to support organizations that seek to promote systemic reform.

The findings of this study may not be news to corporate PR specialists. Indeed, as would be expected according to standard political economy considerations (e.g., Stigler, 1971), organizations act in a way that is consistent with a belief in this attentional opportunity cost account. Initiatives that have captured the public’s attention, including recycling campaigns and carbon footprint calculators, have been devised by the very organizations that stood to lose from further regulation that might have hurt their bottom line (e.g., bottle bills and carbon taxes, respectively), potentially distracting individual citizens, policymakers, and the wider public debate from systemic changes that are likely to be required to shift substantially away from the status quo.

Sunday, April 23, 2023

Produced and counterfactual effort contribute to responsibility attributions in collaborative tasks

Xiang, Y., Landy, J., et al. (2023, March 8). 
PsyArXiv
https://doi.org/10.31234/osf.io/jc3hk

Abstract

How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator's part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort. This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.

General discussion

Responsibility for the outcomes of collaborations is often distributed unevenly. For example, the lead author on a project may get the bulk of the credit for a scientific discovery, the head of a company may shoulder the blame for a failed product, and the lazier of two friends may get the greater share of blame for failing to lift a couch. However, past work has provided conflicting accounts of the computations that drive responsibility attributions in collaborative tasks. Here, we compared each of these accounts against human responsibility attributions in a simple collaborative task where two agents attempted to lift a box together. We contrasted seven models that predict responsibility judgments based on metrics proposed in past work, comprising three production-style models (Force, Strength, Effort), three counterfactual-style models (Focal-agent-only, Non-focal-agent-only, Both-agent), and one Ensemble model that combines the best-fitting production- and counterfactual-style models. Experiment 1a and Experiment 1b showed that the Effort model and the Both-agent counterfactual model capture the data best among the production-style models and the counterfactual-style models, respectively. However, neither provided a fully adequate fit on its own. We then showed that predictions derived from the average of these two models (i.e., the Ensemble model) outperform all other models, suggesting that responsibility judgments are likely a combination of production-style reasoning and counterfactual reasoning. Further evidence came from analyses performed on individual participants, which revealed that the Ensemble model explained more participants’ data than any other model. These findings were subsequently supported by Experiment 2a and Experiment 2b, which replicated the results when additional force information was shown to the participants, and by Experiment 3, which validated the model predictions with a broader range of stimuli.


Summary: The effort exerted by each member and counterfactual thinking both play a crucial role in attributing responsibility for success or failure in collaborative tasks. This study suggests that higher effort leads to more responsibility for success, while lower effort leads to more responsibility for failure.
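As a rough illustration of the Ensemble model described in the discussion above, the sketch below simply averages a production-style (effort) score with a counterfactual-style score for a focal agent. The function name, the 0-to-1 scale, and the equal weighting are assumptions for illustration; the paper's actual model comparison is more involved.

```python
def ensemble_responsibility(effort_score: float, counterfactual_score: float) -> float:
    """Hypothetical helper: combine a production-style (effort) prediction with a
    counterfactual-style (both-agent) prediction by simple averaging."""
    return 0.5 * (effort_score + counterfactual_score)

# Example: an agent who exerted a lot of effort (0.8 on a 0-1 scale) but whose effort
# mattered little counterfactually (0.3) receives an intermediate responsibility rating.
print(ensemble_responsibility(0.8, 0.3))  # 0.55
```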

Sunday, September 11, 2022

Mental control and attributions of blame for negligent wrongdoing

Murray, S., Krasich, K., et al. (2022).
Journal of Experimental Psychology: 
General. Advance online publication.
https://doi.org/10.1037/xge0001262

Abstract

Third-personal judgments of blame are typically sensitive to what an agent knows and desires. However, when people act negligently, they do not know what they are doing and do not desire the outcomes of their negligence. How, then, do people attribute blame for negligent wrongdoing? We propose that people attribute blame for negligent wrongdoing based on perceived mental control, or the degree to which an agent guides their thoughts and attention over time. To acquire information about others’ mental control, people self-project their own perceived mental control to anchor third-personal judgments about mental control and concomitant responsibility for negligent wrongdoing. In four experiments (N = 841), we tested whether perceptions of mental control drive third-personal judgments of blame for negligent wrongdoing. Study 1 showed that the ease with which people can counterfactually imagine an individual being non-negligent mediated the relationship between judgments of control and blame. Studies 2a and 2b indicated that perceived mental control has a strong effect on judgments of blame for negligent wrongdoing and that first-personal judgments of mental control are moderately correlated with third-personal judgments of blame for negligent wrongdoing. Finally, we used an autobiographical memory manipulation in Study 3 to make personal episodes of forgetfulness salient. Participants for whom past personal episodes of forgetfulness were made salient judged negligent wrongdoers less harshly compared with a control group for whom past episodes of negligence were not salient. Collectively, these findings suggest that first-personal judgments of mental control drive third-personal judgments of blame for negligent wrongdoing and indicate a novel role for counterfactual thinking in the attribution of responsibility.

Conclusion

Models of blame attribution predict that judgments of blame for negligent wrongdoing are sensitive to the perceived capacity of the individual to avoid being negligent. In this paper, we explored two extensions of these models. The first is that people use perceived degree of mental control to inform judgments of blame for negligent wrongdoing. Information about mental control is acquired through self-projection. These results suggest a novel role for counterfactual thinking in attributing blame, namely that counterfactual thinking is the process whereby people self-project to acquire information that is used to inform judgments of blame.

Saturday, April 2, 2022

Race and reactions to women's expressions of anger at work: Examining the effects of the "angry Black woman" stereotype

Motro, D., Evans, J. B., Ellis, A., & Benson, L. 
(2022). The Journal of Applied Psychology, 
107(1), 142–152.
https://doi.org/10.1037/apl0000884

Abstract

Across two studies (n = 555), we examine the detrimental effects of the "angry black woman" stereotype in the workplace. Drawing on parallel-constraint-satisfaction theory, we argue that observers will be particularly sensitive to expressions of anger by black women due to widely held stereotypes. In Study 1, we examine a three-way interaction among anger, race, and gender, and find that observers are more likely to make internal attributions for expressions of anger when an individual is a black woman, which then leads to worse performance evaluations and assessments of leadership capability. In Study 2, we focus solely on women and expand our initial model by examining stereotype activation as a mechanism linking the effects of anger and race on internal attributions. We replicated findings from Study 1 and found support for stereotype activation as an underlying mechanism. We believe our work contributes to research on race, gender, and leadership, and highlights an overlooked stereotype in the management literature. Theoretical and practical implications are discussed.

(cut)

Conclusion 

Black employees have to overcome a myriad of hurdles at work based on the color of their skin. For black women, our research indicates that there may be additional considerations when identifying biases at work. Anger is an emotion that employees may display in a variety of contexts, often stemming from a perceived injustice. Bolstered by cultural reinforcement, our studies suggest that the angry black woman stereotype can affect how individuals view displays of anger at work. The angry black woman stereotype represents another hurdle for black women, and we urge future research to expand upon our understanding of the effects of perceptions on black women at work.

Monday, February 14, 2022

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions

Klebl, C., Rhee, J.J., Greenaway, K.H. et al. 
J Nonverbal Behav (2021). 
https://doi.org/10.1007/s10919-021-00388-w

Abstract

Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.

From the Discussion

The present investigation advances the Beauty-is-Good stereotype literature. Our findings are consistent with extensive research showing that people attribute positive traits more strongly to attractive compared to unattractive individuals (Dion et al., 1972). Most significantly, the present studies add to the previous literature by providing evidence that attractiveness does not bias the attribution of positive traits uniformly. Attractiveness especially biases the attribution of moral traits compared to positive non-moral traits, constituting an update to the Beauty-is-Good stereotype. One possible explanation for this selective bias is that because people are particularly motivated to assess socially important traits—traits that help us quickly decide who our allies are (Goodwin et al., 2014)—physical attractiveness selectively biases the attribution of those traits over socially less important traits. While in many instances this may allow us to assess moral character quickly and accurately (cf. Ambady et al., 2000) and thus obtain valuable information about whether the target is a threat or ally, where morally relevant information is absent (such as during initial impression formation) this motivation to assess moral character may lead to an over-reliance on heuristic cues.

Tuesday, June 8, 2021

Action and inaction in moral judgments and decisions: Meta-analysis of omission bias omission-commission asymmetries

Jamison, J., Yay, T., & Feldman, G.
Journal of Experimental Social Psychology
Volume 89, July 2020, 103977

Abstract

Omission bias is the preference for harm caused through omissions over harm caused through commissions. In a pre-registered experiment (N = 313), we successfully replicated an experiment from Spranca, Minsk, and Baron (1991), considered a classic demonstration of the omission bias, examining generalizability to a between-subject design with extensions examining causality, intent, and regret. Participants in the harm through commission condition(s) rated harm as more immoral and attributed higher responsibility compared to participants in the harm through omission condition (d = 0.45 to 0.47 and d = 0.40 to 0.53). An omission-commission asymmetry was also found for perceptions of causality and intent, in that commissions were attributed stronger action-outcome links and higher intentionality (d = 0.21 to 0.58). The effect for regret was opposite from the classic findings on the action-effect, with higher regret for inaction over action (d = −0.26 to −0.19). Overall, higher perceived causality and intent were associated with higher attributed immorality and responsibility, and with lower perceived regret.

From the Discussion

Regret: Deviation from the action-effect 

The classic action-effect (Kahneman & Tversky, 1982) findings were that actions leading to a negative outcome are regretted more than inactions leading to the same negative outcomes. We added a regret measure to examine whether the action-effect findings would extend to situations of morality involving intended harmful behavior. Our findings were opposite to the expected action-effect omission-commission asymmetry with participants rating omissions as more regretted than commissions (d = 0.18 to 0.26).  

One explanation for this surprising finding may be an intermingling of the perception of an actor’s regret for their behavior with their regret for the outcome. In typical action-effect scenarios, actors behave in a way that is morally neutral but are faced with an outcome that deviates from expectations, such as losing money over an investment. In this study’s omission bias scenarios, the actors behaved immorally to harm others for personal or interpersonal gain, and then were faced with an outcome that deviates from expectation. We hypothesized that participants would perceive actors as being more regretful for taking action that would immorally harm another person rather than allowing that harm through inaction. Yet it is plausible that participants were focused on the regret that actors would feel for not taking more direct action towards their goal of personal or interpersonal gain.

Another possible explanation for the regret finding is the side-taking hypothesis (DeScioli, 2016; DeScioli & Kurzban, 2013). This states that group members side against a wrongdoer who has performed an action that is perceived as morally wrong, in part by attributing a lack of remorse or regret to that wrongdoer. The negative relationship observed between the positive characteristic of regret and the negative characteristics of immorality, causality, and intentionality supports this explanation. Future research may be able to explore the true mechanisms of regret in such scenarios.

Wednesday, May 20, 2020

People judge others to have more control over beliefs than they themselves do.

Cusimano, C., & Goodwin, G. (2020, April 3).
https://doi.org/10.1037/pspa0000198

Abstract

People attribute considerable control to others over what those individuals believe. However, no work to date has investigated how people judge their own belief control, nor whether such judgments diverge from their judgments of others. We addressed this gap in seven studies and found that people judge others to be more able to voluntarily change what they believe than they themselves are. This occurs when people judge others who disagree with them (Study 1) as well as others who agree with them (Studies 2-5, 7), and it occurs when people judge strangers (Studies 1-2, 4-5) as well as close others (Studies 3, 7). It appears not to be explained by impression management or self-enhancement motives (Study 3). Rather, there is a discrepancy between the evidentiary constraints on belief change that people access via introspection, and their default assumptions about the ease of voluntary belief revision. That is, people spontaneously tend to think about the evidence that supports their beliefs, which leads them to judge their beliefs as outside their control. But they apparently fail to generalize this feeling of constraint to others, and similarly fail to incorporate it into their generic model of beliefs (Studies 4-7). We discuss the implications of our findings for theories of ideology-based conflict, actor-observer biases, naïve realism, and on-going debates regarding people’s actual capacity to voluntarily change what they believe.

Conclusion

The present paper uncovers an important discrepancy in how people think about their own and others’ beliefs; namely, that people judge that others have a greater capacity to voluntarily change their beliefs than they, themselves, do. Put succinctly, when someone says, “You can choose to believe in God, or you can choose not to believe in God,” they may often mean that you can choose but they cannot. We have argued that this discrepancy derives from two distinct ways people reason about belief control: either by consulting their default theory of belief, or by introspecting and reporting what they feel when they consider voluntarily changing a belief. When people apply their default theory of belief, they judge that they and others have considerable control over what they believe. But, when people consider the possibility of trying to change a particular belief, they tend to report that they have less control. Because people do not have access to the experiences of others, they rely on their generic theory of beliefs when judging others’ control. Discrepant attributions of control for self and other emerge as a result. This may in turn have important downstream effects on people’s behavior during disagreements. More work is needed to explore these downstream effects, as well as to understand how much control people actually have over what they believe. Predictably, we find the results from these studies compelling, but admit that readers may believe whatever they please.

The research is here.

Tuesday, November 19, 2019

Moral Responsibility

Talbert, Matthew
The Stanford Encyclopedia of Philosophy 
(Winter 2019 Edition), Edward N. Zalta (ed.)

Making judgments about whether a person is morally responsible for her behavior, and holding others and ourselves responsible for actions and the consequences of actions, is a fundamental and familiar part of our moral practices and our interpersonal relationships.

The judgment that a person is morally responsible for her behavior involves—at least to a first approximation—attributing certain powers and capacities to that person, and viewing her behavior as arising (in the right way) from the fact that the person has, and has exercised, these powers and capacities. Whatever the correct account of the powers and capacities at issue (and canvassing different accounts is the task of this entry), their possession qualifies an agent as morally responsible in a general sense: that is, as one who may be morally responsible for particular exercises of agency. Normal adult human beings may possess the powers and capacities in question, and non-human animals, very young children, and those suffering from severe developmental disabilities or dementia (to give a few examples) are generally taken to lack them.

To hold someone responsible involves—again, to a first approximation—responding to that person in ways that are made appropriate by the judgment that she is morally responsible. These responses often constitute instances of moral praise or moral blame (though there may be reason to allow for morally responsible behavior that is neither praiseworthy nor blameworthy: see McKenna 2012: 16–17 and M. Zimmerman 1988: 61–62). Blame is a response that may follow on the judgment that a person is morally responsible for behavior that is wrong or bad, and praise is a response that may follow on the judgment that a person is morally responsible for behavior that is right or good.

The information is here.

Monday, June 24, 2019

Motivated free will belief: The theory, new (preregistered) studies, and three meta-analyses

Clark, C. J., Winegard, B. M., & Shariff, A. F. (2019).
Manuscript submitted for publication.

Abstract

Do desires to punish lead people to attribute more free will to individual actors (motivated free will attributions) and to stronger beliefs in human free will (motivated free will beliefs) as suggested by prior research? Results of 14 new (7 preregistered) studies (n=4,014) demonstrated consistent support for both of these. These findings consistently replicated in studies (k=8) in which behaviors meant to elicit desires to punish were rated as equally or less counternormative than behaviors in control conditions. Thus, greater perceived counternormativity cannot account for these effects. Additionally, three meta-analyses of the existing data (including eight vignette types and eight free will judgment types) found support for motivated free will attributions (k=22; n=7,619; r=.25, p<.001) and beliefs (k=27; n=8,100; r=.13, p<.001), which remained robust after removing all potential moral responsibility confounds (k=26; n=7,953; r=.12, p<.001). The size of these effects varied by vignette type and free will belief measurement. For example, presenting the FAD+ free will belief subscale mixed among three other subscales (as in Monroe and Ysidron’s [2019] failed replications) produced a smaller average effect size (r=.04) than shorter and more immediate measures (rs=.09-.28). Also, studies with neutral control conditions produced larger effects (Attributions: r=.30; Beliefs: rs=.14-.16) than those with control conditions involving bad actions (Attributions: r=.05; Beliefs: rs=.04-.06). Removing these two kinds of studies from the meta-analyses produced larger average effect sizes (Attributions: r=.28; Beliefs: rs=.17-.18). We discuss the relevance of these findings for past and future research and the significance of these findings for human responsibility.

From the Discussion Section:

We suspect that motivated free will beliefs have become more common as society has become more humane and more concerned about proportionate punishment. Many people now assiduously reflect upon their own society’s punitive practices and separate those who deserve to be punished from those who are incapable of being fully responsible for their actions. Free will is crucial here because it is often considered a prerequisite for moral responsibility (Nichols & Knobe, 2007; Sarkissian et al., 2010; Shariff et al., 2014). Therefore, when one is motivated to punish another person, one is also motivated to inflate free will beliefs and free will attributions to specific perpetrators as a way to justify punishing the person.

A preprint can be downloaded here.

Monday, November 12, 2018

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Tuesday, September 27, 2016

Alexithymia increases moral acceptability of accidental harms

Indrajeet Patil and Giorgia Silani
Journal of Cognitive Psychology, 2014 Vol. 26, No. 5, 597

Abstract

Previous research shows that when people judge moral acceptability of others' harmful behaviour, they not only take into account information about the consequences of the act but also an actor's belief while carrying out the act. A two-process model has been proposed to account for this pattern of moral judgements and posits: (1) a causal process that detects the presence of a harmful outcome and is motivated by empathic aversion stemming from victim suffering; (2) a mental state-based process that attributes beliefs, desires, intentions, etc. to the agent in question and is motivated by imagining personally carrying out harmful actions. One prediction of this model would be that people with personality traits associated with empathy deficits would find accidental harms more acceptable, not because they focus on innocent intentions but because they have reduced concern for the victim's well-being. In this study, we show that one such personality trait, viz. alexithymia, indeed exhibits the predicted pattern and that this increased acceptability of accidental harm in alexithymia is mediated by reduced dispositional empathic concern. Results attest to the validity of the two-process model of intent-based moral judgements and emphasise the key role affective empathy plays in harm-based moral judgements.

The article is here.


Friday, September 2, 2016

But Did They Do It on Purpose?

By Dan Falk
Scientific American
Originally published July 1, 2016

Here is an excerpt:

In all societies, the most severe transgressions draw the harshest judgments, but cultures differ on whether or not intent is weighed heavily in such crimes. One scenario, for example, asked respondents to imagine that someone had poisoned a communal well, harming dozens of villagers. In many nonindustrial societies, this was seen as the most severe wrongdoing—and yet intent seemed to matter very little. The very act of poisoning the well “was judged to be so bad that, whether it was on purpose or accidental, it ‘maxed out’ the badness judgments,” explains lead author H. Clark Barrett of the University of California, Los Angeles. “They accepted that it was accidental but said it's your responsibility to be vigilant in cases that cause that degree of harm.”

The findings also suggest that people in industrial societies are more likely in general than those in traditional societies to consider intent. This, Barrett says, may reflect the fact that people raised in the West are immersed in complex sets of rules; judges, juries and law books are just the tip of the moral iceberg. “In small-scale societies, judgment may be equally sophisticated, but it isn't codified in these elaborate systems,” he notes. “In some of these societies, people argue about moral matters for just as long as they do in any court in the U.S.”

The article is here.

Tuesday, July 19, 2016

When and Why We See Victims as Responsible: The Impact of Ideology on Attitudes Toward Victims

Laura Niemi and Liane Young
Pers Soc Psychol Bull June 23, 2016

Abstract

Why do victims sometimes receive sympathy for their suffering and at other times scorn and blame? Here we show a powerful role for moral values in attitudes toward victims. We measured moral values associated with unconditionally prohibiting harm (“individualizing values”) versus moral values associated with prohibiting behavior that destabilizes groups and relationships (“binding values”: loyalty, obedience to authority, and purity). Increased endorsement of binding values predicted increased ratings of victims as contaminated (Studies 1-4); increased blame and responsibility attributed to victims, increased perceptions of victims’ (versus perpetrators’) behaviors as contributing to the outcome, and decreased focus on perpetrators (Studies 2-3). Patterns persisted controlling for politics, just world beliefs, and right-wing authoritarianism. Experimentally manipulating linguistic focus off of victims and onto perpetrators reduced victim blame. Both binding values and focus modulated victim blame through victim responsibility attributions. Findings indicate the important role of ideology in attitudes toward victims via effects on responsibility attribution.

The article is here.

Friday, June 5, 2015

The thought father: Psychologist Daniel Kahneman on luck

By Richard Godwin
The London Evening Standard
Originally published March 18, 2014

Here are two excerpts:

Through a series of zany experiments involving roulette wheels and loaded dice, Tversky and Kahneman showed just how easily we can be led into making irrational decisions — even judges sentencing criminals were influenced by being shown completely random numbers. They also showed the sinister effects of priming (how, when people are “primed” with images of money, they behave in a more selfish way). Many such mental illusions still have an effect when subjects are explicitly warned to look out for them. “If it feels right, we go along with it,” as Kahneman says. It is usually afterwards that we engage our System 2s, if at all, to provide reasons for acting as we did after the fact.

(cut)

Do teach yourself to think long-term. The “focusing illusion” makes the here and now appear the most pressing concern but that can lead to skewed results.

Do be fair. Research shows that employers who are unjust are punished by reduced productivity, and unfair prices lead to a loss in sales.

Do co-operate. What Kahneman calls “bias blindness” means it’s easier to recognise the errors of others than our own, so ask for constructive criticism and be prepared to call out others on what they could improve.

The entire article is here.


Tuesday, December 30, 2014

The Dark Side of Free Will

Published on Dec 9, 2014

This talk was given at a local TEDx event, produced independently of the TED Conferences. What would happen if we all believed free will didn't exist? As a free will skeptic, Dr. Gregg Caruso contends our society would be better off believing there is no such thing as free will.

Thursday, December 18, 2014

Value Judgments and the True Self

By George E. Newman, Paul Bloom, & Joshua Knobe
Pers Soc Psychol Bull February 2014 vol. 40 no. 2 203-216

Abstract

The belief that individuals have a “true self” plays an important role in many areas of psychology as well as everyday life. The present studies demonstrate that people have a general tendency to conclude that the true self is fundamentally good—that is, that deep inside every individual, there is something motivating him or her to behave in ways that are virtuous. Study 1 finds that observers are more likely to see a person’s true self reflected in behaviors they deem to be morally good than in behaviors they deem to be bad. Study 2 replicates this effect and demonstrates that observers’ own moral values influence what they judge to be another person’s true self. Finally, Study 3 finds that this normative view of the true self is independent of the particular type of mental state (beliefs vs. feelings) that is seen as responsible for an agent’s behavior.

The entire article is here.

Tuesday, December 2, 2014

Attributions to God and Satan About Life-Altering Events.

Ray, Shanna D.; Lockman, Jennifer D.; Jones, Emily J.; Kelly, Melanie H.
Psychology of Religion and Spirituality, Sep 22, 2014, No Pagination Specified. http://dx.doi.org/10.1037/a0037884

Abstract

When faced with negative life events, people often interpret the events by attributing them to the actions of God or Satan (Lupfer, Tolliver, & Jackson, 1996; Ritzema, 1979). To explore these attributions, we conducted a mixed-method study of Christians who were college freshmen. Participants read vignettes depicting a negative life event that had a beginning and an end that was systematically varied. Participants assigned a larger role to God in vignettes where an initially negative event (e.g., relationship breakup) led to a positive long-term outcome (e.g., meeting someone better) than with a negative (e.g., depression and loneliness) or unspecified long-term outcome. Participants attributed a lesser role to Satan when there was positive outcome rather than negative or unspecified outcome. Participants also provided their own narratives, recounting personal experiences that they attributed to the actions of God or Satan. Participant-supplied narratives often demonstrated “theories” about the actions of God, depicting God as being involved in negative events as a rescuer, comforter, or one who brings positive out of the negative. Satan-related narratives were often lacking in detail or a clear theory of how Satan worked. Participants who did provide this information depicted Satan as acting primarily through influencing one’s thoughts and/or using other people to encourage one’s negative behavior.

The entire article is here.

Monday, June 16, 2014

Good for god? Religious motivation reduces perceived responsibility for and morality of good deeds

By Will M. Gervais
Journal of Experimental Psychology: General, Apr 28, 2014
doi: 10.1037/a0036678

Abstract

Many people view religion as a crucial source of morality. However, 6 experiments (total N = 1,078) revealed that good deeds are perceived as less moral if they are performed for religious reasons. Religiously motivated acts were seen as less moral than the exact same acts performed for other reasons (Experiments 1–2 and 6). Religious motivations also reduced attributions of intention and responsibility (Experiments 3–6), an effect that fully mediated the effect of religious motivations on perceived morality (Experiment 6). The effects were not explained by different perceptions of motivation orientation (i.e., intrinsic vs. extrinsic) across conditions (Experiment 4) and also were evident when religious upbringing led to an intuitive moral response (Experiment 5). Effects generalized across religious and nonreligious participants. When viewing a religiously motivated good deed, people infer that actually helping others is, in part, a side effect of other motivations rather than an end in itself. Thus, religiously motivated actors are seen as less responsible than secular actors for their good deeds, and their helping behavior is viewed as less moral than identical good deeds performed for either unclear or secular motivations.

The research article is here, behind a paywall.