Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Norms.

Monday, December 14, 2020

Should you save the more useful? The effect of generality on moral judgments about rescue and indirect effects

Caviola, L., Schubert, S., & Mogensen, A. 
(2020, October 23). 

Abstract

Across eight experiments (N = 2,310), we studied whether people would prioritize rescuing individuals who may be thought to contribute more to society. We found that participants were generally dismissive of general rules that prioritize more socially beneficial individuals, such as doctors instead of unemployed people. By contrast, participants were more supportive of one-off decisions to save the life of a more socially beneficial individual, even when such cases were the same as those covered by the rule. This generality effect occurred robustly even when controlling for various factors. It occurred when the decision-maker was the same in both cases, when the pairs of people differing in their indirect social utility were varied, when the scenarios were varied, when the participant samples came from different countries, and when the general rule only covered cases exactly the same as the situation described in the one-off condition. The effect occurred even when the general rule was introduced via a concrete precedent case. Participants’ tendency to be more supportive of the one-off proposal than the general rule was significantly reduced when they evaluated the two proposals jointly as opposed to separately. Finally, the effect also occurred in sacrificial moral dilemmas, suggesting it is a more general phenomenon in certain moral contexts. We discuss possible explanations of the effect, including concerns about negative consequences of the rule and a deontological aversion to making difficult trade-off decisions unless they are absolutely necessary.

General Discussion

Across our studies we found evidence for a generality effect: participants were more supportive of a proposal to prioritize people who are more beneficial to society than others when it applied to a concrete one-off situation than when it described a general rule. The effect held robustly even when controlling for various factors. It occurred even when the decision-maker was the same in both cases (Study 2), when the pairs of people differing in their indirect social utility were varied (Study 3), when the scenarios were varied (Study 3, Study 6), when the participant samples came from different countries (Study 3), and when the rule covered only cases exactly the same as the one-off case (Study 6). The effect also occurred when the general rule was introduced via a concrete precedent case (Studies 4 and 6). The tendency to be more supportive of the one-off proposal than the general rule was significantly reduced when participants evaluated the two proposals jointly as opposed to separately (Study 7). Finally, we found that the effect also occurs in sacrificial moral dilemmas (Study 8), suggesting that it is a more general phenomenon in moral contexts.

Sunday, November 22, 2020

The logic of universalization guides moral judgment

Levine, S., et al.
PNAS, October 20, 2020
117(42), 26158-26169
First published October 2, 2020

Abstract

To explain why an action is wrong, we sometimes say, “What if everybody did that?” In other words, even if a single person’s behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and report comparable patterns of judgment in children. We conclude that, alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalization holds an important place in our moral minds.

Significance

Humans have several different ways to decide whether an action is wrong: We might ask whether it causes harm or whether it breaks a rule. Moral psychology attempts to understand the mechanisms that underlie moral judgments. Inspired by theories of “universalization” in moral philosophy, we describe a mechanism that is complementary to existing approaches, demonstrate it in both adults and children, and formalize a precise account of its cognitive mechanisms. Specifically, we show that, when making judgments in novel circumstances, people adopt moral rules that would lead to better consequences if (hypothetically) universalized. Universalization may play a key role in allowing people to construct new moral rules when confronting social dilemmas such as voting and environmental stewardship.
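To make the mechanism concrete, here is a minimal sketch in Python. It is our illustration of the universalization logic described above, not the authors' model or code; the threshold-style outcome function and all names (utility, universalized_wrongness, the fishing example) are hypothetical choices for exposition.

```python
# Minimal sketch of universalization: judge an action by the utility of
# the hypothetical world in which everyone interested in it performs it.

def utility(num_actors: int, threshold: int) -> float:
    """Toy outcome function: the action is harmless until the number of
    actors crosses a collective-harm threshold (e.g., over-fishing)."""
    return 0.0 if num_actors <= threshold else float(threshold - num_actors)

def universalized_wrongness(num_interested: int, threshold: int) -> float:
    """Wrongness of one person's action, graded by how much outcomes
    worsen when the action is universalized among all interested parties."""
    u_universalized = utility(num_interested, threshold)  # everyone acts
    u_baseline = utility(0, threshold)                    # no one acts
    return u_baseline - u_universalized

# One fisher adopting a more efficient hook is harmless; if 10 people
# want to adopt it but the stock tolerates only 4, universalization
# predicts condemnation even of a single adopter.
print(universalized_wrongness(num_interested=1, threshold=4))   # 0.0
print(universalized_wrongness(num_interested=10, threshold=4))  # 6.0
```

In the paper, richer versions of this computation are tested quantitatively against graded human judgments.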

Thursday, October 8, 2020

Humans display a ‘cooperative phenotype’ that is domain general and temporally stable

Peysakhovich, A., Nowak, M. & Rand, D.
Nat Commun 5, 4939 (2014).
https://doi.org/10.1038/ncomms5939

Abstract

Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behaviour in one cooperative context is related to their behaviour in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination towards paying costs to benefit others, which we dub the ‘cooperative phenotype’.

From the Discussion

Here we have presented a range of evidence in support of a ‘cooperative phenotype’: cooperation in anonymous, one-shot economic games reflects an inclination to help others that has a substantial degree of domain generality and temporal stability. The desire to pay costs to benefit others, so central to theories of the evolution and maintenance of cooperation, is psychologically relevant and can be studied using economic games. Furthermore, our data suggest that norm-enforcing punishment and competition may not be part of this behavioral profile: the cooperative phenotype appears to be particular to cooperation.

Phenotypes are displayed characteristics, produced by the interaction of genes and environment. Though we have shown evidence of the existence (and boundaries) of the cooperative phenotype, our experiments do not illuminate whether cooperators are born or made (or something in between). Previous work has shown that cooperation varies substantially across cultures, and is influenced by previous experience, indicating an environmental contribution. On the other hand, a substantial heritable component of cooperative preferences has also been demonstrated, as well as substantial prosocial behaviour and preferences among babies and young children. The ‘phenotypic assay’ for cooperation offered by economic games provides a powerful tool for future researchers to illuminate this issue, teasing apart the building blocks of the cooperative phenotype.

The research is here.

Wednesday, September 23, 2020

Do Conflict of Interest Disclosures Facilitate Public Trust?

Cain, D. M., & Banker, M.
AMA J Ethics. 2020;22(3): E232-238.
doi: 10.1001/amajethics.2020.232.

Abstract

Lab experiments disagree on the efficacy of disclosure as a remedy to conflicts of interest (COIs). Some experiments suggest that disclosure has perverse effects, although others suggest these are mitigated by real-world factors (eg, feedback, sanctions, norms). This article argues that experiments reporting positive effects of disclosure often lack external validity: disclosure works best in lab experiments that make it unrealistically clear that the one disclosing is intentionally lying. We argue that even disclosed COIs remain dangerous in settings such as medicine where bias is often unintentional rather than the result of intentional corruption, and we conclude that disclosure might not be the panacea many seem to take it to be.

Introduction

While most medical professionals have the best intentions, conflicts of interest (COIs) can unintentionally bias their advice. For example, physicians might have consulting relationships with a company whose products they prescribe. Physicians are increasingly required to limit COIs and disclose any that exist. When regulators decide whether to let a COI stand, the question becomes: How well does disclosure work? This paper reviews laboratory experiments that have had mixed results on the effects of disclosing COIs on bias and suggests that studies purporting to provide evidence of the efficacy of disclosure often lack external validity. We conclude that disclosure works more poorly than regulators hope; thus, COIs are more problematic than expected.

The info is here.

Thursday, September 17, 2020

Sensitivity to Ingroup and Outgroup Norms in the Association Between Commonality and Morality

M. R. Goldring & L. Heiphetz
Journal of Experimental Social Psychology
Volume 91, November 2020, 104025

Abstract

Emerging research suggests that people infer that common behaviors are moral and vice versa. The studies presented here investigated the role of group membership in inferences regarding commonality and morality. In Study 1, participants expected a target character to infer that behaviors that were common among their ingroup were particularly moral. However, the extent to which behaviors were common among the target character’s outgroup did not influence expectations regarding perceptions of morality. Study 2 reversed this test, finding that participants expected a target character to infer that behaviors considered moral among their ingroup were particularly common, regardless of how moral their outgroup perceived those behaviors to be. While Studies 1-2 relied on fictitious behaviors performed by novel groups, Studies 3-4 generalized these results to health behaviors performed by members of different racial groups. When answering from another person’s perspective (Study 3) and from their own perspective (Study 4), participants reported that the more common behaviors were among their ingroup, the more moral those behaviors were. This effect was significantly weaker for perceptions regarding outgroup norms, although outgroup norms did exert some effect in this real-world context. Taken together, these results highlight the complex integration of ingroup and outgroup norms in socio-moral cognition.

A pdf of the article can be found here.

In sum: Actions that are common among the ingroup are seen as particularly moral.  But actions that are common among the outgroup have little bearing on our judgments of morality.

Wednesday, September 9, 2020

Hate Trumps Love: The Impact of Political Polarization on Social Preferences

Eugen Dimant
ssrn.com
Published 4 September 20

Abstract

Political polarization has ruptured the fabric of U.S. society. The focus of this paper is to examine various layers of (non-)strategic decision-making that are plausibly affected by political polarization through the lens of one's feelings of hate and love for Donald J. Trump. In several pre-registered experiments, I document the behavioral-, belief-, and norm-based mechanisms through which perceptions of interpersonal closeness, altruism, and cooperativeness are affected by polarization, both within and between political factions. To separate ingroup-love from outgroup-hate, the political setting is contrasted with a minimal group setting. I find strong heterogeneous effects: ingroup-love occurs in the perceptional domain (how close one feels towards others), whereas outgroup-hate occurs in the behavioral domain (how one helps/harms/cooperates with others). In addition, the pernicious outcomes of partisan identity also comport with the elicited social norms. Notably, the rich experimental setting also allows me to examine the drivers of these behaviors, suggesting that the observed partisan rift might not be as forlorn as previously suggested: in the contexts studied here, the adverse behavioral impact of the resulting intergroup conflict can be attributed to one's grim expectations about the cooperativeness of the opposing faction, as opposed to one's actual unwillingness to cooperate with them.

From the Conclusion and Discussion

Along all investigated dimensions, I obtain strong effects and the following results: for one, polarization produces ingroup/outgroup differentiation in all three settings (nonstrategic, Experiment 1; strategic, Experiment 2; social norms, Experiment 3), leading participants to actively harm and cooperate less with participants from the opposing faction. For another, lack of cooperation is not the result of a categorical unwillingness to cooperate across factions, but is based on one’s grim expectations about the other’s willingness to cooperate. Importantly, however, the results also cast light on the nuance with which ingroup-love and outgroup-hate – something that the existing literature often takes as being two sides of the same coin – occur. In particular, by comparing behavior between the Trump prime and minimal group prime treatments, the results suggest that ingroup-love can be observed in terms of feeling close to one another, whereas outgroup-hate appears in the form of taking money away from and being less cooperative with each other. The elicited norms are consistent with these observations and also indicate that those who love Trump show much weaker ingroup/outgroup differentiation than those who hate Trump do.

Sunday, August 23, 2020

Suckers or Saviors? Consistent Contributors in Social Dilemmas

Weber JM, Murnighan JK.
J Pers Soc Psychol. 2008;95(6):1340-1353.
doi:10.1037/a0012454

Abstract

Groups and organizations face a fundamental problem: They need cooperation but their members have incentives to free ride. Empirical research on this problem has often been discouraging, and economic models suggest that solutions are unlikely or unstable. In contrast, the authors present a model and 4 studies that show that an unwaveringly consistent contributor can effectively catalyze cooperation in social dilemmas. The studies indicate that consistent contributors occur naturally, and their presence in a group causes others to contribute more and cooperate more often, at no apparent cost to the consistent contributor and often to their gain. These positive effects seem to result from a consistent contributor's impact on group members' cooperative inferences about group norms.
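To make the dynamic concrete, here is a toy simulation in Python. It is our illustration under stated assumptions, not the authors' model: agents are conditional cooperators who roughly match the previous round's average contribution in a repeated public goods game, and the endowment, number of rounds, matching rule, and all function names are hypothetical choices for exposition.

```python
import random

# Toy repeated public goods game: conditional cooperators roughly match
# the previous round's average contribution; an optional consistent
# contributor (CC) always contributes the full endowment.

ENDOWMENT, ROUNDS = 10.0, 10

def play(group_size: int, with_cc: bool, seed: int = 1) -> float:
    rng = random.Random(seed)
    # Start from random contributions in [0, ENDOWMENT].
    contribs = [rng.uniform(0, ENDOWMENT) for _ in range(group_size)]
    if with_cc:
        contribs[0] = ENDOWMENT
    for _ in range(ROUNDS):
        avg = sum(contribs) / group_size
        # Each conditional cooperator matches last round's average, plus noise.
        contribs = [max(0.0, min(ENDOWMENT, avg + rng.uniform(-1, 1)))
                    for _ in range(group_size)]
        if with_cc:
            contribs[0] = ENDOWMENT  # the CC never wavers
    return sum(contribs) / group_size  # final average contribution

print(f"final average contribution without CC: {play(4, with_cc=False):.1f}")
print(f"final average contribution with CC:    {play(4, with_cc=True):.1f}")
```

The fixed high contribution acts as an anchor on the group average that conditional cooperators track, which is the intuition behind the CC's catalyzing effect on inferred group norms.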

From the Discussion:

Practical Implications

These findings may also have important practical implications. Should an individual who is joining a new group take the risk and be a CC (Consistent Contributor)? The alternative is to risk being in a group without one. Even though CCs seemed to benefit economically from their actions, they also tended to get relatively little credit for their positive influence, if they got any credit at all. Thus, future research might explore how consistent contributions can be encouraged and appreciated and how people can overcome the fears that are naturally associated with becoming a CC.

These data also provide further support for Kelley and Stahelski’s (1970) observation that people consistently underestimate their roles in creating their own social environments. In particular, in the contexts that we studied here, the common characterization of self-interested choices as “strategic” or “rational” appears to be behaviorally inappropriate. Characterizing CCs as suckers may be both misleading and fallacious (see Moore & Loewenstein, 2004, p. 200). If “rational” choices maximize personal outcomes, our data suggest that the choice to be a CC can actually be rational. In this research, we examined CCs’ effects, not their motives or strategies. The data suggest that in these kinds of groups, CCs are saviors rather than suckers.

A serious impediment to the emergence of CCs is the fact that like Axelrod’s (1984) tit-for-tat players, CCs can never do better than the other members of their own groups. This means that CCs cannot do better than their exchange partners: Anyone who cooperates less, even if they ultimately move to mutual cooperation, will obtain better short-term outcomes than CCs. The common tendency to make social comparisons (Festinger, 1954) means that these outcome disparities will probably be noticed. Relatively disadvantageous outcomes are particularly noxious (e.g., Loewenstein, Thompson, & Bazerman, 1989), as is feeling exploited (e.g., Kelley & Stahelski, 1970). Thus, in the absence of formal agreements and binding contracts (which have their own problems; Malhotra & Murnighan, 2002), cooperative action can be exploited. The inclination to self-interested action may even be a common default (Moore & Loewenstein, 2004).

------------------

In essence: Economic theories often assume people look out mostly for themselves, cooperating only when punished or cajoled.  But even in anonymous experiments, some people consistently cooperate. These people also (i) perform better and (ii) inspire others to cooperate.

Friday, July 24, 2020

Developing judgments about peers' obligation to intervene

Marshall, J., Mermin-Bunnell, K., & Bloom, P.
Cognition
Volume 201, August 2020, 104215

Abstract

In some contexts, punishment is seen as an obligation limited to authority figures. In others, it is also a responsibility of ordinary citizens. In two studies with 4- to 7-year-olds (n = 232) and adults (n = 76), we examined developing judgments about whether certain individuals, either authority figures or peers, are obligated to intervene (Study 1) or to punish (Study 2) after witnessing an antisocial action. In both studies, children and adults judged authority figures as obligated to act, but only younger children judged ordinary individuals as also obligated to do so. Taken together, the present findings suggest that younger children, at least in the United States, start off viewing norm enforcement as a universal responsibility, entrusting even ordinary citizens with a duty to intervene in response to antisocial individuals. Older children and adults, though, see obligations as role-dependent—only authority figures are obligated to intervene.

The research is here.

Tuesday, July 7, 2020

Can COVID-19 re-invigorate ethics?

Louise Campbell
BMJ Blogs
Originally posted 26 May 20

The COVID-19 pandemic has catapulted ethics into the spotlight.  Questions previously deliberated about by small numbers of people interested in or affected by particular issues are now being posed with an unprecedented urgency right across the public domain.  One of the interesting facets of this development is the way in which the questions we are asking now draw attention, not just to the importance of ethics in public life, but to the very nature of ethics as practice, namely ethics as it is applied to specific societal and environmental concerns.

Some of these questions which have captured the public imagination were originally debated specifically within healthcare circles and at the level of health policy: what measures must be taken to prevent hospitals from becoming overwhelmed if there is a surge in the number of people requiring hospitalisation?  How will critical care resources such as ventilators be prioritised if need outstrips supply?  In a crisis situation, will older people or people with disabilities have the same opportunities to access scarce resources, even though they may have less chance of survival than people without age-related conditions or disabilities?  What level of risk should healthcare workers be expected to assume when treating patients in situations in which personal protective equipment may be inadequate or unavailable?   Have the rights of patients with chronic conditions been traded off against the need to prepare the health service to meet a demand which to date has not arisen?  Will the response to COVID-19 based on current evidence compromise the capacity of the health system to provide routine outpatient and non-emergency care to patients in the near future?

Other questions relate more broadly to the intersection between health and society: how do we calculate the harms of compelling entire populations to isolate themselves from loved ones and from their communities?  How do we balance these harms against the risks of giving people more autonomy to act responsibly?  What consideration is given to the fact that, in an unequal society, restrictions on liberty will affect certain social groups in disproportionate ways?  What does the catastrophic impact of COVID-19 on residents of nursing homes say about our priorities as a society and to what extent is their plight our collective responsibility?  What steps have been taken to protect marginalised communities who are at greater risk from an outbreak of infectious disease: for example, people who have no choice but to coexist in close proximity with one another in direct provision centres, in prison settings and on halting sites?

The info is here.

Wednesday, June 24, 2020

Shifting prosocial intuitions: neurocognitive evidence for a value-based account of group-based cooperation

Leor M Hackel, Julian A Wills, Jay J Van Bavel
Social Cognitive and Affective Neuroscience
nsaa055, https://doi.org/10.1093/scan/nsaa055

Abstract

Cooperation is necessary for solving numerous social issues, including climate change, effective governance and economic stability. Value-based decision models contend that prosocial tendencies and social context shape people’s preferences for cooperative or selfish behavior. Using functional neuroimaging and computational modeling, we tested these predictions by comparing activity in brain regions previously linked to valuation and executive function during decision-making—the ventromedial prefrontal cortex (vmPFC) and dorsolateral prefrontal cortex (dlPFC), respectively. Participants played Public Goods Games with students from fictitious universities, where social norms were selfish or cooperative. Prosocial participants showed greater vmPFC activity when cooperating and dlPFC-vmPFC connectivity when acting selfishly, whereas selfish participants displayed the opposite pattern. Norm-sensitive participants showed greater dlPFC-vmPFC connectivity when defying group norms. Modeling expectations of cooperation was associated with activity near the right temporoparietal junction. Consistent with value-based models, this suggests that prosocial tendencies and contextual norms flexibly determine whether people prefer cooperation or defection.
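As a concrete reading of the value-based account, the sketch below (our illustration, not the authors' fitted model; all weights, parameter values, and names are hypothetical) computes the subjective value of cooperating in a one-shot public goods game as a weighted sum of one's own payoff, others' payoffs, and norm congruence, then converts the value difference into a choice probability with a softmax rule.

```python
import math

# Schematic value-based decision rule: the value of cooperating integrates
# one's own payoff, others' payoffs, and alignment with the group norm;
# choice is stochastic in that value (softmax).

def choice_prob_cooperate(w_self, w_other, w_norm, norm_is_cooperative,
                          endowment=10.0, multiplier=1.6, group_size=4,
                          temperature=1.0):
    # Payoffs in a one-shot public goods game, assuming everyone else defects.
    share = endowment * multiplier / group_size
    v_coop = (w_self * (share - endowment)          # own net payoff
              + w_other * share * (group_size - 1)  # benefit to others
              + w_norm * (1.0 if norm_is_cooperative else -1.0))
    v_defect = w_self * 0.0 + w_norm * (-1.0 if norm_is_cooperative else 1.0)
    # Softmax over the two options reduces to a logistic in the value gap.
    z = (v_coop - v_defect) / temperature
    return 1.0 / (1.0 + math.exp(-z))

# A prosocial participant (high w_other) values cooperation highly;
# a norm-sensitive participant (high w_norm) flips with the group norm.
print(choice_prob_cooperate(w_self=1.0, w_other=0.6, w_norm=0.5,
                            norm_is_cooperative=True))   # high probability
print(choice_prob_cooperate(w_self=1.0, w_other=0.1, w_norm=2.0,
                            norm_is_cooperative=False))  # near zero
```

In this toy form, raising w_other or making the norm cooperative raises the integrated value of cooperation, mirroring the paper's claim that prosocial tendencies and contextual norms jointly determine preferences for cooperation or defection.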

From the Discussion section

The current research further indicates that norms shape cooperation. Participants who were most attentive to norms aligned their behavior with norms and showed greater right dlPFC-vmPFC connectivity when deviating from norms, whereas the least attentive participants showed the reverse pattern. Curiously, we found no clear evidence that decisions to conform were more valued than decisions to deviate. This conflicts with work suggesting social norms boost the value of norm compliance (Nook and Zaki, 2015). Instead, our findings suggest that norm compliance can also stem from increased functional connectivity between vmPFC and dlPFC.

The research is here.

Monday, June 15, 2020

The dual evolutionary foundations of political ideology

S. Claessens, K. Fischer, and others
PsyArXiv
Originally published 18 June 19

Abstract

What determines our views on taxation and crime, healthcare and religion, welfare and gender roles? And why do opinions about these seemingly disparate aspects of our social lives coalesce the way they do? Research over the last 50 years has suggested that political attitudes and values around the globe are shaped by two ideological dimensions, often referred to as economic and social conservatism. However, it remains unclear why this ideological structure exists. Here, we highlight the striking concordance between these two dimensions of ideology and two key aspects of human sociality: cooperation and group conformity. Humans cooperate to a greater degree than our great ape relatives, paying personal costs to benefit others. Humans also conform to group-wide social norms and punish norm violators in interdependent, culturally marked groups. Together, these two shifts in sociality are posited to have driven the emergence of large-scale complex human societies. We argue that fitness trade-offs and behavioural plasticity have maintained strategic individual differences in both cooperation and group conformity, naturally giving rise to the two dimensions of political ideology. Supported by evidence from psychology, behavioural genetics, behavioural economics, and primatology, this evolutionary framework promises novel insight into the biological and cultural basis of political ideology.

The research is here.

Thursday, June 4, 2020

A Value-Based Framework for Understanding Cooperation

Pärnamets, P., Shuster, A., Reinero, D. A.,
& Van Bavel, J. J. (2020)
Current Directions in Psychological Science. 
https://doi.org/10.1177/0963721420906200

Abstract

Understanding the roots of human cooperation, a social phenomenon embedded in pressing issues including climate change and social conflict, requires an interdisciplinary perspective. We propose a unifying value-based framework for understanding cooperation that integrates neuroeconomic models of decision-making with psychological variables involved in cooperation. We propose that the ventromedial prefrontal cortex serves as a neural integration hub for value computation during cooperative decisions, receiving inputs from various neurocognitive processes such as attention, memory, and learning. Next, we describe findings from social and personality psychology highlighting factors that shape the value of cooperation, including research on contexts and norms, personal and social identity, and intergroup relations. Our approach advances theoretical debates about cooperation by highlighting how previous findings are accommodated within a general value-based framework and offers novel predictions.
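In the spirit of this framework, the vmPFC integration step can be written schematically (our notation, not the authors' formalism) as

$$V(a) = w_{\text{self}}\,\pi_{\text{self}}(a) + w_{\text{other}}\,\pi_{\text{other}}(a) + w_{\text{norm}}\,N(a),$$

where $\pi_{\text{self}}$ and $\pi_{\text{other}}$ are the material payoffs of option $a$ for oneself and for others, $N(a)$ indexes congruence with contextual norms and social identity, and the weights $w$ are shaped by inputs such as attention, memory, and learning; the option with the higher integrated value $V$ is (stochastically) chosen.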

The paper is here.


Monday, February 17, 2020

Religion’s Impact on Conceptions of the Moral Domain

S. Levine, and others
PsyArXiv Preprints
Last edited 2 Jan 20

Abstract

How does religious affiliation impact conceptions of the moral domain? Putting aside the question of whether people from different religions agree about how to answer moral questions, here we investigate a more fundamental question: How much disagreement is there across religions about which issues count as moral in the first place? That is, do people from different religions conceptualize the scope of morality differently? Using a new methodology to map out how individuals conceive of the moral domain, we find dramatic differences among adherents of different religions. Mormons and Muslims moralize their religious norms, while Jews do not. Hindus do not seem to make a moral/non-moral distinction at all. These results suggest that religious affiliation has a profound effect on conceptions of the scope of morality.

From the General Discussion:

The results of Study 3 and 3a are predicted by neither Social Domain Theory nor Moral Foundations Theory: It is neither true that secular people and religious people share a common conception of the moral domain (as Social Domain Theory argues), nor that religious morality is expanded beyond secular morality in a uniform manner (as Moral Foundations Theory suggests). When participants in a group did make a moral/non-moral distinction, there was broad agreement that norms related to harm, justice, and rights count as moral norms. However, some religious individuals (such as the Mormon and Muslim participants) also moralized norms from their own religion that are not related to these themes. Meanwhile, others (such as the Jewish participants) acknowledged the special status of their own norms but did not moralize them. Yet others (such as the Hindu participants) made no distinction between the moral and the non-moral.

The research is here.

Thursday, December 19, 2019

Where AI and ethics meet

Stephen Fleischresser
Cosmos Magazine
Originally posted 18 Nov 19

Here is an excerpt:

His first argument concerns common aims and fiduciary duties, the duties in which trusted professionals, such as doctors, place others’ interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients and Mittelstadt argues that it is a “defining quality of a profession for its practitioners to be part of a ‘moral community’ with common aims, values and training”.

For the field of AI research, however, the same cannot be said. “AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts,” Mittelstadt writes. “The fundamental aims of developers, users and affected parties do not necessarily align.”

Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.

“AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests,” he writes. In AI research, “public interests are not granted primacy over commercial interests”.

In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, “AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks”.

Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism, which provides fuller and more satisfactory ethical guidance.

The info is here.

Wednesday, November 27, 2019

Corruption Is Contagious: Dishonesty begets dishonesty, rapidly spreading unethical behavior through a society

Dan Ariely & Ximena Garcia-Rada
Scientific American
September 2019

Here is an excerpt:

This is because social norms—the patterns of behavior that are accepted as normal—impact how people will behave in many situations, including those involving ethical dilemmas. In 1991 psychologists Robert B. Cialdini, Carl A. Kallgren and Raymond R. Reno drew the important distinction between descriptive norms—the perception of what most people do—and injunctive norms—the perception of what most people approve or disapprove of. We argue that both types of norms influence bribery.

Simply put, knowing that others are paying bribes to obtain preferential treatment (a descriptive norm) makes people feel that it is more acceptable to pay a bribe themselves.

Similarly, thinking that others believe that paying a bribe is acceptable (an injunctive norm) will make people feel more comfortable when accepting a bribe request. Bribery becomes normative, affecting people's moral character.

In 2009 Ariely, with behavioral researchers Francesca Gino and Shahar Ayal, published a study showing how powerful social norms can be in shaping dishonest behavior. In two lab studies, they assessed the circumstances in which exposure to others' unethical behavior would change someone's ethical decision-making. Group membership turned out to have a significant effect: When individuals observed an in-group member behaving dishonestly (a student with a T-shirt suggesting he or she was from the same school cheating in a test), they, too, behaved dishonestly. In contrast, when the person behaving dishonestly was an out-group member (a student with a T-shirt from the rival school), observers acted more honestly.

But social norms also vary from culture to culture: What is acceptable in one culture might not be acceptable in another. For example, in some societies giving gifts to clients or public officials demonstrates respect for a business relationship, whereas in other cultures it is considered bribery. Similarly, gifts for individuals in business relationships can be regarded either as lubricants of business negotiations, in the words of behavioral economists Michel André Maréchal and Christian Thöni, or as questionable business practices. And these expectations and rules about what is accepted are learned and reinforced by observation of others in the same group. Thus, in countries where individuals regularly learn that others are paying bribes to obtain preferential treatment, they determine that paying bribes is socially acceptable. Over time the line between ethical and unethical behavior becomes blurry, and dishonesty becomes the “way of doing business.”

The info is here.

Tuesday, September 17, 2019

When do we punish people who don’t?

Martin, J., Jordan, J., Rand, D., & Cushman, F.
(2019). Cognition, 193, 104040.
https://doi.org/10.1016/j.cognition.2019.104040

Abstract

People often punish norm violations. In what cases is such punishment viewed as normative—a behavior that we “should” or even “must” engage in? We approach this question by asking when people who fail to punish a norm violator are, themselves, punished. (For instance, a boss who fails to punish transgressive employees might, herself, be fired). We conducted experiments exploring the contexts in which higher-order punishment occurs, using both incentivized economic games and hypothetical vignettes describing everyday situations. We presented participants with cases in which an individual fails to punish a transgressor, either as a victim (second-party) or as an observer (third-party). Across studies, we consistently observed higher-order punishment of non-punishing observers. Higher-order punishment of non-punishing victims, however, was consistently weaker, and sometimes non-existent. These results demonstrate the selective application of higher-order punishment, provide a new perspective on the psychological mechanisms that support it, and provide some clues regarding its function.

The research can be found here.

Tuesday, July 9, 2019

Does psychology have a conflict-of-interest problem?

Tom Chivers
Nature
Originally published July 2, 2019

Here is an excerpt:

But other psychologists say they think personal speaking fees ought to be declared. There is no suggestion that any scientists are deliberately skewing their results to maintain their speaking income. But critics say that lax COI disclosure norms could create problems by encouraging some scientists to play down — perhaps unconsciously — findings that contradict their arguments, and could lead them to avoid declaring other conflicts. “A lot of researchers don’t know where to draw the line [on COIs],” says Chris Chambers, a psychologist at the University of Cardiff, UK, who is an editor for five journals, including one on psychology. “And because there are no norms they gravitate to saying nothing.”

Researchers who spoke to Nature about their concerns say they see the issue as connected to psychology’s greater need for self-scrutiny because of some high-profile cases of misconduct, as well as to broader concerns about the reproducibility of results. “Even the appearance of an undisclosed conflict of interest can be damaging to the credibility of psychological science,” says Scott Lilienfeld, the editor-in-chief of Clinical Psychological Science (CPS), which published papers of Twenge’s in 2017 and 2018. “The heuristic should be ‘when in doubt, declare’,” he says (although he added that he did not have enough information to judge Twenge’s non-disclosures in CPS). Psychology, he adds, needs to engage in a “thoroughgoing discussion of what constitutes a conflict of interest, and when and how such conflicts should be disclosed”.

The info is here.

Thursday, May 9, 2019

The moral behavior of ethics professors: A replication-extension in German-speaking countries

Philipp Schönegger & Johannes Wagner
(2019) Philosophical Psychology, 32:4, 532-559
DOI: 10.1080/09515089.2019.1587912

Abstract

What is the relation between ethical reflection and moral behavior? Does professional reflection on ethical issues positively impact moral behaviors? To address these questions, Schwitzgebel and Rust empirically investigated whether philosophy professors engaged with ethics on a professional basis behave morally better, or at least more consistently with their expressed values, than non-ethicist professors do. Findings from their original US-based sample indicated that neither is the case, suggesting that there is no positive influence of ethical reflection on moral action. In the study at hand, we attempted to cross-validate this pattern of results in the German-speaking countries and surveyed 417 professors using a replication-extension research design. Our results indicate a successful replication of the original effect: ethicists do not behave morally better than other academics across the vast majority of normative issues. Yet, unlike the original study, we found mixed results on normative attitudes generally. On some issues, ethicists and philosophers even expressed more lenient attitudes. However, one issue on which ethicists not only held stronger normative attitudes but also reported better corresponding moral behaviors was vegetarianism.

Friday, April 19, 2019

Leader's group-norm violations elicit intentions to leave the group – If the group-norm is not affirmed

Lara Ditrich, Adrian Lüders, Eva Jonas, & Kai Sassenberg
Journal of Experimental Social Psychology
Available online 2 April 2019

Abstract

Group members, even central ones like group leaders, do not always adhere to their group's norms and show norm-violating behavior instead. Observers of this kind of behavior have been shown to react negatively in such situations, and in extreme cases, may even leave their group. The current work set out to test how this reaction might be prevented. We assumed that group-norm affirmations can buffer leaving intentions in response to group-norm violations and tested three potential mechanisms underlying the buffering effect of group-norm affirmations. To this end, we conducted three experiments in which we manipulated group-norm violations and group-norm affirmations. In Study 1, we found group-norm affirmations to buffer leaving intentions after group-norm violations. However, we did not find support for the assumption that group-norm affirmations change how a behavior is evaluated or preserve group members' identification with their group. Thus, neither of these variables can explain the buffering effect of group-norm affirmations. Studies 2 & 3 revealed that group-norm affirmations instead reduce perceived effectiveness of the norm-violator, which in turn predicted lower leaving intentions. The present findings will be discussed based on previous research investigating the consequences of norm violations.

The research is here.

Tuesday, February 19, 2019

How Our Attitude Influences Our Sense Of Morality

Konrad Bocian
Science Trends
Originally posted January 18, 2019

Here is an excerpt:

People think that their moral judgment is as rational and objective as scientific statements, but science does not confirm that belief. Within the last two decades, scholars interested in moral psychology discovered that people produce moral judgments based on fast and automatic intuitions rather than on rational and controlled reasoning. For example, moral cognition research showed that moral judgments arise in approximately 250 milliseconds, and even then we are often unable to explain them. Developmental psychologists proved that babies as young as 3 months, who do not yet have any language skills, can distinguish a good protagonist (a helping one) from a bad one (a hindering one). But this does not mean that people’s moral judgments are based solely on intuitions. We can use deliberative processes when conditions are favorable – when we are both motivated to engage in and capable of conscious responding.

When we imagine how we would morally judge other people in a specific situation, we refer to actual rules and norms. If the laws are violated, the act itself is immoral. But we forget that intuitive reasoning also plays a role in forming a moral judgment. It is easy to condemn the librarian when our interest is involved only on paper, but the whole picture changes when real money is on the table. We have known that rule for a very long time, but we still forget to use it when we predict our moral judgments.

Based on previous research on the intuitive nature of moral judgment, we decided to test how far our attitudes can impact our perception of morality. In our daily life, we meet a lot of people who are to some degree familiar, and we either have a positive or negative attitude toward these people.

The info is here.