Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Asymmetry. Show all posts

Saturday, June 10, 2023

Generative AI entails a credit–blame asymmetry

Porsdam Mann, S., Earp, B. D., et al. (2023).
Nature Machine Intelligence.

The recent releases of large language models (LLMs), including OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA, and Google’s Bard, have garnered substantial global attention, leading to calls for urgent community discussion of the ethical issues involved. LLMs generate text by representing and predicting statistical properties of language. Optimized for statistical patterns and linguistic form rather than for truth or reliability, these models cannot assess the quality of the information they use.

Recent work has highlighted ethical risks that are associated with LLMs, including biases that arise from training data; environmental and socioeconomic impacts; privacy and confidentiality risks; the perpetuation of stereotypes; and the potential for deliberate or accidental misuse. We focus on a distinct set of ethical questions concerning moral responsibility—specifically blame and credit—for LLM-generated
content. We argue that different responsibility standards apply to positive and negative uses (or outputs) of LLMs and offer preliminary recommendations. These include: calls for updated guidance from policymakers that reflect this asymmetry in responsibility standards; transparency norms; technology goals; and the establishment of interactive forums for participatory debate on LLMs.‌


Credit–blame asymmetry may lead to achievement gaps

Since the Industrial Revolution, automating technologies have made workers redundant in many industries, particularly in agriculture and manufacturing. The recent assumption has been that creatives and knowledge workers would remain much less affected by these changes in the near-to-mid-term future. Advances in LLMs challenge this premise.

How these trends will impact human workforces is a key but unresolved question. The spread of AI-based applications and tools such as LLMs will not necessarily replace human workers; it may simply
shift them to tasks that complement the functions of the AI. This may decrease opportunities for human beings to distinguish themselves or excel in workplace settings. Their future tasks may involve supervising or maintaining LLMs that produce the sorts of outputs (for example, text or recommendations) that skilled human beings were previously producing and for which they were receiving credit. Consequently, work in a world relying on LLMs might often involve ‘achievement gaps’ for human beings: good, useful outcomes will be produced, but many of them will not be achievements for which human workers and professionals can claim credit.

This may result in an odd state of affairs. If responsibility for positive and negative outcomes produced by LLMs is asymmetrical as we have suggested, humans may be justifiably held responsible for negative outcomes created, or allowed to happen, when they or their organizations make use of LLMs. At the same time, they may deserve less credit for AI-generated positive outcomes, as they may not be displaying the skills and talents needed to produce text, exerting judgment to make a recommendation, or generating other creative outputs.

Sunday, May 28, 2023

Above the law? How motivated moral reasoning shapes evaluations of high performer unethicality

Campbell, E. M., Welsh, D. T., & Wang, W. (2023).
Journal of Applied Psychology.
Advance online publication.


Recent revelations have brought to light the misconduct of high performers across various fields and occupations who were promoted up the organizational ladder rather than punished for their unethical behavior. Drawing on principles of motivated moral reasoning, we investigate how employee performance biases supervisors’ moral judgment of employee unethical behavior and how supervisors’ performance-focus shapes how they account for moral judgments in promotion recommendations. We test our model in three studies: a field study of 587 employees and their 124 supervisors at a Fortune 500 telecom company, an experiment with two samples of working adults, and an experiment that directly varied explanatory mechanisms. Evidence revealed a moral double standard such that supervisors rendered less punitive judgment of the unethical acts of higher performing employees. In turn, supervisors’ bottom-line mentality (i.e., fixation on achieving results) influenced the degree to which they incorporated their punitive judgments into promotability considerations. By revealing the moral leniency afforded to higher performers and the uneven consequences meted out by supervisors, our results carry implications for behavioral ethics research and for organizations seeking to retain and promote their higher performers while also maintaining ethical standards that are applied fairly across employees.

Here is the opening:

Allegations of unethical conduct perpetrated by prominent, high-performing professionals have been exploding across newsfeeds (Zacharek et al., 2017). From customer service employees and their managers (e.g., Wells Fargo fake accounts; Levitt & Schoenberg, 2020), to actors, producers, and politicians (e.g., long-term corruption of Belarus’ president; Simmons, 2020), to reporters and journalists (e.g., the National Broadcasting Company’s alleged cover-up; Farrow, 2019), to engineers and executives (e.g., Volkswagen’s emissions fraud; Vlasic, 2017), the public has been repeatedly shocked by the egregious behaviors committed by individuals recognized as high performers within their respective fields (Bennett, 2017). 

In the wake of such widespread unethical, corrupt, and exploitative behavior, many have wondered how supervisors could have systematically ignored the conduct of high-performing individuals for so long while they ascended organizational ladders. How could such misconduct have resulted in their advancement to leadership roles rather than stalled or derailed the transgressors’ careers?

The story of Carlos Ghosn at Nissan hints at why and when individuals’ unethical behavior (i.e., lying, cheating, and stealing; Treviño et al., 2006, 2014) may result in less punitive judgment (i.e., the extent to which observed behavior is morally evaluated as negative, incorrect, or inappropriate). During his 30-year career in the automotive industry, Ghosn differentiated himself as a high performer known for effective cost-cutting, strategic planning, and spearheading change; however, in 2018, he fell from grace over allegations of years of financial malfeasance and embezzlement (Leggett, 2019). When allegations broke, Nissan’s CEO stood firm in his punitive judgment that Ghosn’s behavior “cannot be tolerated by the company” (Kageyama, 2018). Still, many questioned why the executives levied judgment on the misconduct that they had overlooked for years. Tokyo bureau chief of the New York Times, Motoko Rich, reasoned that Ghosn “probably would have continued to get away with it … if the company was continuing to be successful. But it was starting to slow down. There were signs that the magic had gone” (Barbaro, 2019). Similarly, an executive pointed squarely to the relevance of Ghosn’s performance, lamenting: “what [had he] done for us lately?” (Chozick & Rich, 2018). As a high performer, Ghosn evaded punitive judgment and career consequences from Nissan executives for his unethical behavior, but their motivation to judge that behavior leniently seemed to wane with his level of performance. In her reporting, Rich observed: “you can get away with whatever you want as long as you’re successful. And once you’re not so successful anymore, then all that rule-breaking and brashness doesn’t look so attractive and appealing anymore” (Barbaro, 2019).

Wednesday, April 26, 2023

A Prosociality Paradox: How Miscalibrated Social Cognition Creates a Misplaced Barrier to Prosocial Action

Epley, N., Kumar, A., Dungan, J., &
Echelbarger, M. (2023).
Current Directions in Psychological Science,
32(1), 33–41. 


Behaving prosocially can increase well-being among both those performing a prosocial act and those receiving it, and yet people may experience some reluctance to engage in direct prosocial actions. We review emerging evidence suggesting that miscalibrated social cognition may create a psychological barrier that keeps people from behaving as prosocially as would be optimal for both their own and others’ well-being. Across a variety of interpersonal behaviors, those performing prosocial actions tend to underestimate how positively their recipients will respond. These miscalibrated expectations stem partly from a divergence in perspectives, such that prosocial actors attend relatively more to the competence of their actions, whereas recipients attend relatively more to the warmth conveyed. Failing to fully appreciate the positive impact of prosociality on others may keep people from behaving more prosocially in their daily lives, to the detriment of both their own and others’ well-being.

Undervaluing Prosociality

It may not be accidental that William James (1896/1920) named “the craving to be appreciated” as “the deepest principle in human nature” only after receiving a gift of appreciation that he described as “the first time anyone ever treated me so kindly.” “I now perceive one immense omission in my [Principles of Psychology],” he wrote regarding the importance of appreciation. “I left it out altogether . . . because I had never had it gratified till now” (p. 33).

James does not seem to be unique in failing to recognize the positive impact that appreciation can have on recipients. In one experiment (Kumar & Epley, 2018, Experiment 1), MBA students thought of a person they felt grateful to, but to whom they had not yet expressed their appreciation. The students, whom we refer to as expressers, wrote a gratitude letter to this person and then reported how they expected the recipient would feel upon receiving it: how surprised the recipient would be to receive the letter, how surprised the recipient would be about the content, how negative or positive the recipient would feel, and how awkward the recipient would feel. Expressers willing to do so then provided recipients’ email addresses so the recipients could be contacted to report how they actually felt receiving their letter. Although expressers recognized that the recipients would feel positive, they did not recognize just how positive the recipients would feel: Expressers underestimated how surprised the recipients would be to receive the letter, how surprised the recipients would be by its content, and how positive the recipients would feel, whereas they overestimated how awkward the recipients would feel. Table 1 shows the robustness of these results across an additional published experiment and 17 subsequent replications (see Fig. 1 for overall results; full details are available at OSF: osf.io/7wndj/). Expressing gratitude has a reliably more positive impact on recipients than expressers expect.


How much people genuinely care about others has been debated for centuries. In summarizing the purely selfish viewpoint endorsed by another author, Thomas Jefferson (1854/2011) wrote, “I gather from his other works that he adopts the principle of Hobbes, that justice is founded in contract solely, and does not result from the construction of man.” Jefferson felt differently: “I believe, on the contrary, that it is instinct, and innate, that the moral sense is as much a part of our constitution as that of feeling, seeing, or hearing . . . that every human mind feels pleasure in doing good to another” (p. 39).

Such debates will never be settled by simply observing human behavior because prosociality is not simply produced by automatic “instinct” or “innate” disposition, but rather can be produced by complicated social cognition (Miller, 1999). Jefferson’s belief that people feel “pleasure in doing good to another” is now well supported by empirical evidence. However, the evidence we reviewed here suggests that people may avoid experiencing this pleasure not because they do not want to be good to others, but because they underestimate just how positively others will react to the good being done to them.

Saturday, March 19, 2022

The Content of Our Character

Brown, Teneille R.
Available at SSRN: https://ssrn.com/abstract=3665288


The rules of evidence assume that jurors can ignore most character evidence, but the data are clear. Jurors simply cannot *not* make character inferences. We are so driven to use character to assess blame that we will spontaneously infer traits based on whatever limited information is available. In fact, within just 0.1 seconds of meeting someone, we have already decided if we think they are intelligent, trustworthy, likable, or kind--based just on the person’s face. This is a completely unregulated source of evidence, and yet it predicts teaching evaluations, electoral success, and even sentencing decisions. Given the pervasive and unintentional nature of “spontaneous trait inferences” (STIs), they are not susceptible to mitigation through jury instructions. However, recognizing that witnesses will be viewed as more or less trustworthy based just on their face, the rules of evidence must permit more character evidence, rather than less. This article harnesses undisputed findings from social psychology to propose a reversal of the ban on character evidence, in favor of a strong presumption against admissibility for immoral traits only. This removes a great deal from the rule’s crosshairs and re-tethers it to its normative roots. My proposal does not rely on the gossamer-thin distinction between propensity and non-propensity uses, because once jurors hear about past-act evidence, they will subconsciously draw an impermissible character inference. However, in some cases this might not be unfairly prejudicial, and may even be necessary for justice. The critical contribution of this article is that while shielding jurors from character evidence has noble origins, it also has unintended, negative consequences. When jurors cannot hear about how someone acted in the past, they will instead rely on immutable facial features—connected to racist, sexist, and classist stereotypes—to draw character inferences that are even more inaccurate and unfair.

Here is a section:

Moral Character Impacts Ratings of Intent

Previous models of intentionality held that for an act to be considered intentional, three things had to be present. The actor must have believed that an action would result in a particular outcome, desired this outcome, and had full awareness of his behavior. Research now challenges this account, “showing that individuals attribute intentions to others even (and largely) in the absence of these components.” Even where an actor could not have acted otherwise, and thus was coerced to kill, study participants found the actor to be more morally responsible for an act if he “identified” with it, meaning that he desired the compelled outcome. These findings do not fit with our typical model of blame, which requires freedom to act in order to assign responsibility. However, they make sense if we adopt a character-based approach to blame. We are quick to infer a bad character and intent when there is very little evidence of it.

An example of this is the hindsight bias called the “praise-blame asymmetry,” where people blame actors for accidental bad outcomes that they caused but did not intend, but do not praise people for accidental good outcomes that they likewise caused but did not intend. The classic example is the CEO who considers a development project that will increase profits. The CEO is agnostic to the project’s environmental effects and gives it the go-ahead. If the project’s outcome turns out to harm the environment, people say the CEO intended the bad outcome and they blame him for it. However, if instead the project turns out to benefit the environment, the CEO receives no praise. Our folk conception of intentionality is tied to morality and aversion to negative outcomes. If a foreseen outcome is negative, people will attribute intentionality to the decision-maker, but not if the foreseen outcome is positive; the overattribution of intent only seems to cut one way. Mens rea ascriptions are “sensitive to moral valence . . . . If the outcome is negative, foreknowledge standardly suffices for people to ascribe intentionality.” This effect has been found not just in laypeople, but also in French judges. If an action is considered immoral, then our emotional reaction to it can bias mental state ascriptions.

Tuesday, February 15, 2022

How do people use ‘killing’, ‘letting die’ and related bioethical concepts? Contrasting descriptive and normative hypotheses

Rodríguez-Arias, D., et al. (2020).
Bioethics, 34(5).


Bioethicists involved in end-of-life debates routinely distinguish between ‘killing’ and ‘letting die’. Meanwhile, previous work in cognitive science has revealed that when people characterize behaviour as either actively ‘doing’ or passively ‘allowing’, they do so not purely on descriptive grounds, but also as a function of the behaviour’s perceived morality. In the present report, we extend this line of research by examining how medical students and professionals (N = 184) and laypeople (N = 122) describe physicians’ behaviour in end-of-life scenarios. We show that the distinction between ‘ending’ a patient’s life and ‘allowing’ it to end arises from morally motivated causal selection. That is, when a patient wishes to die, her illness is treated as the cause of death and the doctor is seen as merely allowing her life to end. In contrast, when a patient does not wish to die, the doctor’s behaviour is treated as the cause of death and, consequently, the doctor is described as ending the patient’s life. This effect emerged regardless of whether the doctor’s behaviour was omissive (as in withholding treatment) or commissive (as in applying a lethal injection). In other words, patient consent shapes causal selection in end-of-life situations, and in turn determines whether physicians are seen as ‘killing’ patients, or merely as ‘enabling’ their death.

From the Discussion

Across three cases of end-of-life intervention, we find convergent evidence that moral appraisals shape behavior description (Cushman et al., 2008) and causal selection (Alicke, 1992; Kominsky et al., 2015). Consistent with the deontic hypothesis, physicians who behaved according to patients’ wishes were described as allowing the patient’s life to end. In contrast, physicians who disregarded the patient’s wishes were described as ending the patient’s life. Additionally, patient consent appeared to inform causal selection: the doctor was seen as the cause of death when disregarding the patient’s will, but the illness was seen as the cause of death when the doctor had obeyed the patient’s will.

Whether the physician’s behavior was omissive or commissive did not play a comparable role in behavior description or causal selection. First, these effects were weaker than those of patient consent. Second, while the effects of consent generalized to medical students and professionals, the effects of commission arose only among lay respondents. In other words, medical students and professionals treated patient consent as the sole basis for the doing/allowing distinction.

Taken together, these results confirm that doing and allowing serve a fundamentally evaluative purpose (in line with the deontic hypothesis, and Cushman et al., 2008), and only secondarily serve a descriptive purpose, if at all.

Saturday, February 12, 2022

Privacy and digital ethics after the pandemic

Carissa Véliz
Nature Electronics
Vol. 4, January 2021, pp. 10–11.

The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big. Technophiles have often argued that if users are unhappy about online services, they can always opt out. But opting out has not felt like a meaningful alternative for years, for at least two reasons.

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet, and Facebook has shadow profiles on netizens even when they have never had an account on the platform.


Reasons for optimism

Despite the concerning trends regarding privacy and digital ethics during the pandemic, there are reasons to be cautiously optimistic about the future. First, citizens around the world are increasingly suspicious of tech companies, and are gradually demanding more from them. Second, there is a growing awareness that the lack of privacy ingrained in current apps entails a national security risk, which can motivate governments into action. Third, US President Joe Biden seems eager to collaborate with the international community, in contrast to his predecessor. Fourth, regulators in the US are seriously investigating how to curtail tech’s power, as evidenced by the Department of Justice’s antitrust lawsuit against Google and the Federal Trade Commission’s (FTC) antitrust lawsuit against Facebook. Amazon and YouTube have also been targeted by the FTC for a privacy investigation. With discussions of a federal privacy law becoming more common in the US, it would not be surprising to see such a development in the next few years. Tech regulation in the US could have significant ripple effects elsewhere.

Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent SM, Reich BJ, Skorinko JLM. 
Personality and Social Psychology Bulletin. 


People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s intentional action (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so.

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020.


Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.


• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• The deliberating agent earns more blame than the implementing one for a bad outcome.

• The asymmetry in blame vs. praise holds for the decider but not the implementer.

• The asymmetry in blame vs. praise suggests only the decider is judged as a moral agent.

• The effect is attenuated if the decider’s job is primarily to implement.

Saturday, October 10, 2020

A Theory of Moral Praise

Anderson, R. A., Crockett, M. J., & Pizarro, D.
Trends in Cognitive Sciences
Volume 24, Issue 9, September 2020, pp. 694–703.


How do people judge whether someone deserves moral praise for their actions? In contrast to the large literature on moral blame, work on how people attribute praise has, until recently, been scarce. However, there is a growing body of recent work from a variety of subfields in psychology (including social, cognitive, developmental, and consumer) suggesting that moral praise is a fundamentally unique form of moral attribution and not simply the positive moral analogue of blame attributions. A functional perspective helps explain asymmetries in blame and praise: we propose that while blame is primarily for punishment and signaling one’s moral character, praise is primarily for relationship building.

Concluding Remarks

Moral praise, we have argued, is a psychological response that, like other forms of moral judgment, serves a particular functional role in establishing social bonds, encouraging cooperative alliances, and promoting good behavior. Through this lens, seemingly perplexing asymmetries between judgments of blame for immoral acts and judgments of praise for moral acts can be understood as consistent with the relative roles, and associated costs, of these two kinds of moral judgments. While both blame and praise judgments require that an agent played some causal and intentional role in the act being judged, praise appears to be less sensitive to these features and more sensitive to general features of an individual’s stable, underlying character traits. In other words, we believe that the growth of studies on moral praise in the past few years demonstrates that, when deciding whether or not doling out praise is justified, individuals seem to care less about how the action was performed and far more about what kind of person performed the action. We suggest that future research on moral attribution should seek to complement the rich literature examining moral blame by examining potentially unique processes engaged in moral praise, guided by an understanding of their differing costs and benefits, as well as their potentially distinct functional roles in social life.

Friday, September 25, 2020

Science can explain other people’s minds, but not mine: self-other differences in beliefs about science

André Mata, Cláudia Simão, & Rogério Gouveia (2020).
Self and Identity. DOI: 10.1080/15298868.2020.1791950


Four studies show that people differ in their lay beliefs concerning the degree to which science can explain their mind and the minds of other people. In particular, people are more receptive to the idea that the psychology of other people is explainable by science than to the possibility of science explaining their own psychology. This self-other difference is moderated by the degree to which people associate a certain mental phenomenon with introspection. Moreover, this self-other difference has implications for the science-recommended products and practices that people choose for themselves versus others.

General discussion

These studies suggest that people have different beliefs regarding what science can explain about the way they think versus the way other people think. Study 1 showed that, in general, people see science as better able to explain the psychology of other people than their own, and that this is particularly the case when a certain psychological phenomenon is highly associated with introspection (though there were other significant moderators in this study, and results were not consistent across dependent variables). Study 2 replicated this interaction, whereby science is seen as having a greater explanatory power for other people than for oneself, but only when introspection is involved. Whereas Studies 1–2 provided correlational evidence, Study 3 provided an experimental test of the role of introspection in self-other differences in thinking about science and what it can explain. The results lent clear support to those of the previous studies: for highly introspective phenomena, people believe that science is better at making sense of others than of themselves, whereas this self-other difference disappears when introspection is not thought to be involved. Finally, Study 4 demonstrated that this self-other difference has implications in terms of the choices that people make for themselves and how they differ from the choices that they advise others to make. In particular, people are more reluctant to try certain products and procedures targeted at areas of their mental life that are highly associated with introspection, but they are less reluctant to advise other people to try those same products and procedures. Lending additional support to the role of introspection in generating this self-other difference, this choice-advice asymmetry was not observed for areas that were not associated with introspection.

Wednesday, January 1, 2020

Companies Are Judged More Harshly For Their Ethical Failures If The CEO Is A Woman

Emily Reynolds
British Psychological Society
Originally published 19 November 2019

Gender inequality in the business world has been much discussed over the last few years, with a host of mentoring schemes, grants, business books and political activity all aimed at getting women into leadership positions.

But what happens when this goal is achieved? According to new research, unequal gender dynamics still prevail even at the very top. Nicole Votolato Montgomery and Amanda P. Cowen from the University of Virginia found that women CEOs are judged far more harshly than their male counterparts when a business fails ethically. However, when a failure is down to incompetence rather than ethics, women receive less negative backlash than men.


The team suggests that highlighting such traits in female leaders can "reduce the penalties for female-led organisations". But others argue that women leaders shouldn't give in to the pressure of adopting typically "male" traits, and that being helpful and community-focused are actually positive qualities to bring to the boardroom. Leaning into stereotypes may not be the best long-term way to break them, but either way, it's clear there's still a way to go for women in business.

The info is here.

Friday, November 30, 2018

The Knobe Effect From the Perspective of Normative Orders

Andrzej Waleszczyński, Michał Obidziński, & Julia Rejewska
Studia Humana, Volume 7:4 (2018), pp. 9–15


The characteristic asymmetry in the attribution of intentionality for causing side effects, known as the Knobe effect, is considered to be a stable feature of human cognition. This article examines whether the way of thinking about and analysing one scenario may affect the other, and whether the mutual relationship between the ways in which both scenarios are analysed may affect the stability of the Knobe effect. The theoretical analyses and empirical studies presented are based on a distinction between moral and non-moral normativity, which may affect the judgments made in both scenarios. An essential role in judgments about the intentionality of causing a side effect could therefore be played by normative competences responsible for distinguishing between normative orders.

The research is here.

Sunday, June 24, 2018

Moral hindsight for good actions and the effects of imagined alternatives to reality

Ruth M.J. Byrne and Shane Timmons
Volume 178, September 2018, Pages 82–91


Five experiments identify an asymmetric moral hindsight effect for judgments about whether a morally good action should have been taken, e.g., whether Ann should run into traffic to save Jill, who fell before an oncoming truck. Judgments are increased when the outcome is good (Jill sustained minor bruises), as Experiment 1 shows; but they are not decreased when the outcome is bad (Jill sustained life-threatening injuries), as Experiment 2 shows. The hindsight effect is modified by imagined alternatives to the outcome: judgments are amplified by a counterfactual that if the good action had not been taken, the outcome would have been worse, and diminished by a semi-factual that if the good action had not been taken, the outcome would have been the same. Hindsight modification occurs when the alternative is presented with the outcome, and also when participants have already committed to a judgment based on the outcome, as Experiments 3A and 3B show. The hindsight effect occurs not only for judgments in life-and-death situations but also in other domains, such as sports, as Experiment 4 shows. The results are consistent with a causal-inference explanation of moral judgment and count against an aversive-emotion one.

• Judgments a morally good action should be taken are increased when it succeeds.
• Judgments a morally good action should be taken are not decreased when it fails.
• Counterfactuals that the outcome would have been worse amplify judgments.
• Semi-factuals that the outcome would have been the same diminish judgments.
• The asymmetric moral hindsight effect supports a causal-inference theory.

The research is here.