Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Norms. Show all posts

Tuesday, June 11, 2024

Morals Versus Ethics: Building An Organizational Culture Of Trust And Transparency

Pamela Furr
Forbes.com
Originally posted 6 May 24

Here are two excerpts:

Prioritize Transparency And Integrity

Our team is a diverse mix of ages, cultures, races and backgrounds, and we all bring unique experiences and perspectives to the table. If a colleague says or does something that doesn’t sit right with you, take a moment to pause, process and then approach them. Share how you felt in the moment—this can be as simple as saying, “My feelings were hurt when you did that” or “I didn’t think the language you used earlier was appropriate.” Give them the opportunity to explain or apologize before gossiping with coworkers or silently holding onto resentments. Trust each other to have open, honest conversations, and you can often defuse conflicts before they escalate.

(cut)

Build A Sense Of Community

Set the tone for open dialogue and mutual respect in your organization. By modeling these values in your interactions with others, you can inspire your team to uphold the same standards. Foster a culture in which you advocate for yourself and others and try to learn from others as well. Approach things you don’t understand with a spirit of curiosity and compassion, assuming positive intent until proven otherwise. Ask questions, and truly seek to understand someone else’s point of view.

I believe that an essential part of being a leader is ensuring that our employees feel safe, protected and heard when they come to work. We can work to hold external governing boards accountable to the standards they set, but we can also do everything in our power to create a culture of trust, transparency and accountability within our own organizations.


Here is my summary:

The article discusses the difference between morals and ethics. Morals are personal beliefs and values that guide our actions, while ethics are a set of rules established by a community or governing body.

The author describes a situation where a trainee made a false sexual harassment claim against her mentor. The certifying board refused to take any action because they saw it as an employment contract issue. The author argues that governing boards should take a stronger stance in upholding ethics within their professions.

The article concludes with the author's thoughts on creating an ethical and transparent workplace culture. The author emphasizes the importance of open communication, understanding policies and procedures, and building a sense of community. By following these principles, organizations can create a safe and supportive environment for their employees.

Saturday, February 17, 2024

What Stops People From Standing Up for What’s Right?

Julie Sasse
Greater Good
Originally published 17 Jan 24

Here is an excerpt:

How can we foster moral courage?

Every person can try to become more morally courageous. However, it does not have to be a solitary effort. Instead, institutions such as schools, companies, or social media platforms play a significant role. So, what are concrete recommendations to foster moral courage?
  • Establish and strengthen social and moral norms: With a solid understanding of what we consider right and wrong, it becomes easier to detect wrongdoings. Institutions can facilitate this process by identifying and modeling fundamental values. For example, norms and values expressed by teachers can be important points of reference for children and young adults.
  • Overcome uncertainty: If it is unclear whether someone’s behavior is wrong, witnesses should feel comfortable inquiring, for example by asking other bystanders how they judge the situation or asking a potential victim whether they are all right.
  • Contextualize anger: In the face of wrongdoings, anger should not be suppressed since it can provide motivational fuel for intervention. Conversely, if someone expresses anger, it should not be diminished as irrational but considered a response to something unjust. 
  • Provide and advertise reporting systems: By providing reporting systems, institutions relieve witnesses from the burden of selecting and evaluating individual means of intervention and reduce the need for direct confrontation.
  • Show social support: If witnesses directly confront a perpetrator, others should be motivated to support them to reduce risks.
We see that there are several ways to make moral courage less difficult, but they do require effort from individuals and institutions. Why is that effort worth it? Because if more individuals are willing and able to show moral courage, more wrongdoings would be addressed and rectified—and that could help us to become a more responsible and just society.


Main points:
  • Moral courage is the willingness to stand up for what's right despite potential risks.
  • It's rare because of factors such as the complexity of the internal process, situational barriers, and the difficulty of seeing the long-term benefits.
  • Key stages involve noticing a wrongdoing, interpreting it as wrong, feeling responsible, believing in your ability to intervene, and accepting potential risks.
  • Personality traits and situational factors influence these stages.

Wednesday, November 29, 2023

A justification-suppression model of the expression and experience of prejudice

Crandall, C. S., & Eshleman, A. (2003).
Psychological Bulletin, 129(3), 414–446.
https://doi.org/10.1037/0033-2909.129.3.414

Abstract

The authors propose a justification-suppression model (JSM), which characterizes the processes that lead to prejudice expression and the experience of one's own prejudice. They suggest that "genuine" prejudices are not directly expressed but are restrained by beliefs, values, and norms that suppress them. Prejudices are expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices. The same process accounts for which prejudices are accepted into the self-concept. The JSM is used to organize the prejudice literature, and many empirical findings are recharacterized as factors affecting suppression or justification, rather than directly affecting genuine prejudice. The authors discuss the implications of the JSM for several topics, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.


This is an oldie, but goodie!!  Here is my summary:

This article is about prejudice and the factors that influence its expression. The authors propose a justification-suppression model (JSM) to explain how prejudice is expressed. The JSM suggests that people have genuine prejudices that are not directly expressed. Instead, these prejudices are suppressed by people’s beliefs, values, and norms. Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.

The authors also discuss the implications of the JSM for prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Here are some key takeaways from the article:
  • Prejudice is a complex phenomenon that is influenced by a variety of factors, including individual beliefs, values, and norms, as well as social and cultural contexts.
  • People may have genuine prejudices that they do not directly express. These prejudices may be suppressed by people’s beliefs, values, and norms.
  • Prejudice is expressed when justifications (e.g., attributions, ideologies, stereotypes) release suppressed prejudices.
  • The JSM can be used to explain a wide range of findings on prejudice, including prejudice measurement, ambivalence, and the distinction between prejudice and its expression.

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethic Theory Moral Prac 26, 361–375 (2023).

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.


Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when it is difficult or costly. Moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.
The authors argue that moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us to identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Friday, May 19, 2023

What’s wrong with virtue signaling?

Hill, J., Fanciullo, J. 
Synthese 201, 117 (2023).

Abstract

A novel account of virtue signaling and what makes it bad has recently been offered by Justin Tosi and Brandon Warmke. Despite plausibly vindicating the folk's conception of virtue signaling as a bad thing, their account has recently been attacked by both Neil Levy and Evan Westra. According to Levy and Westra, virtue signaling actually supports the aims and progress of public moral discourse. In this paper, we rebut these recent defenses of virtue signaling. We suggest that virtue signaling only supports the aims of public moral discourse to the extent it is an instance of a more general phenomenon that we call norm signaling. We then argue that, if anything, virtue signaling will undermine the quality of public moral discourse by undermining the evidence we typically rely on from the testimony and norm signaling of others. Thus, we conclude, not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation.

Conclusion

In this paper, we have challenged two recent defenses of virtue signaling. Whereas Levy ascribes a number of good features to virtue signaling—its providing higher-order evidence for the truth of certain moral judgments, its helping us delineate groups of reliable moral cooperators, and its not involving any hypocrisy on the part of its subject—it seems these good features are ascribable to virtue signaling ultimately and only because they are good features of norm signaling, and virtue signaling entails norm signaling. Similarly, whereas Westra suggests that virtue signaling uniquely benefits public moral discourse by supporting moral progress in a way that mere norm signaling does not, it seems virtue signaling also uniquely harms public moral discourse by supporting moral regression in a way that mere norm signaling does not. It therefore seems that in each case, to the extent it differs from norm signaling, virtue signaling simply isn’t needed.

Moreover, we have suggested that, if anything, virtue signaling will undermine the higher order evidence we typically can and should rely on from the testimony of others. Virtue signaling essentially involves a motivation that aims at affecting public moral discourse but that does not aim at the truth. When virtue signaling is rampant—when we are aware that this ulterior motive is common among our peers—we should give less weight to the higher-order evidence provided by the testimony of others than we otherwise would, on pain of double counting evidence and falling for unwarranted confidence. We conclude, therefore, that not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation. 

Friday, March 17, 2023

Rational learners and parochial norms

Partington, S. Nichols, S., & Kushnir, T.
Cognition
Volume 233, April 2023, 105366

Abstract

Parochial norms are narrow in social scope, meaning they apply to certain groups but not to others. Accounts of norm acquisition typically invoke tribal biases: from an early age, people assume a group's behavioral regularities are prescribed and bounded by mere group membership. However, another possibility is rational learning: given the available evidence, people infer the social scope of norms in statistically appropriate ways. With this paper, we introduce a rational learning account of parochial norm acquisition and test a unique prediction that it makes. In one study with adults (N = 480) and one study with children ages 5- to 8-years-old (N = 120), participants viewed violations of a novel rule sampled from one of two unfamiliar social groups. We found that adults' judgments of social scope – whether the rule applied only to the sampled group (parochial scope), or other groups (inclusive scope) – were appropriately sensitive to the relevant features of their statistical evidence (Study 1). In children (Study 2) we found an age difference: 7- to 8-year-olds used statistical evidence to infer that norms were parochial or inclusive, whereas 5- to 6-year-olds were overall inclusive regardless of statistical evidence. A Bayesian analysis shows a possible inclusivity bias: adults and children inferred inclusive rules more frequently than predicted by a naïve Bayesian model with unbiased priors. This work highlights that tribalist biases in social cognition are not necessary to explain the acquisition of parochial norms.

From the General discussion

The widespread prevalence of parochial norms across history and cultures has led some to suggest parochialism is itself a human universal (Clark et al., 2019; Greene, 2013), in part owing to evolved, group-based biases in social norm acquisition (Chalik & Rhodes, 2020; Chudek & Henrich, 2011; Roberts et al., 2017). In this paper, we investigated whether a rational learning process can also explain this phenomenon. In Study 1, we found that adults can acquire distinctions of social scope in a statistically appropriate manner, and this finding was robust across two forms of measurement (rule judgments and open response). In Study 2, older children displayed the adult-like statistical sensitivity in their rule judgments, and even younger children did so in their open responses. Computational analyses suggest that rule judgments were inclusively biased: compared to an unbiased Bayesian learner, children tended to assume that novel rules apply to everyone in a candidate population. Adults also displayed an inclusive bias, albeit to a lesser extent than children.

Broadly, these findings suggest that rational learning processes can indeed explain the acquisition of parochial norms and highlight an important sense in which children's norm learning can be biased in the opposite direction of tribalism. At the least, the finding that children and adults are inclusively biased serves as an existence proof that deep-rooted tribal biases in social learning are not necessary to explain the acquisition of parochial norms. Rather, if children and adults are rational learners, they can acquire a parochial norm when presented with evidence that is consistent with parochialism. However, tribalism can still play a role in norm acquisition, for example, by influencing the sort of evidence that adults seek out, or the evidence to which children are exposed.
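The rational-learning account above can be made concrete with a toy Bayesian update. This is my own illustrative sketch, not the authors' actual model: a learner weighs a "parochial" hypothesis (the rule governs only the sampled group) against an "inclusive" one (it governs everyone), after seeing violations that all came from a single group. The function name and parameters are assumptions for illustration.

```python
# Toy Bayesian inference over a rule's social scope (illustrative sketch,
# not the paper's actual model). Hypothesis 1: the rule is parochial
# (applies only to the sampled group). Hypothesis 2: it is inclusive.

def posterior_inclusive(n_violations, p_sample_group=0.5,
                        prior_inclusive=0.5):
    """Posterior probability that the rule is inclusive after observing
    n_violations violations, all committed by members of one group.

    Under the parochial hypothesis, every violation must come from the
    governed group (likelihood 1 per observation). Under the inclusive
    hypothesis, a violation happens to come from that particular group
    with probability p_sample_group per observation.
    """
    like_parochial = 1.0 ** n_violations          # always 1.0
    like_inclusive = p_sample_group ** n_violations
    prior_parochial = 1.0 - prior_inclusive
    numer = like_inclusive * prior_inclusive
    denom = numer + like_parochial * prior_parochial
    return numer / denom

# With uniform priors and equal-sized groups, each same-group violation
# halves the odds on inclusivity: 0.5 -> 1/3 -> 0.2 -> ...
```

An "inclusivity bias" of the kind the authors describe would correspond to a learner whose judgments stay more inclusive than this unbiased posterior predicts (e.g., a prior_inclusive well above 0.5).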

Saturday, January 28, 2023

The pervasive impact of ignorance

Kirfel, L., & Phillips, J.
Cognition
Volume 231, February 2023, 105316

Abstract

Norm violations have been demonstrated to impact a wide range of seemingly non-normative judgments. Among other things, when agents' actions violate prescriptive norms they tend to be seen as having done those actions more freely, as having acted more intentionally, as being more of a cause of subsequent outcomes, and even as being less happy. The explanation of this effect continues to be debated, with some researchers appealing to features of actions that violate norms, and other researchers emphasizing the importance of agents' mental states when acting. Here, we report the results of two large-scale experiments that replicate and extend twelve of the studies that originally demonstrated the pervasive impact of norm violations. In each case, we build on the pre-existing experimental paradigms to additionally manipulate whether the agents knew that they were violating a norm while holding fixed the action done. We find evidence for a pervasive impact of ignorance: the impact of norm violations on non-normative judgments depends largely on the agent knowing that they were violating a norm when acting. Moreover, we find evidence that the reduction in the impact of normality is underpinned by people's counterfactual reasoning: people are less likely to consider an alternative to the agent's action if the agent is ignorant. We situate our findings in the wider debate around the role of normality in people's reasoning.

General discussion

Studies show that norm violations influence a wide range of domains, including judgments of causation, freedom, happiness, doing vs. allowing, mental state ascriptions, and modal claims. A continuing debate centers on why normality has such a pervasive impact, and whether one should attempt to offer a unified explanation of these various effects (Hindriks, 2014). In this study, we found evidence that the epistemic state of norm-violating agents plays a fundamental role in the impact of norms on non-normative judgments. Across a wide range of intuitive judgments and highly different manipulations of an agent's knowledge, we found that the impact of normality on non-normative judgments was diminished when the agent did not know that they were violating a norm. More precisely, the agent's knowledge of the norm violation determined the extent to which abnormal actions increased judgments of causation, decreased attributions of force, increased attributions of intentional action, and so on. In other words, the impact of ignorance appears to be as pervasive as the impact of normality itself. In addition, our study showed that the agent's epistemic state also influenced the extent to which people engage in reasoning about alternatives to the agent's action. If the agent was ignorant when they violated a norm, people were less inclined to consider what the agent could have done differently.

At the broadest level, the current results provide evidence that the pervasive impact of normality likely warrants a unified explanation at some level: we considered a specific feature that had been shown to moderate the impact of normality in one domain (causation) and demonstrated that this same feature of the impact of normality can be found across a wide range of other domains. This finding suggests that the impact of norms arises from a shared underlying mechanism that is recruited across domains. Specific accounts may, of course, seek to incorporate agents' epistemic states into their respective theory of how normality influences judgments in one particular domain. However, such an approach will miss out on a generalization and will necessarily be less parsimonious. Accordingly, we turn now to considering two broad approaches to offering a unified account of the pervasive impact of ignorance.

Wednesday, January 11, 2023

How neurons, norms, and institutions shape group cooperation

Van Bavel, J. J., Pärnamets, P., Reinero, D. A., 
& Packer, D. (2022, April 7).
https://doi.org/10.1016/bs.aesp.2022.04.004

Abstract

Cooperation occurs at all stages of human life and is necessary for small groups and large-scale societies alike to emerge and thrive. This chapter bridges research in the fields of cognitive neuroscience, neuroeconomics, and social psychology to help understand group cooperation. We present a value-based framework for understanding cooperation, integrating neuroeconomic models of decision-making with psychological and situational variables involved in cooperative behavior, particularly in groups. According to our framework, the ventromedial prefrontal cortex serves as a neural integration hub for value computation during cooperative decisions, receiving inputs from various neuro-cognitive processes such as attention, affect, memory, and learning. We describe factors that directly or indirectly shape the value of cooperation decisions, including cultural contexts and social norms, personal and social identity, and intergroup relations. We also highlight the role of economic, social, and cultural institutions in shaping cooperative behavior. We discuss the implications for future research on cooperation.

(cut)

Social Institutions

Trust production is crucial for fostering cooperation (Zucker, 1986). We have already discussed two forms of trust production above: the trust and resulting cooperation that develops from experience with and knowledge about individuals, and trust based on social identities. The third form of trust production is institution-based, in which formal mechanisms or processes are used to foster trust (and that do not rely on personal characteristics, a history of exchange, or identity characteristics). At the societal level, trust-supporting institutions include governments, corporate structures, criminal and civil legal systems, contract law and property rights, insurance, and stock markets. When they function effectively, institutions allow for broader cooperation, helping people extend trust beyond other people they know or know of and, crucially, also beyond the boundaries of their in-groups (Fabbri, 2022; Hruschka & Henrich, 2013; Rothstein & Stolle, 2008; Zucker, 1986). Conversely, when these sorts of structures do not function well, “institutional distrust strips away a basic sense that one is protected from exploitation, thus reducing trust between strangers, which is at the core of functioning societies” (van Prooijen, Spadaro, & Wang, 2022).

When strangers from different cultural backgrounds have to interact, their interactions often lack the interpersonal or group-level trust necessary for cooperation. For instance, reliance on tightly-knit social networks, where everyone knows everyone, is often impossible in larger, more diverse environments. Communities can compensate by relying more on group-based trust. For example, banks may loan money primarily within separate kin or ethnic groups (Zucker, 1986). However, the disruption of homogeneous social networks, combined with the increasing need to cooperate across group boundaries, creates incentives to develop and participate in broader sets of institutions. Institutions can facilitate cooperation, and individuals prefer institutions that help regulate interactions and foster trust.

People often may seek to build institutions embodying principles, norms, rules, or procedures that foster group-based cooperation. In turn, these institutions shape decisions by altering the value people place on cooperative decisions. One study, for instance, examined these institutional and psychological dynamics over 30 rounds of a public goods game (Gürerk, Irlenbusch & Rockenbach, 2006). Every round had three stages. First, participants chose whether they wanted to play that round with or without a “sanctioning institution” that would provide a means of rewarding or punishing other players based on their behavior in the game. Second, they played the public goods game with (and only with) other participants who had selected the same institutional structure for that round. After making their decisions (to contribute to the common pool), they then saw how much everyone else in their institutional context had contributed. Third, participants who had opted to play the round with a sanctioning institution could choose, for a price, to punish or reward other players.
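To make that three-stage round structure concrete, here is a minimal toy simulation of a single round. This is my own sketch: the endowment, multiplier, punishment costs, and the "punish the lowest contributor" rule are illustrative assumptions, not the actual parameters or findings of Gürerk et al. (2006).

```python
# Toy sketch of one round of a public goods game with an optional
# sanctioning institution (illustrative parameters, my assumptions).
ENDOWMENT = 20
MULTIPLIER = 1.6     # assumed public-good multiplier
PUNISH_COST = 1      # assumed cost paid by each punisher
PUNISH_IMPACT = 3    # assumed loss imposed on the punished player

def play_round(players):
    """players: list of dicts with 'wants_sanctions' (bool) and
    'contribution' (0..ENDOWMENT). Returns {player index: payoff}."""
    # Stage 1: players sort themselves by chosen institution.
    groups = {True: [], False: []}
    for i, p in enumerate(players):
        groups[p["wants_sanctions"]].append(i)

    payoffs = {}
    for sanctioning, members in groups.items():
        if not members:
            continue
        # Stage 2: public goods game, played only within the institution.
        pool = sum(players[i]["contribution"] for i in members)
        share = pool * MULTIPLIER / len(members)
        for i in members:
            payoffs[i] = ENDOWMENT - players[i]["contribution"] + share
        # Stage 3: only the sanctioning institution permits punishment.
        # Toy rule: every other member punishes the lowest contributor.
        if sanctioning and len(members) > 1:
            target = min(members, key=lambda i: players[i]["contribution"])
            for i in members:
                if i != target:
                    payoffs[i] -= PUNISH_COST
                    payoffs[target] -= PUNISH_IMPACT
    return payoffs
```

Under this toy rule, free-riding is profitable in the sanction-free institution but costly in the sanctioning one, which is the basic tension the study's participants navigated when choosing an institution each round.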

Tuesday, December 20, 2022

Doesn't everybody jaywalk? On codified rules that are seldom followed and selectively punished

Wylie, J., & Gantman, A.
Cognition, Volume 231, February 2023, 105323

Abstract

Rules are meant to apply equally to all within their jurisdiction. However, some rules are frequently broken without consequence for most. These rules are only occasionally enforced, often at the discretion of a third-party observer. We propose that these rules—whose violations are frequent, and enforcement is rare—constitute a unique subclass of explicitly codified rules, which we call ‘phantom rules’ (e.g., proscribing jaywalking). Their apparent punishability is ambiguous and particularly susceptible to third-party motives. Across six experiments (N = 1,440), we validated the existence of phantom rules and found evidence for their motivated enforcement. First, people played a modified Dictator Game with a novel frequently broken and rarely enforced rule (i.e., a phantom rule). People enforced this rule more often when the “dictator” was selfish (vs. fair) even though the rule only proscribed fractional offers (not selfishness). Then we turned to third person judgments of the U.S. legal system. We found these violations are recognizable to participants as both illegal and commonplace (Experiment 2), differentiable from violations of prototypical laws (Experiment 3) and enforced in a motivated way (Experiments 4a and 4b). Phantom rule violations (but not prototypical legal violations) are seen as more justifiably punished when the rule violator has also violated a social norm (vs. rule violation alone)—unless the motivation to punish has been satiated (Experiment 5). Phantom rules are frequently broken, codified rules. Consequently, their apparent punishability is ambiguous, and their enforcement is particularly susceptible to third party motives.

General Discussion

In this paper, we identified a subset of rules, which are explicitly codified (e.g., in professional tennis, in an economic game, by the U.S. legal system), frequently violated, and rarely enforced. As a result, their apparent punishability is particularly ambiguous and subject to motivation. These rules show us that codified rules, which are meant to apply equally to all, can be used to sanction behaviors outside of their jurisdiction. We named this subclass of rules phantom rules and found evidence that people enforce them according to their desire to punish a different behavior (i.e., a social norm violation), recognize them in the U.S. legal system, and employ motivated reasoning to determine their punishability. We hypothesized and found, across behavioral and survey experiments, that phantom rules—rules where the descriptive norms of enforcement are low—seem enforceable, punishable, and legitimate only when one has an external active motivation to punish. Indeed, we found that phantom rules were judged to be more justifiably enforced and more morally wrong to violate when the person who broke the rule had also violated a social norm—unless they were also punished for that social norm violation. Together, we take this as evidence of the existence of phantom rules and the malleability of their apparent punishability via active (vs. satiated) punishment motivation.

The ambiguity of phantom rule enforcement makes it possible for them to serve a hidden function; they can be used to punish behavior outside of the purview of the official rules. Phantom rule violations are technically wrong, but on average, seen as less morally wrong. This means, for the most part, that people are unlikely to feel strongly when they see these rules violated, and indeed, people frequently violate phantom rules without consequence. This pattern fits well with previous work in experimental philosophy showing that motivations can affect how we reason about what constitutes breaking a rule in the first place. For example, when rule breaking occurs blamelessly (e.g., unintentionally), people are less likely to say a rule was violated at all and look for reasons to excuse the behavior (Turri, 2019; Turri & Blouw, 2015). Indeed, our findings mirror this pattern. People find a reason to punish phantom rule violations only when they are particularly or dispositionally motivated to punish.

Monday, March 21, 2022

Confidence and gradation in causal judgment

O'Neill, K., Henne, P., et al.
Cognition
Volume 223, June 2022, 105036

Abstract

When comparing the roles of the lightning strike and the dry climate in causing the forest fire, one might think that the lightning strike is more of a cause than the dry climate, or one might think that the lightning strike completely caused the fire while the dry conditions did not cause it at all. Psychologists and philosophers have long debated whether such causal judgments are graded; that is, whether people treat some causes as stronger than others. To address this debate, we first reanalyzed data from four recent studies. We found that causal judgments were actually multimodal: although most causal judgments made on a continuous scale were categorical, there was also some gradation. We then tested two competing explanations for this gradation: the confidence explanation, which states that people make graded causal judgments because they have varying degrees of belief in causal relations, and the strength explanation, which states that people make graded causal judgments because they believe that causation itself is graded. Experiment 1 tested the confidence explanation and showed that gradation in causal judgments was indeed moderated by confidence: people tended to make graded causal judgments when they were unconfident, but they tended to make more categorical causal judgments when they were confident. Experiment 2 tested the causal strength explanation and showed that although confidence still explained variation in causal judgments, it did not explain away the effects of normality, causal structure, or the number of candidate causes. Overall, we found that causal judgments were multimodal and that people make graded judgments both when they think a cause is weak and when they are uncertain about its causal role.

From the General Discussion

The current paper sought to address two major questions regarding singular causal judgments: are causal judgments graded, and if so, what explains this gradation?

(cut)

In other words, people make graded causal judgments both when they think a cause is weak and also when they are uncertain about their causal judgment. Although work is needed to determine precisely why and when causal judgments are influenced by confidence, we have demonstrated that these effects are separable from more well-studied effects on causal judgment. This is good news for theories of causal judgment that rely on the causal strength explanation: these theories do not need to account for the effects of confidence on causal judgment to be useful in explaining other effects. That is, there is no need for major revisions in how we think about causal judgments. Nevertheless, we think our results have important implications for these theories, which we outline below.

Monday, March 14, 2022

Can you be too moral?


Tim Dean
TEDx Sydney

Interesting introduction to moral certainty and dichotomous thinking.

One of the biggest challenges of our time is not people without morals, according to philosopher Tim Dean. It is often those with unwavering moral convictions who are the most dangerous. Tim challenges us to change the way we think about morality, right and wrong, to be more adaptable in order to solve the new and emerging problems of our modern lives. Tim Dean is a Sydney-based philosopher and science writer. He is the author of How We Became Human, a book about how our evolved moral minds are out of step with the modern world. He has a Doctorate in philosophy from the University of New South Wales on the evolution of morality and has expertise in ethics, philosophy of biology and critical thinking. 

Sunday, February 20, 2022

The Pervasive Impact of Ignorance

Kirfel, L., & Phillips, J. S. 
(2022, January 16). 
https://doi.org/10.31234/osf.io/xbrnj

Abstract

Norm violations have been demonstrated to impact a wide range of seemingly non-normative judgments. Among other things, when agents' actions violate prescriptive norms they tend to be seen as having done those actions more freely, as having acted more intentionally, as being more of a cause of subsequent outcomes, and even as being less happy. The explanation of this effect continues to be debated, with some researchers appealing to features of actions that violate norms, and other researchers emphasizing the importance of agents' mental states when acting. Here, we report the results of two large-scale experiments that replicate and extend twelve of the studies that originally demonstrated the pervasive impact of norm violations. In each case, we build on the pre-existing experimental paradigms to additionally manipulate whether the agents knew that they were violating a norm while holding fixed the action done. We find evidence for a pervasive impact of ignorance: the impact of norm violations on non-normative judgments depends largely on the agent knowing that they were violating a norm when acting. Moreover, we find evidence that the reduction in the impact of normality is underpinned by people's counterfactual reasoning: people are less likely to consider an alternative to the agent’s action if the agent is ignorant. We situate our findings in the wider debate around the role of normality in people's reasoning.

General Discussion

Motivated Moral Cognition

On the one hand, blame-based accounts may try to use this discovery to their advantage by arguing that an agent's knowledge is directly relevant to whether they should be blamed (Cushman et al., 2008; Cushman, Sheketoff, Wharton, & Carey, 2013; Laurent, Nuñez, & Schweitzer, 2015; Yuill & Perner, 1988), and thus that these effects reflect that the impact of normality arises from the motivation to blame or hold agents responsible for their actions (Alicke & Rose, 2012; Livengood et al., 2017; Samland & Waldmann, 2016). For example, the tendency to report that agents who bring about harm acted intentionally may serve to corroborate people's desire to judge the agent's behaviour negatively (Nadelhoffer, 2004; Rogers et al., 2019). Motivated accounts differ in terms of exactly which moral judgment is argued to be at stake, i.e., whether norm violations elicit a desire to punish (Clark et al., 2014), to blame (Alicke & Rose, 2012; Hindriks et al., 2016), to hold accountable (Samland & Waldmann, 2016) or responsible (Sytsma, 2020a), and whether its influence works in the form of a cognitive bias (Alicke, 2000) or a more affective response (Nadelhoffer, 2004). Common to all, however, is the assumption that it is the impetus to morally condemn the norm-violating agent that underlies exaggerated attributions of specific properties, from free will to intentional action.

Our study puts an important constraint on how the normative judgment that motivated reasoning accounts assume might work. To account for our findings, motivated accounts cannot generally appeal to whether an agent's action violated a clear norm, but have to take into account whether people would, all things considered, blame the agent (Driver, 2017). In that sense, the mere violation of a norm must not, itself, suffice to trigger the relevant blame response. Rather, the perception of this norm violation must occur in conjunction with an assessment of the epistemic state of the agent, such that the relevant motivated reasoning is only elicited when the agent is aware of the immorality of their action. For example, Alicke and Rose's (2012) Culpable Control Model holds that immediate negative evaluative reactions to an agent's behaviour often cause people to interpret all other agential features in a way that justifies blaming the agent. Such accounts face a challenge. On the one hand, they seem committed to the idea that people should discount the agent's ignorance to support their immediate negative evaluation of the harm-causing actions. On the other hand, they need to account for the fact that people seem to be sensitive to fine-grained epistemic features of the agent when forming their negative evaluation of the harm-causing action.

Sunday, January 2, 2022

Towards a Theory of Justice for Artificial Intelligence

Iason Gabriel
Forthcoming in Daedalus vol. 151,
no. 2, Spring 2022

Abstract 

This paper explores the relationship between artificial intelligence and principles of distributive justice. Drawing upon the political philosophy of John Rawls, it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens' rights, and promote substantively fair outcomes -- something that requires specific attention be paid to the impact they have on the worst-off members of society.

Here is the conclusion:

Second, the demand for public justification in the context of AI deployment may well extend beyond the basic structure. As Langdon Winner argues, when the impact of a technology is sufficiently great, this fact is, by itself, sufficient to generate a free-standing requirement that citizens be consulted and given an opportunity to influence decisions.  Absent such a right, citizens would cede too much control over the future to private actors – something that sits in tension with the idea that they are free and equal. Against this claim, it might be objected that it extends the domain of political justification too far – in a way that risks crowding out room for private experimentation, exploration, and the development of projects by citizens and organizations. However, the objection rests upon the mistaken view that autonomy is promoted by restricting the scope of justificatory practices to as narrow a subject matter as possible. In reality this is not the case: what matters for individual liberty is that practices that have the potential to interfere with this freedom are appropriately regulated so that infractions do not come about. Understood in this way, the demand for public justification stands in opposition not to personal freedom but to forms of unjust imposition.

The call for justice in the context of AI is well-founded. Looked at through the lens of distributive justice, key principles that govern the fair organization of our social, political, and economic institutions also apply to AI systems that are embedded in these practices. One major consequence of this is that liberal and egalitarian norms of justice apply to AI tools and services across a range of contexts. When they are integrated into society's basic structure, these technologies should support citizens' basic liberties, promote fair equality of opportunity, and provide the greatest benefit to those who are worst-off. Moreover, deployments of AI outside of the basic structure must still be compatible with the institutions and values that justice requires. There will always be valid reasons, therefore, to consider the relationship of technology to justice when it comes to the deployment of AI systems.

Friday, December 24, 2021

It's not what you did, it's what you could have done

Bernhard, R. M., LeBaron, H., & Phillips, J. S. 
(2021, November 8).

Abstract

We are more likely to judge agents as morally culpable after we learn they acted freely rather than under duress or coercion. Interestingly, the reverse is also true: Individuals are more likely to be judged to have acted freely after we learn that they committed a moral violation. Researchers have argued that morality affects judgments of force by making the alternative actions the agent could have done instead appear comparatively normal, which then increases the perceived availability of relevant alternative actions. Across four studies, we test the novel predictions of this account. We find that the degree to which participants view possible alternative actions as normal strongly predicts their perceptions that an agent acted freely. This pattern holds both for perceptions of descriptive normality (whether the actions are unusual) and prescriptive normality (whether the actions are good) and persists even when what is actually done is held constant. We also find that manipulating the prudential value of alternative actions, or the degree to which alternatives adhere to social norms, has a similar effect to manipulating whether the actions or their alternatives violate moral norms, and that both effects are explained by changes in the perceived normality of the alternatives. Finally, we even find that evaluations of both the prescriptive and descriptive normality of alternative actions explain force judgments in response to moral violations. Together, these results suggest that across contexts, participants' force judgments depend not on the morality of the actual action taken, but on the normality of possible alternatives. More broadly, our results build on prior work that suggests a unifying role of normality and counterfactuals across many areas of high-level human cognition.

(cut)

Why does descriptive normality matter for force judgments?

Our results also suggest that the descriptive normality of alternatives may be at least as important as the prescriptive normality. Why would this be the case? One possibility is that evaluations of the descriptive normality of alternatives may be influencing participants' perceptions of the alternatives' value. After all, actions that are taken by most people are often taken because they are the best choice. Likewise, morally wrong actions are much less commonplace than morally neutral or good ones. Therefore, participants may be inferring some kind of lower prescriptive value inherent in unusual actions, even in cases where we took great lengths to eliminate differences in prescriptive value.

Friday, December 10, 2021

How social relationships shape moral wrongness judgments

Earp, B.D., McLoughlin, K.L., Monrad, J.T. et al. 
Nat Commun 12, 5776 (2021).

Abstract

Judgments of whether an action is morally wrong depend on who is involved and the nature of their relationship. But how, when, and why social relationships shape moral judgments is not well understood. We provide evidence to address these questions, measuring cooperative expectations and moral wrongness judgments in the context of common social relationships such as romantic partners, housemates, and siblings. In a pre-registered study of 423 U.S. participants nationally representative for age, race, and gender, we show that people normatively expect different relationships to serve cooperative functions of care, hierarchy, reciprocity, and mating to varying degrees. In a second pre-registered study of 1,320 U.S. participants, these relationship-specific cooperative expectations (i.e., relational norms) enable highly precise out-of-sample predictions about the perceived moral wrongness of actions in the context of particular relationships. In this work, we show that this ‘relational norms’ model better predicts patterns of moral wrongness judgments across relationships than alternative models based on genetic relatedness, social closeness, or interdependence, demonstrating how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.

From the General Discussion

From a theoretical perspective, one aspect of our current account that requires further attention is the reciprocity function. In contrast with the other three functions considered, relationship-specific prescriptions for reciprocity did not significantly predict moral judgments for reciprocity violations. Why might this be so? One possibility is that the model we tested did not distinguish between two different types of reciprocity. In some relationships, such as those between strangers, acquaintances, or individuals doing business with one another, each party tracks the specific benefits contributed to, and received from, the other. In these relationships, reciprocity thus takes a tit-for-tat form in which benefits are offered and accepted on a highly contingent basis. This type of reciprocity is transactional, in that resources are provided, not in response to a real or perceived need on the part of the other, but rather, in response to the past or expected future provision of a similarly valued resource from the cooperation partner. In this, it relies on an explicit accounting of who owes what to whom, and is thus characteristic of so-called “exchange” relationships.

In other relationships, by contrast, such as those between friends, family members, or romantic partners – so-called “communal” relationships – reciprocity takes a different form: that of mutually expected responsiveness to one another’s needs. In this form of reciprocity, each party tracks the other’s needs (rather than specific benefits provided) and strives to meet these needs to the best of their respective abilities, in proportion to the degree of responsibility each has assumed for the other’s welfare. Future work on moral judgments in relational context should distinguish between these two types of reciprocity: that is, mutual care-based reciprocity in communal relationships (when both partners have similar needs and abilities) and tit-for-tat reciprocity between “transactional” cooperation partners who have equal standing or claim on a resource.

Wednesday, September 29, 2021

A new framework for the psychology of norms

Westra, E., & Andrews, K. (2021, July 9).

Abstract

Social Norms – rules that dictate which behaviors are appropriate, permissible, or obligatory in different situations for members of a given community – permeate all aspects of human life. Many researchers have sought to explain the ubiquity of social norms in human life in terms of the psychological mechanisms underlying their acquisition, conformity, and enforcement. Existing theories of the psychology of social norms appeal to a variety of constructs, from prediction-error minimization, to reinforcement learning, to shared intentionality, to evolved psychological adaptations. However, most of these accounts share what we call the psychological unity assumption, which holds that there is something psychologically distinctive about social norms, and that social norm adherence is driven by a single system or process. We argue that this assumption is mistaken. In this paper, we propose a methodological and conceptual framework for the cognitive science of social norms that we call normative pluralism. According to this framework, we should treat norms first and foremost as a community-level pattern of social behavior that might be realized by a variety of different cognitive, motivational, and ecological mechanisms. Norm psychologists should not presuppose that social norms are underpinned by a unified set of processes, nor that there is anything particularly distinctive about normative cognition as such. We argue that this pluralistic approach offers a methodologically sound point of departure for a fruitful and rigorous science of norms.

Conclusion

The central thesis of this paper – what we've called normative pluralism – is that we should not take the psychological unity of social norms for granted. Social norms might be underpinned by a domain-specific norm system or by a single type of cognitive process, but they might also be the product of many different processes. In our methodological proposal, we outlined a novel, non-psychological conception of social norms – what we've called normative regularities – and defined the core components of a psychology of norms in light of this construct. In our empirical proposal, we argued that, thus defined, social norms emerge from a heterogeneous set of cognitive, affective, and ecological mechanisms.

Thinking about social norms in this way will undoubtedly make the cognitive science of norms more complex and messy. If we are correct, however, then this will simply be a reflection of the complexity and messiness of social norms themselves. Taking a pluralistic approach to social norms allows us to explore the potential variability inherent to norm-governed behavior, which can help us to better understand how social norms shape our lives, and how they manifest themselves throughout the natural world.

Wednesday, August 18, 2021

The Shape of Blame: How statistical norms impact judgments of blame and praise

Bostyn, D. H., & Knobe, J. (2020, April 24). 
https://doi.org/10.31234/osf.io/2hca8

Abstract

For many types of behaviors, whether a specific instance of that behavior is blameworthy or praiseworthy depends on how much of the behavior is done or how people go about doing it. For instance, for a behavior such as "replying quickly to emails," whether a specific reply is blameworthy or praiseworthy will depend on the timeliness of that reply. Such behaviors lie on a continuum in which part of the continuum is praiseworthy (replying quickly) and another part is blameworthy (replying late). As praise shifts towards blame along such behavioral continua, the resulting blame-praise curve must have a specific shape. A number of questions therefore arise: What determines the shape of that curve? And what determines "the neutral point," i.e., the point along a behavioral continuum at which people neither blame nor praise? Seven studies explore these issues, focusing specifically on the impact of statistical information, and provide evidence for a hypothesis we call the "asymmetric frequency hypothesis."

From the Discussion

Asymmetric frequency and moral cognition

The results obtained here appear to support the asymmetric frequency hypothesis. So far, we have summarized this hypothesis as "People tend to perceive frequent behaviors as not blameworthy." But how exactly is this hypothesis best understood? Importantly, the asymmetric frequency effect does not imply that whenever a behavior becomes more frequent, the associated moral judgment will shift towards the neutral. Behaviors that are considered praiseworthy do not appear to become more neutral simply because they become more frequent. The effect of frequency only appears to occur when a behavior is blameworthy, which is why we dubbed it an asymmetric effect. An enlightening historical example in this regard is perhaps the "gay revolution" (Faderman, 2015). As knowledge of the rate of homosexuality has spread across society and people have become more familiar with homosexuality within their own communities, moral norms surrounding homosexuality have shifted from hostility to increasing acceptance (Gallup, 2019). Crucially, however, those who already lauded others for having a loving homosexual relationship did not shift their judgment towards neutral indifference over the same time period. While frequency mitigates blameworthiness, it does not cause a general shift towards neutrality. Even when everyone does the right thing, it does not lose its moral shine.

Friday, July 30, 2021

The Impact of Ignorance Beyond Causation: An Experimental Meta-Analysis

L. Kirfel & J. P. Phillips
Manuscript

Abstract

Norm violations have been demonstrated to impact a wide range of seemingly non-normative judgments. Among other things, when agents' actions violate prescriptive norms they tend to be seen as having done those actions more freely, as having acted more intentionally, as being more of a cause of subsequent outcomes, and even as being less happy. The explanation of this effect continues to be debated, with some researchers appealing to features of actions that violate norms, and other researchers emphasizing the importance of agents' mental states when acting. Here, we report the results of a large-scale experiment that replicates and extends twelve of the studies that originally demonstrated the pervasive impact of norm violations. In each case, we build on the pre-existing experimental paradigms to additionally manipulate whether the agents knew that they were violating a norm while holding fixed the action done. We find evidence for a pervasive impact of ignorance: the impact of norm violations on nonnormative judgments depends largely on the agent knowing that they were violating a norm when acting.

From the Discussion

Norm violations have been previously demonstrated to influence a wide range of intuitive judgments, including judgments of causation, freedom, happiness, doing vs. allowing, mental state ascriptions, and modal claims. A continuing debate centers on why normality has such a pervasive impact, and whether one should attempt to offer a unified explanation of these various effects (Hindriks, 2014).

At the broadest level, the current results demonstrate that the pervasive impact of normality likely warrants a unified explanation at some level. Across a wide range of intuitive judgments and highly different manipulations of an agent's knowledge, we found that the impact of normality on nonnormative judgments was diminished when the agent did not know that they were violating a norm. That is, we found evidence for a correspondingly pervasive impact of ignorance.

Wednesday, May 12, 2021

How pills undermine skills: Moralization of cognitive enhancement and causal selection

E. Mihailov, B. R. López, F. Cova & I. R. Hannikainen
Consciousness and Cognition
Volume 91, May 2021, 103120

Abstract

Despite the promise to boost human potential and wellbeing, enhancement drugs face recurring ethical scrutiny. The present studies examined attitudes toward cognitive enhancement in order to learn more about these ethical concerns, who has them, and the circumstances in which they arise. Fairness-based concerns underlay opposition to competitive use—even though enhancement drugs were described as legal, accessible and affordable. Moral values also influenced how subsequent rewards were causally explained: Opposition to competitive use reduced the causal contribution of the enhanced winner’s skill, particularly among fairness-minded individuals. In a follow-up study, we asked: Would the normalization of enhancement practices alleviate concerns about their unfairness? Indeed, proliferation of competitive cognitive enhancement eradicated fairness-based concerns, and boosted the perceived causal role of the winner’s skill. In contrast, purity-based concerns emerged in both recreational and competitive contexts, and were not assuaged by normalization.

Highlights

• Views on cognitive enhancement reflect both purity and fairness concerns.

• Fairness, but not purity, concerns are surmounted by normalizing use.

• Moral opposition to pills undermines user’s perceived skills.

From the Discussion

In line with a growing literature on causal selection (Alicke, 1992; Icard et al., 2017; Kominsky et al. 2015), judgments of the enhanced user’s skill aligned with participants’ moral attitudes. Participants who held permissive attitudes were more likely to causally attribute success to agents’ skill and effort, while participants who held restrictive attitudes were more likely to view the pill as causally responsible. This association resulted in stronger denial of competitive users’ talent and ability, particularly among fairness-minded individuals. 

The moral foundation of purity, comprising norms related to spiritual sanctity and bodily propriety, which appeals predominantly to political conservatives (Graham et al., 2009), also predicted attitudes toward enhancement. Purity-minded individuals were more likely to condemn enhancement users, regardless of whether cognitive enhancement was normal or rare. This categorical opposition may elucidate the origin of conservative bioethicists' (e.g., Kass, 2003) attitudes toward human enhancement: i.e., in self-directed norms regulating the proper care of one's own body (see also Koverola et al., 2021). Finally, whereas explicit reasoning about interpersonal concerns and the unjust treatment of others accompanied fairness-based opposition, our qualitative analyses did not reveal a cogent, purity-based rationale—which could be interpreted as evidence that purity-based opposition is not guided by moral reasoning to the same degree (Mihailov, 2016).

Saturday, February 27, 2021

Following your group or your morals? The in-group promotes immoral behavior while the out-group buffers against it

Vives, M., Cikara, M., & FeldmanHall, O. 
(2021, February 5). 
https://doi.org/10.31234/osf.io/jky9h

Abstract

People learn by observing others, albeit not uniformly. Witnessing an immoral behavior causes observers to commit immoral actions, especially when the perpetrator is part of the in-group. Does conformist behavior hold when observing the out-group? We conducted three experiments (N=1,358) exploring how observing an (im)moral in-/out-group member changed decisions relating to justice: Punitive, selfish, or dishonest choices. Only immoral in-groups increased immoral actions, while the same immoral behavior from out-groups had no effect. In contrast, a compassionate or generous individual did not make people more moral, regardless of group membership. When there was a loophole to deny cheating, neither an immoral in-/out-group member changed dishonest behavior. Compared to observing an honest in-group member, people become more honest themselves after observing an honest out-group member, revealing that out-groups can enhance morality. Depending on the severity of the moral action, the in-group licenses immoral behavior while the out-group buffers against it.

General discussion

Choosing compassion over punishment, generosity over selfishness, and honesty over dishonesty is the byproduct of many factors, including virtue-signaling, norm compliance, and self-interest. There are times, however, when moral choices are shaped by the mere observation of what others do in the same situation (Gino & Galinsky, 2012; Nook et al., 2016). Here, we investigated how moral decisions are shaped by one's in- or out-group—a factor known to shift willingness to conform (Gino et al., 2009). Conceptually replicating past research (Gino et al., 2009), results reveal that immoral behaviors were only transmitted by the in-group: while participants became more punitive or selfish after observing a punitive or selfish in-group, they did not increase their immoral behavior after observing an immoral out-group (Experiments 1 & 2). However, when the same manipulation was deployed in a context where the immoral acts could not be traced, neither the dishonest in- nor out-group member produced any behavioral shifts in our subjects (Experiment 3). These results suggest that immoral behaviors are not transmitted equally by all individuals. Rather, they are more likely to be transmitted within groups than between groups. In contrast, pro-social behaviors were rarely transmitted by either group. Participants did not become more compassionate or generous after observing a compassionate or generous in- or out-group member (Experiments 1 & 2). We only find modifications for prosocial behavior when participants observe another participant behaving in a costly honest manner, and this was modulated by group membership. Witnessing an honest out-group member attenuated the degree to which participants themselves cheated compared to participants who witnessed an honest in-group member (see Table 1 for a summary of results).
Together, these findings suggest that the transmission of moral corruption is both determined by group membership and is sensitive to the degree of moral transgression. Namely, given the findings from Experiment 3, in-groups appear to license moral corruption, while virtuous out-groups can buffer against it.

(Italics added.)