Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Counterfactual.

Saturday, April 1, 2023

The effect of reward prediction errors on subjective affect depends on outcome valence and decision context

Forbes, L., & Bennett, D. (2023, January 20). 
https://doi.org/10.31234/osf.io/v86bx

Abstract

The valence of an individual’s emotional response to an event is often thought to depend on their prior expectations for the event: better-than-expected outcomes produce positive affect and worse-than-expected outcomes produce negative affect. In recent years, this hypothesis has been instantiated within influential computational models of subjective affect that assume the valence of affect is driven by reward prediction errors. However, there remain a number of open questions regarding this association. In this project, we investigated the moderating effects of outcome valence and decision context (Experiment 1: free vs. forced choices; Experiment 2: trials with versus trials without counterfactual feedback) on the effects of reward prediction errors on subjective affect. We conducted two large-scale online experiments (N = 300 in total) of general-population samples recruited via Prolific to complete a risky decision-making task with embedded high-resolution sampling of subjective affect. Hierarchical Bayesian computational modelling revealed that the effects of reward prediction errors on subjective affect were significantly moderated by both outcome valence and decision context. Specifically, after accounting for concurrent reward amounts we found evidence that only negative reward prediction errors (worse-than-expected outcomes) influenced subjective affect, with no significant effect of positive reward prediction errors (better-than-expected outcomes). Moreover, these effects were only apparent on trials in which participants made a choice freely (but not on forced-choice trials) and when counterfactual feedback was absent (but not when counterfactual feedback was present). These results deepen our understanding of the effects of reward prediction errors on subjective affect.
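For readers less familiar with the modelling framework, the sketch below illustrates the family of models the abstract alludes to: a trial-by-trial reward prediction error (received reward minus expected reward) contributing, alongside reward amounts, to momentary affect ratings through exponentially decaying weights. This is a minimal illustration only; the function names, weights, and decay parameter are assumptions for the example, not the authors' fitted hierarchical Bayesian specification.

```python
import numpy as np

def reward_prediction_errors(rewards, expected_values):
    """Trial-by-trial RPE: received reward minus expected reward."""
    return np.asarray(rewards, dtype=float) - np.asarray(expected_values, dtype=float)

def momentary_affect(rewards, expected_values,
                     w0=0.0, w_reward=0.4, w_rpe=0.3, decay=0.7):
    """Illustrative momentary-affect model (not the authors' exact specification):
    affect on trial t is a baseline plus exponentially decaying weighted sums of
    recent reward amounts and recent reward prediction errors."""
    rpes = reward_prediction_errors(rewards, expected_values)
    affect = []
    for t in range(len(rewards)):
        weights = decay ** np.arange(t, -1, -1)   # most recent trial weighted most
        affect.append(w0
                      + w_reward * np.dot(weights, rewards[: t + 1])
                      + w_rpe * np.dot(weights, rpes[: t + 1]))
    return np.array(affect)

# Example: a short sequence of gamble outcomes and their expected values
rewards = [10, -5, 0, 20]
expected = [5, 5, -2, 8]
print(momentary_affect(rewards, expected))
```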

From the General Discussion section

Our findings were twofold: first, we found that after accounting for the effects of concurrent reward amounts (gains/losses of points) on affect, the effects of RPEs were subtler and more nuanced than has been previously appreciated. Specifically, contrary to previous research, we found that only negative RPEs influenced subjective affect within our task, with no discernible effect of positive RPEs.  Second, we found that even the effect of negative RPEs on affect was dependent on the decision context within which the RPEs occurred.  We manipulated two features of decision context (Experiment 1: free-choice versus forced-choice trials; Experiment 2: trials with counterfactual feedback versus trials without counterfactual feedback) and found that both features of decision context significantly moderated the effect of negative RPEs on subjective affect. In Experiment 1, we found that negative RPEs only influenced subjective affect in free-choice trials, with no effect of negative RPEs in forced-choice trials. In Experiment 2, we similarly found that negative RPEs only influenced subjective affect when counterfactual feedback was absent, with no effect of negative RPEs when counterfactual feedback was present. We unpack and discuss each of these results separately below.
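The asymmetry described here can be thought of as a valence-split RPE term whose negative-valence weight is gated by decision context. The snippet below is a hypothetical illustration of that interpretation, with made-up parameter names and values; it is not the authors' model.

```python
def affect_contribution_of_rpe(rpe, free_choice, counterfactual_shown,
                               w_pos=0.0, w_neg=0.25):
    """Hypothetical valence- and context-gated RPE term mirroring the reported pattern:
    positive RPEs carry (effectively) no weight, and negative RPEs only lower affect
    on freely chosen trials without counterfactual feedback."""
    if rpe >= 0:
        return w_pos * rpe                      # better than expected: no reliable effect
    if free_choice and not counterfactual_shown:
        return w_neg * rpe                      # worse than expected: lowers affect
    return 0.0                                  # forced choice or counterfactual shown

# A worse-than-expected outcome (RPE = -10) under three decision contexts
for free, cf in [(True, False), (False, False), (True, True)]:
    print(f"free_choice={free}, counterfactual={cf}: "
          f"{affect_contribution_of_rpe(-10, free, cf)}")
```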


Editor's synopsis: Consistent with a large body of other research, "bad" is stronger than "good" in shaping appraisals and decisions, at least in the context of free (not forced) choices when no counterfactual information is available.

These are important data points to keep in mind when working with patients who are making major life decisions.

Sunday, February 20, 2022

The Pervasive Impact of Ignorance

Kirfel, L., & Phillips, J. S. 
(2022, January 16). 
https://doi.org/10.31234/osf.io/xbrnj

Abstract

Norm violations have been demonstrated to impact a wide range of seemingly non-normative judgments. Among other things, when agents' actions violate prescriptive norms they tend to be seen as having done those actions more freely, as having acted more intentionally, as being more of a cause of subsequent outcomes, and even as being less happy. The explanation of this effect continues to be debated, with some researchers appealing to features of actions that violate norms, and other researchers emphasizing the importance of agents' mental states when acting. Here, we report the results of two large-scale experiments that replicate and extend twelve of the studies that originally demonstrated the pervasive impact of norm violations. In each case, we build on the pre-existing experimental paradigms to additionally manipulate whether the agents knew that they were violating a norm while holding fixed the action done. We find evidence for a pervasive impact of ignorance: the impact of norm violations on non-normative judgments depends largely on the agent knowing that they were violating a norm when acting. Moreover, we find evidence that the reduction in the impact of normality is underpinned by people's counterfactual reasoning: people are less likely to consider an alternative to the agent’s action if the agent is ignorant. We situate our findings in the wider debate around the role of normality in people's reasoning.

General Discussion

Motivated Moral Cognition

On the one hand, blame-based accounts may try and use this discovery to their advantage by arguing that an agent’s knowledge is directly relevant to whether they should be blamed (Cushman et al., 2008; Cushman, Sheketoff, Wharton, & Carey, 2013; Laurent, Nuñez, & Schweitzer, 2015; Yuill & Perner, 1988), and thus that these effects reflect that the impact of normality arises from the motivation to blame or hold agents responsible for their actions (Alicke & Rose, 2012; Livengood et al., 2017; Samland & Waldmann, 2016). For example, the tendency to report that agents who bring about harm acted intentionally may serve to corroborate people’s desire to judge the agent’s behaviour negatively (Nadelhoffer, 2004; Rogers et al., 2019). Motivated accounts differ in terms of exactly which moral judgment is argued to be at stake, i.e. whether norm-violations elicit a desire to punish (Clark et al., 2014), to blame (Alicke & Rose, 2012; Hindriks et al., 2016), to hold accountable (Samland & Waldmann, 2016) or responsible (Sytsma, 2020a), and whether its influence works in the form of a cognitive bias (Alicke, 2000), or a more affective response (Nadelhoffer, 2004). Common to all, however, is the assumption that it is the impetus to morally condemn the norm-violating agent that underlies exaggerated attributions of specific properties, from free will to intentional action.

Our study puts an important constraint on how the normative judgment that motivated reasoning accounts assume might work. To account for our findings, motivated accounts cannot generally appeal to whether an agent’s action violated a clear norm, but have to take into account whether people would all-things-considered blame the agent (Driver, 2017). In that sense, the mere violation of a norm must not, itself, suffice to trigger the relevant blame response. Rather, the perception of this norm violation must occur in conjunction with an assessment of the epistemic state of the agent such that the relevant motivated reasoning is only elicited when the agent is aware of the immorality of their action. For example, Alicke and Rose’s 2012 Culpable Control Model holds that immediate negative evaluative reactions of an agent’s behaviours often cause people to interpret all other agential features in a way that justifies blaming the agent. Such accounts face a challenge. On the one hand, they seem committed to the idea that people should discount the agent’s ignorance to support their immediate negative evaluation of the harm-causing actions. On the other hand, they need to account for the fact that people seem to be sensitive to fine-grained epistemic features of the agent when forming their negative evaluation of the harm-causing action.

Friday, December 24, 2021

It's not what you did, it's what you could have done

Bernhard, R. M., LeBaron, H., & Phillips, J. S. 
(2021, November 8).

Abstract

We are more likely to judge agents as morally culpable after we learn they acted freely rather than under duress or coercion. Interestingly, the reverse is also true: Individuals are more likely to be judged to have acted freely after we learn that they committed a moral violation. Researchers have argued that morality affects judgments of force by making the alternative actions the agent could have done instead appear comparatively normal, which then increases the perceived availability of relevant alternative actions. Across four studies, we test the novel predictions of this account. We find that the degree to which participants view possible alternative actions as normal strongly predicts their perceptions that an agent acted freely. This pattern holds both for perceptions of descriptive normality (whether the actions are unusual) and prescriptive normality (whether the actions are good) and persists even when what is actually done is held constant. We also find that manipulating the prudential value of alternative actions, or the degree to which alternatives adhere to social norms, has a similar effect to manipulating whether the actions or their alternatives violate moral norms, and that both effects are explained by changes in the perceived normality of the alternatives. Finally, we even find that evaluations of both the prescriptive and descriptive normality of alternative actions explain force judgments in response to moral violations. Together, these results suggest that across contexts, participants’ force judgments depend not on the morality of the actual action taken, but on the normality of possible alternatives. More broadly, our results build on prior work that suggests a unifying role of normality and counterfactuals across many areas of high-level human cognition.

(cut)

Why does descriptive normality matter for force judgments?

Our results also suggest that the descriptive normality of alternatives may be at least as important as the prescriptive normality. Why would this be the case? One possibility is that evaluations of the descriptive normality of alternatives may be influencing participants’ perceptions of the alternatives’ value. After all, actions that are taken by most people are often taken because they are the best choice. Likewise, morally wrong actions are much less commonplace than morally neutral or good ones. Therefore, participants may be inferring some kind of lower prescriptive value inherent in unusual actions, even in cases where we went to great lengths to eliminate differences in prescriptive value.

Sunday, June 24, 2018

Moral hindsight for good actions and the effects of imagined alternatives to reality

Ruth M.J. Byrne and Shane Timmons
Cognition
Volume 178, September 2018, Pages 82–91

Abstract

Five experiments identify an asymmetric moral hindsight effect for judgments about whether a morally good action should have been taken, e.g., Ann should run into traffic to save Jill who fell before an oncoming truck. Judgments are increased when the outcome is good (Jill sustained minor bruises), as Experiment 1 shows; but they are not decreased when the outcome is bad (Jill sustained life-threatening injuries), as Experiment 2 shows. The hindsight effect is modified by imagined alternatives to the outcome: judgments are amplified by a counterfactual that if the good action had not been taken, the outcome would have been worse, and diminished by a semi-factual that if the good action had not been taken, the outcome would have been the same. Hindsight modification occurs when the alternative is presented with the outcome, and also when participants have already committed to a judgment based on the outcome, as Experiments 3A and 3B show. The hindsight effect occurs not only for judgments in life-and-death situations but also in other domains such as sports, as Experiment 4 shows. The results are consistent with a causal-inference explanation of moral judgment and go against an aversive-emotion one.

Highlights
• Judgments a morally good action should be taken are increased when it succeeds.
• Judgments a morally good action should be taken are not decreased when it fails.
• Counterfactuals that the outcome would have been worse amplify judgments.
• Semi-factuals that the outcome would have been the same diminish judgments.
• The asymmetric moral hindsight effect supports a causal-inference theory.

The research is here.

Wednesday, October 18, 2017

When Doing Some Good Is Evaluated as Worse Than Doing No Good at All

George E. Newman and Daylian M. Cain
Psychological Science published online 8 January 2014

Abstract

In four experiments, we found that the presence of self-interest in the charitable domain was seen as tainting: People evaluated efforts that realized both charitable and personal benefits as worse than analogous behaviors that produced no charitable benefit. This tainted-altruism effect was observed in a variety of contexts and extended to both moral evaluations of other agents and participants’ own behavioral intentions (e.g., reported willingness to hire someone or purchase a company’s products). This effect did not seem to be driven by expectations that profits would be realized at the direct cost of charitable benefits, or the explicit use of charity as a means to an end. Rather, we found that it was related to the accessibility of different counterfactuals: When someone was charitable for self-interested reasons, people considered his or her behavior in the absence of self-interest, ultimately concluding that the person did not behave as altruistically as he or she could have. However, when someone was only selfish, people did not spontaneously consider whether the person could have been more altruistic.

The article is here.