Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Monday, March 28, 2022

Do people understand determinism? The tracking problem for measuring free will beliefs

Murray, S., Dykhuis, E., & Nadelhoffer, T.
(2022, February 8). 
https://doi.org/10.31234/osf.io/kyza7

Abstract

Experimental work on free will typically relies on using deterministic stimuli to elicit judgments of free will. We call this the Vignette-Judgment model. In this paper, we outline a problem with research based on this model. It seems that people either fail to respond to the deterministic aspects of vignettes when making judgments or that their understanding of determinism differs from researcher expectations. We provide some empirical evidence for a key assumption of the problem. In the end, we argue that people seem to lack facility with the concept of determinism, which calls into question the validity of experimental work operating under the Vignette-Judgment model. We also argue that alternative experimental paradigms are unlikely to elicit judgments that are philosophically relevant to questions about the metaphysics of free will.

Error and judgment

Our results show that people make several errors about deterministic stimuli used to elicit judgments about free will and responsibility. Many participants seem to conflate determinism with different constructs (bypassing or fatalism) or mistakenly interpret the implications of deterministic constraints on agents (intrusion).

Measures of item invariance suggest that participants were not responding differently to error measures across different vignettes. Hence, responses to error measures cannot be explained exclusively in terms of differences among the vignettes; rather, they seem to reflect participants’ mistaken judgments about determinism. Further, these mistakes are associated with significant differences in judgments about free will. Some of the patterns are predictable: participants who conflate determinism with bypassing attribute less free will to individuals in deterministic scenarios, while participants who import intrusion into deterministic scenarios attribute greater free will. This makes sense: as participants perceive mental states to be less causally efficacious, or individuals as less ultimately in control of their decisions, free will is diminished; conversely, as people perceive more indeterminism, free will is amplified.

Additionally, we found that errors of intrusion are stronger than errors of bypassing or fatalism. Because bypassing errors are associated with diminished judgments of free will and intrusion errors with amplified judgments, if all three errors were equal in strength we would expect an ordered, additive pattern across error groups: individuals who make only bypassing errors would have the lowest average judgments, individuals who make only intrusion errors would have the highest, and people who make both kinds of error would fall in the middle (as the two errors would cancel each other out). We did not observe this relationship. Instead, participants who make intrusion errors are statistically indistinguishable from one another, no matter what other kinds of errors they make.
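
To make the additive prediction concrete, here is a minimal simulation sketch. The numbers (a 7-point scale and equal, opposite effect sizes) are illustrative assumptions of ours, not estimates from the paper; the point is only that an additive model predicts a low/high/middle ordering across error groups, while a model on which intrusion dominates predicts that every group making an intrusion error looks the same.

```python
# Illustrative sketch only: toy "additive" vs. "intrusion-dominates" models of
# how errors about determinism might combine. All values are assumptions.
BASELINE = 4.0          # hypothetical midpoint of a 7-point free-will scale
BYPASS_EFFECT = -1.0    # bypassing errors diminish free-will judgments
INTRUSION_EFFECT = 1.0  # intrusion errors amplify free-will judgments

def additive(bypass: bool, intrusion: bool) -> float:
    """Errors combine additively, so opposite errors cancel out."""
    return BASELINE + bypass * BYPASS_EFFECT + intrusion * INTRUSION_EFFECT

def intrusion_dominates(bypass: bool, intrusion: bool) -> float:
    """Any intrusion error fixes the judgment, whatever else goes wrong."""
    if intrusion:
        return BASELINE + INTRUSION_EFFECT
    return BASELINE + bypass * BYPASS_EFFECT

for bypass, intrusion in [(True, False), (False, True), (True, True)]:
    print(f"bypass={bypass}, intrusion={intrusion}: "
          f"additive={additive(bypass, intrusion):.1f}, "
          f"dominates={intrusion_dominates(bypass, intrusion):.1f}")
# additive:  3.0 / 5.0 / 4.0 -> lowest, highest, middle (the predicted ordering)
# dominates: 3.0 / 5.0 / 5.0 -> all intrusion groups alike (the observed pattern)
```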

Thus, errors of intrusion seem to trump others in the process of forming judgments of free will. The errors people make, then, are not merely incidental to their judgments. Instead, there are significant associations between people’s inferential errors about determinism and how they attribute free will and responsibility. This evidence supports our claim that people make several errors about the nature and implications of determinism.

Sunday, March 27, 2022

Observers penalize decision makers whose risk preferences are unaffected by loss–gain framing

Dorison, C. A., & Heller, B. H. (2022). 
Journal of Experimental Psychology: 
General. Advance online publication.

Abstract

A large interdisciplinary body of research on human judgment and decision making documents systematic deviations between prescriptive decision models (i.e., how individuals should behave) and descriptive decision models (i.e., how individuals actually behave). One canonical example is the loss–gain framing effect on risk preferences: the robust tendency for risk preferences to shift depending on whether outcomes are described as losses or gains. Traditionally, researchers argue that decision makers should always be immune to loss–gain framing effects. We present three preregistered experiments (N = 1,954) that qualify this prescription. We predict and find that while third-party observers penalize decision makers who make risk-averse (vs. risk-seeking) choices when choice outcomes are framed as losses, this result reverses when outcomes are framed as gains. This reversal holds across five social perceptions, three decision contexts, two sample populations of United States adults, and with financial stakes. This pattern is driven by the fact that observers themselves fall victim to framing effects and socially derogate (and financially punish) decision makers who disagree. Given that individuals often care deeply about their reputation, our results challenge the long-standing prescription that they should always be immune to framing effects. The results extend understanding not only of decision making under risk, but also of a range of behavioral tendencies long considered irrational biases. Such understanding may ultimately reveal not only why such biases are so persistent but also novel interventions: our results suggest a necessary focus on social and organizational norms.
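
For readers who want the structure of a loss–gain framing pair spelled out, here is a minimal numerical sketch using the canonical Tversky-and-Kahneman-style numbers; these are illustrative assumptions, not the stimuli from these experiments.

```python
# Two frames describing the same choice between a sure option and a gamble.
# A frame-immune decision maker should choose identically in both frames,
# since the expected values match; observed preferences typically reverse.
p = 1 / 3  # probability that the risky option turns out well

# Gain frame: outcomes described as lives saved (600 at risk)
sure_gain = 200                      # "200 people will be saved"
risky_gain = p * 600 + (1 - p) * 0   # "a 1/3 chance that all 600 are saved"

# Loss frame: the identical outcomes described as deaths
sure_loss = -400                     # "400 people will die"
risky_loss = p * 0 + (1 - p) * -600  # "a 2/3 chance that all 600 die"

print(sure_gain, risky_gain)    # 200 200.0   -> equal expected value
print(sure_loss, risky_loss)    # -400 -400.0 -> equal expected value
# Typical finding: risk-averse choices in the gain frame and risk-seeking
# choices in the loss frame, despite the identical expected values above.
```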

From the General Discussion

But what makes an optimal belief or choice? Here, we argue that an expanded focus on the goals decision makers themselves hold (i.e., reputation management) questions whether such deviations from rational-agent models should always be considered suboptimal. We test this broader theorizing in the context of loss–gain framing effects on risk preferences not because we think the psychological dynamics at play are unique to this context, but rather because such framing effects have been uniquely influential for both academic discourse and applied interventions in policy and organizations. In fact, the results hold preliminary implications not only for decision making under risk, but also for extending understanding of a range of other behavioral tendencies long considered irrational biases in the research literature on judgment and decision making (e.g., sunk cost bias; see Dorison, Umphres, & Lerner, 2021).

An important clarification of our claims merits note. We are not claiming that it is always rational to be biased just because others are. For example, it would be quite odd to claim that someone is rational for believing that eating sand provides enough nutrients to survive, simply because others may like them for holding this belief or because others in their immediate social circle hold this belief. In this admittedly bizarre case, it would still be clearly irrational to attempt to subsist on sand, even if there are reputational advantages to doing so—that is, the costs substantially outweigh the reputational benefits. In fact, the vast majority of framing effect studies in the lab do not have an explicit reputational/strategic component at all. 

Saturday, March 26, 2022

Anticipation of future cooperation eliminates minimal ingroup bias in children and adults

Misch, A., Paulus, M., & Dunham, Y. (2021). 
Journal of Experimental Psychology: 
General, 150(10), 2036–2056.

Abstract

From early in development, humans show a strong preference for members of their own groups, even in so-called minimal (i.e., arbitrary and unfamiliar) groups, leading to tremendous negative consequences such as outgroup discrimination and derogation. A better understanding of the underlying processes driving humans’ group mindedness is an important first step toward fighting discrimination and inequality at a broader level. Based on the assumption that minimal group allocation elicits the anticipation of future within-group cooperation, which in turn elicits ingroup preference, we investigate whether changing participants’ anticipation from within-group cooperation to between-group cooperation reduces their ingroup bias. In the present set of five studies (overall N = 465), we test this claim in two different populations (children and adults), in two different countries (United States and Germany), and in two kinds of groups (minimal groups and social groups based on gender). Results confirm that changing participants’ anticipation of whom they will cooperate with from ingroup to outgroup members significantly reduces their ingroup bias in minimal groups, though not in gender-based groups, which are non-coalitional. In summary, these experiments provide robust evidence for the hypothesis that children and adults encode minimal group membership as a marker for future collaboration, and they show that experimentally manipulating this expectation can eliminate minimal ingroup bias. This study sheds light on the underlying cognitive processes in intergroup behavior throughout development and opens up new avenues for research on reducing ingroup bias and discrimination.

From the General Discussion

The present set of studies advances the field in several important ways. First, it summarizes and tests a plausible theoretical framework for the formation of ingroup bias in the minimal group paradigm, thereby building on accounts that explain the origins of categorization based on allegiances and coalitions (e.g., Kurzban et al., 2001). Drawing on both evolutionary assumptions (Smith, 2003; West, Griffin, & Gardner, 2007; West, El Mouden, & Gardner, 2011) and social learning accounts (e.g., Bigler & Liben, 2006, 2007), our explanation focuses on interdependence and cooperation (Balliet et al., 2014; Pietraszewski, 2013, 2020; Yamagishi & Kiyonari, 2000) and extends these accounts to the formation of intergroup bias in attitudes. Results of our studies support the hypotheses that allocation in a minimal group paradigm elicits the anticipation of cooperation, and that the anticipation of cooperation is one of the key factors in the formation of ingroup bias, as is evident in the robust results across four experiments and several different measures. We therefore concur with the claim that the minimal group paradigm is not so minimal after all (Karp et al., 1993), as it (at least) elicits the expectation to collaborate.

Furthermore, our results show that the anticipation of cooperation alone is sufficient to induce (and reduce) ingroup bias (Experiment 2). This extends previous research emphasizing the importance of the cooperative activity itself (e.g., Gaertner et al., 1990; Sherif et al., 1961) and highlights the role of the cognitive processes involved in cooperative behavior. In our experiment, the cooperative activity in itself had no additional effect on the formation of ingroup bias. It is important to note, however, that cooperation was operationalized here in a minimal way; a more direct operationalization and a more interactive experience of cooperation with the other children might have a stronger effect on children’s attitudes, a subject for further research.

Friday, March 25, 2022

How development and culture shape intuitions about prosocial obligations

Marshall, J., Gollwitzer, A., et al. (2022).
Journal of Experimental Psychology:
General. Advance online publication.
https://doi.org/10.1037/xge0001136

Abstract

Do children, like most adults, believe that only kin and close others are obligated to help one another? In two studies (total N = 1,140), we examined whether children (∼5- to ∼10-year-olds) and adults across five different societies consider social relationship when ascribing prosocial obligations. Contrary to the view that such discriminations are a natural default in human reasoning, younger children in the United States (Studies 1 and 2) and across cultures (Study 2) generally judged everyone (parents, friends, and strangers) as obligated to help someone in need. Older children and adults, on the other hand, tended to exhibit more discriminant judgments: they considered parents more obligated to help than friends, and friends more obligated than strangers, although this effect was stronger in some cultures than in others. Our findings suggest that children’s initial sense of prosocial obligation in social-relational contexts starts out broad and generally becomes more selective over the course of development.

From the General Discussion

Other than urban versus rural, our cross-cultural samples varied on a variety of dimensions, including (but not limited to) Westernization, collectivism versus individualism, and SES. Again, because our samples do not vary systematically on any of these dimensions, we cannot directly examine the role of particular cultural factors in the development of prosocial obligation judgments. But we do suspect that variation in societal values related to collectivism and individualism may shape children’s emerging sense of prosocial obligation. This proposal is motivated by the finding that participants in more collectivistic societies (Japan, India, Uganda) exhibited broader senses of obligation than those in more individualistic societies (Germany, United States) (see Hofstede, 2011; Rarick et al., 2013).

How would collectivism and individualism shape the development of our sense of obligation? After all, collectivism in a theoretical sense tends to emphasize obligations to the group—not necessarily to strangers (e.g., Brewer & Chen, 2007). It is possible, however, that participants across societies viewed the strangers in our stimuli as part of the cultural in-group. After all, the characters in the stimuli are all the same ethnicity and presumably live in the same community. The variance in responses we find, then, might be variance in the degree to which participants consider individuals obligated to help in-group strangers—and this, in turn, may depend on cultural values such as collectivism. Future research could best address this question by examining how group membership and social relationship independently impact the development of obligation judgments across societies.

Regardless of which societal values ultimately impact children’s sense of obligation, the findings raise the question of how this process occurs during childhood. Adults and trusted others may explicitly or implicitly teach children about societal values via testimony or observation (e.g., Maccoby, 2007; Pratt & Hardy, 2014; see Dahl, 2019 for a useful review). Through this process, children may absorb this information, which ultimately alters their obligation judgments. Alternatively, and more in line with a Piagetian constructivist view (Piaget, 1932), children may update their beliefs about obligation by exploring how individuals in their community act toward one another and by eliciting information about obligation from others (Dahl et al., 2018; Turiel, 1983, 2015). We hope future research will investigate these questions in greater detail.

Thursday, March 24, 2022

Proposal for Revising the Uniform Determination of Death Act

Hastings Bioethics Center
Originally posted 18 FEB 22

Organ transplantation has saved many lives in the past half-century, and the majority of postmortem organ donations have occurred after a declaration of death by neurological criteria, or brain death. However, inconsistencies between the biological concept of death and the diagnostic protocols used to determine brain death, as well as questions about the underlying assumptions of brain death, have led to a justified reassessment of the legal standard of death. We believe that the concept of brain death, though flawed in its present application, can be preserved and promoted as a pathway to organ donation, but only after particular changes are made in the medical criteria for its diagnosis. These changes should precede changes in the Uniform Determination of Death Act (UDDA).

The UDDA, approved in 1981, provides a legal definition of death that has been adopted in some form by all 50 states. It says that death can be declared upon the irreversible cessation of circulatory and respiratory functions or of brain functions. The act defines brain death as “irreversible cessation of all functions of the entire brain, including the brainstem.” This description rests on an assumption, widely held at the time, that the brain is the master integrator of the body, such that when it ceases to function, the body can no longer maintain integrated functioning. It was presumed that this would result in both cardiac and pulmonary arrest and the death of the body as a whole. That assumption has now been called into question by exceptional cases of individuals on ventilators who were declared brain dead but who continued to have function in the hypothalamus.

(cut)

Revision of the UDDA should first defer to a revision of the guidelines. Clinical criteria for the diagnosis of “cessation of all functions of the entire brain” must include all pertinent functions, including hypothalamic functions such as hormone release and regulation of temperature and blood pressure, to avoid the specter of neurologic recovery in those who fulfill the current clinical criteria for the diagnosis of brain death.

It is likely that the failure to account for a full set of pertinent brain functions has led to inconsistent diagnoses and conflicting results. Such inconsistencies, although well-documented in a number of cases, may have been even more frequent but unrecognized because declaration of brain death is often a self-fulfilling prophecy: rarely do any life-sustaining interventions continue after the diagnosis is made.

To be consistent, transparent, and accurate, the cessation of function in both the cardiopulmonary and the neurological standard of the UDDA should be described as permanent (i.e., no reversal will be attempted) rather than irreversible (i.e., no reversal is possible). We recognize additional challenges in complying with the UDDA requirements that these cessation criteria for brain death include “all functions” of the “entire brain.” In the absence of universally accepted and easily implemented testing criteria, there may be real problems with being in perfect compliance with these legal criteria in spite of being in perfect compliance with the currently published medical guidelines. If the concept of brain death is philosophically valid, as we think is defensible, then the diagnostic guidelines should be corrected before any attempt is made to correct the UDDA. They must then “say what they mean and mean what they say” to eliminate any possibility of patients with persistent evidence of brain function, including hypothalamic function, being erroneously declared brain dead.

Wednesday, March 23, 2022

Moral Injury, Traumatic Stress, and Threats to Core Human Needs in Health-Care Workers: The COVID-19 Pandemic as a Dehumanizing Experience

Hagerty, S. L., & Williams, L. M. (2022)
Clinical Psychological Science. 
https://doi.org/10.1177/21677026211057554

Abstract

The pandemic has threatened core human needs, and it provides a context to study psychological injury as it relates to unmet basic human needs and traumatic stressors, including moral incongruence. We surveyed 1,122 health-care workers from across the United States between May 2020 and August 2020. Using a mixed-methods design, we examined moral injury and unmet basic human needs in relation to traumatic stress and suicidality. Nearly one third of respondents reported elevated symptoms of psychological trauma, and the prevalence of suicidal ideation among health-care workers in our sample was roughly three times higher than in the general population. Moral injury and loneliness predicted greater symptoms of traumatic stress and suicidality. We conclude that dehumanization is a driving force behind the psychological injury resulting from moral incongruence in the context of the pandemic. The pandemic most frequently threatened basic human motivations at the foundational level of safety and security, relative to higher-order needs.

From the General Discussion

A subset of respondents added context to their experiences of moral injury in the form of narrative responses. These powerful accounts of the lived experiences of health-care workers provided us with a richer understanding of the construct of moral injury, especially as it relates to the novel context of the pandemic. Although betrayal is a known facet of moral injury from prior work (Bryan et al., 2016), our qualitative analysis suggests that dehumanization may also be a key phenomenon that underlies pandemic-related moral injury. Given our findings, we suggest that it may be important to attend to both betrayal and dehumanization when researching or intervening on the psychological sequelae of the pandemic. Our results support this because experiences of dehumanization in our sample were associated with greater symptoms of traumatic stress.

Another lens through which to view the experiences of health-care workers in the pandemic is that of unsatisfied basic human motivations. Given the obvious barriers the pandemic presents to human connection (Hagerty & Williams, 2020), we had an a priori interest in studying loneliness. Our results indeed suggest that the need for social connection is relevant to the mental-health experiences of health-care workers during the pandemic: loneliness was associated with greater traumatic stress, moral injury, and suicidal ideation. Echoing the importance of this social factor are findings from prior research suggesting that social connectedness buffers the association between moral injury and suicidality (Kelley et al., 2019) and buffers the impact of PTSD symptoms on suicidal behavior (Panagioti et al., 2014). Thus, our work further highlights lack of social connection as a possible risk factor among individuals who face moral injury and traumatic stress and demonstrates its relevance to the mental health of health-care workers during the pandemic.

Tuesday, March 22, 2022

Could we fall in love with robots?

Rich Wordsworth
eandt.theiet.org
Originally published 6 DEC 21

Here is an excerpt:

“So what are people’s expectations? They’re being fed a very particular idea of how [robot companions] should look. But when you start saying to people, ‘They can look like anything,’ then the imagination really opens up.”

Perhaps designing companion robots that deliberately don’t emulate human beings is the answer to that common sci-fi question of whether or not a relationship with a robot can ever be reciprocal. A robot with a Kindle for a head isn’t likely to hoodwink many people at the singles bar. When science fiction shows us robotic lovers, they are overwhelmingly portrayed as human (at least outwardly). This trips something defensive in us: the sense of unease or revulsion we feel when a non-human entity tries to deceive us into thinking that it’s human is such a common phenomenon (thanks largely to CGI in films and video games) that it has its own name: ‘the Uncanny Valley’. Perhaps in the future, the engineering of humanoid robots will progress to the point where we really can’t tell (without a signed waiver and a toolbox) whether a ‘person’ is flesh and blood or wires and circuitry. But in the meantime, maybe the best answer is simply not to bother emulating humans and instead to explore the outlandish.

“You can form a friendship; you can form a bond,” says Devlin of non-humanlike machines. “That bond is one-way, but if the machine shows you any form of response, then you can project onto that and feel social. We treat machines socially because we are social creatures and it’s almost enough to make us buy into it. Not delusionally, but to suspend our disbelief and feel a connection. People feel connections with their vacuum cleaners: mine’s called Babbage and I watch him scurrying around, I pick him up, I tell him, ‘Don’t go there!’ It’s like having a robot pet – but I’m perfectly aware he’s just a lump of plastic. People talk to their Alexas when they’re lonely and they want to chat. So, yes: you can feel a bond there.

“It’s not the same as a human friendship: it’s a new social category that’s emerging that we haven’t really seen before.”

As for the question of reciprocity, Devlin doesn’t see a barrier there with robots that doesn’t already exist in human relationships.

“You’ll get a lot of people going, ‘Oh, that’s not true friendship; that’s not real,’” Devlin says, sneeringly. “Well, if it feels real and if you’re happy in it, is that a problem? It’s the same people who say you can’t have true love unless it’s reciprocated, which is the biggest lie I’ve ever heard because there are so many people out there who are falling in love with people they’ve never even met! Fictional people! Film stars! Everybody! Those feelings are very, very valid to someone who’s experiencing them.”

“How are you guys doing here?” The waitress asks with perfect waitress-in-a-movie timing as Twombly and Catherine sit, processing the former’s new relationship with Samantha in silence.

“Fine,” Catherine blurts. “We’re fine. We used to be married but he couldn’t handle me; he wanted to put me on Prozac and now he’s madly in love with his laptop.”

In 2014, Spike Jonze’s script for ‘Her’ won the Academy Award for Best Original Screenplay (the film was nominated for four other Oscars, including Best Picture). Two years later, Alex Garland’s script for ‘Ex Machina’ was nominated for the same award, while arguably presenting the same conclusion: we are a species that loves openly and to a fault.

Monday, March 21, 2022

Confidence and gradation in causal judgment

O'Neill, K., Henne, P., et al. (2022).
Cognition
Volume 223, June 2022, 105036

Abstract

When comparing the roles of the lightning strike and the dry climate in causing the forest fire, one might think that the lightning strike is more of a cause than the dry climate, or one might think that the lightning strike completely caused the fire while the dry conditions did not cause it at all. Psychologists and philosophers have long debated whether such causal judgments are graded; that is, whether people treat some causes as stronger than others. To address this debate, we first reanalyzed data from four recent studies. We found that causal judgments were actually multimodal: although most causal judgments made on a continuous scale were categorical, there was also some gradation. We then tested two competing explanations for this gradation: the confidence explanation, which states that people make graded causal judgments because they have varying degrees of belief in causal relations, and the strength explanation, which states that people make graded causal judgments because they believe that causation itself is graded. Experiment 1 tested the confidence explanation and showed that gradation in causal judgments was indeed moderated by confidence: people tended to make graded causal judgments when they were unconfident, but they tended to make more categorical causal judgments when they were confident. Experiment 2 tested the causal strength explanation and showed that although confidence still explained variation in causal judgments, it did not explain away the effects of normality, causal structure, or the number of candidate causes. Overall, we found that causal judgments were multimodal and that people make graded judgments both when they think a cause is weak and when they are uncertain about its causal role.

From the General Discussion

The current paper sought to address two major questions regarding singular causal judgments: are causal judgments graded, and if so, what explains this gradation?

(cut)

In other words, people make graded causal judgments both when they think a cause is weak and when they are uncertain about their causal judgment. Although further work is needed to determine precisely why and when causal judgments are influenced by confidence, we have demonstrated that these effects are separable from better-studied effects on causal judgment. This is good news for theories of causal judgment that rely on the causal strength explanation: these theories do not need to account for the effects of confidence on causal judgment to be useful in explaining other effects. That is, there is no need for major revisions in how we think about causal judgments. Nevertheless, we think our results have important implications for these theories, which we outline below.
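
As a toy illustration of that claim, the following simulation sketch mixes mostly categorical "confident" responders with graded "unconfident" ones; the split and the distributions are our assumptions, not the authors' data. The mixture reproduces the qualitative signature reported in the reanalysis: a multimodal distribution with mass piled at the endpoints of the scale and a low hump of graded responses in between.

```python
# Illustrative simulation only: assumed response distributions, not real data.
import numpy as np

rng = np.random.default_rng(2022)
n = 5000
confident = rng.random(n) < 0.7  # assume 70% of responders are confident

judgments = np.where(
    confident,
    rng.choice([0.0, 1.0], size=n),  # categorical: "not a cause" / "a cause"
    rng.beta(2.0, 2.0, size=n),      # graded: spread across the scale's middle
)

counts, _ = np.histogram(judgments, bins=10, range=(0.0, 1.0))
print(counts)  # heavy first and last bins, low hump between: multimodal
endpoint_share = ((judgments < 0.05) | (judgments > 0.95)).mean()
print(f"share of judgments at the scale endpoints: {endpoint_share:.2f}")
```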

Sunday, March 20, 2022

The prejudices of expert evidence

Chin, J., Cullen, H. J., & Clarke, B. 
(2022, February 14).
https://doi.org/10.31222/osf.io/nxcvy

Abstract

The rules and procedures regulating the admission of potentially unreliable expert evidence have been substantially weakened over the past several years. We respond to this trend by focusing on one aspect of the rules that has not been explicitly curtailed: unfair prejudice. Unfair prejudice is an important component of trial judges’ authority to exclude evidence, which they may do when that unfair prejudice outweighs the evidence’s probative value. We develop the concept of unfair prejudice by first examining how it has been interpreted by judges and then relating that to the relevant social scientific research on the characteristics of expertise that can make it prejudicial. In doing so, we also discuss the research behind a common reason that judges admit expert evidence despite its prejudice, which is that judicial directions help jurors understand and weigh it. As a result, this article provides two main contributions. First, it advances knowledge about unfair prejudice, which is an important part of expert evidence law that has received relatively little attention from legal researchers. Second, it provides guidance to practitioners for challenging expert evidence under one of the few avenues left to do so.

(cut)

What should courts do about the prejudices of expert evidence?

While we recognise that balancing probative value with unfair prejudice is fact-specific and contextual, the analysis above suggests considerable room for improvement in how courts assess putatively prejudicial expert evidence. Specifically, the research we reviewed indicates that courts do not fully appreciate the degree to which laypeople may overestimate the reliability of scientific claims. More than that, the judicial approach has focused myopically on the CSI Effect (and, in at least one case, significantly misconstrued it) rather than on other well-researched stereotypes and misconceptions about expert evidence. Accordingly, we recommend that judges apply the discretions to exclude evidence in sections 135 and 137 of the UEL in a way that is more sensitive to empirical research. For example, courts should recognise that experts or counsel who emphasise the expert’s status and years of experience also add to the evidence’s prejudicial potential. Moreover, technical jargon and the general complexity of the evidence can heighten that prejudice, such that these features of expert evidence may build on one another in a way that is more than additive.

The expert evidence jurisprudence is even more insensitive to research on the factors that make evidence difficult or impossible to test. For example, we struggled (as others have) to find decisions acknowledging that unconscious cognitive processes and their associated biases invite prejudice because the unconscious is difficult to cross-examine. Moreover, the closest decision we could find acknowledging adversarial imbalance as a limit on adversarial testing was a US decision in obiter. And, troublingly, courts sometimes simply mistake previously admitted evidence for evidence that has been adversarially tested. With evidence that defies testing, the first step for courts is to acknowledge this research on prejudice and incorporate it into the exclusionary calculus under sections 135 and 137. The next step, as we will see in the following part, is to use this knowledge to better understand the limitations of judicial directions aimed at mitigating prejudice – and perhaps to craft better directions in the future.