Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, December 9, 2022

Neural and Cognitive Signatures of Guilt Predict Hypocritical Blame

Yu, H., Contreras-Huerta, L. S., et al. (2022).
Psychological Science, 0(0).

Abstract

A common form of moral hypocrisy occurs when people blame others for moral violations that they themselves commit. It is assumed that hypocritical blamers act in this manner to falsely signal that they hold moral standards that they do not really accept. We tested this assumption by investigating the neurocognitive processes of hypocritical blamers during moral decision-making. Participants (62 adult UK residents; 27 males) underwent functional MRI scanning while deciding whether to profit by inflicting pain on others and then judged the blameworthiness of others’ identical decisions. Observers (188 adult U.S. residents; 125 males) judged participants who blamed others for making the same harmful choice to be hypocritical, immoral, and untrustworthy. However, analyzing hypocritical blamers’ behaviors and neural responses shows that hypocritical blame was positively correlated with conflicted feelings, neural responses to moral standards, and guilt-related neural responses. These findings demonstrate that hypocritical blamers may hold the moral standards that they apply to others.

Statement of Relevance

Hypocrites blame other people for moral violations they themselves have committed. Common perceptions of hypocrites assume they are disingenuous and insincere. However, the mental states and neurocognitive processes underlying hypocritical blamers’ behaviors are not well understood. We showed that people who hypocritically blamed others reported stronger feelings of moral conflict during moral decision-making, had stronger neural responses to moral standards in lateral prefrontal cortex, and exhibited more guilt-related neurocognitive processes associated with harming others. These findings suggest that some hypocritical blamers do care about the moral standards they use to condemn other people but sometimes fail to live up to those standards themselves, contrary to the common philosophical and folk perception.

Discussion

In this study, we developed a laboratory paradigm to precisely quantify hypocritical blame, in which people blame others for committing the same transgressions they committed themselves (Todd, 2019). At the core of this operationalization of hypocrisy is a discrepancy between participants’ moral judgments and their behaviors in a moral decision-making task. Therefore, we measured participants’ choices in an incentivized moral decision-making task that they believed had real impact on their own monetary payoff and painful electric shocks delivered to a receiver. We then compared those choices with moral judgments they made a week later of other people in the same choice context. By comparing participants’ judgments with their own behaviors, we were able to quantify the degree to which they judge other people more harshly for making the same choices they themselves made previously (i.e., hypocritical blame).
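
To make this operationalization concrete, here is a minimal sketch (our construction, not the authors' analysis code) of one way to score hypocritical blame as the blame a participant assigns to others for harmful choices the participant also made; the field names and the 0-10 rating scale are hypothetical.

```python
# Minimal sketch (not the authors' code): score hypocritical blame as the blame a
# participant assigns to others for choices the participant also made themselves.
# Field names and the 0-10 blame scale are hypothetical.
from statistics import mean

def hypocritical_blame_score(trials):
    """trials: list of dicts with
         'own_chose_harm' -- True if the participant took the profitable/harmful option here
         'blame_of_other' -- 0-10 blame rating of another person making that same choice
    Returns the mean blame assigned to others for choices the participant also made."""
    matched = [t["blame_of_other"] for t in trials if t["own_chose_harm"]]
    return mean(matched) if matched else 0.0

# Hypothetical participant: chose to harm on two matched trials, yet blames others harshly.
trials = [
    {"own_chose_harm": True,  "blame_of_other": 7},
    {"own_chose_harm": True,  "blame_of_other": 8},
    {"own_chose_harm": False, "blame_of_other": 3},
]
print(hypocritical_blame_score(trials))   # 7.5 -> high blame for choices they themselves made
```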

Thursday, December 8, 2022

‘A lottery ticket, not a guarantee’: fertility experts on the rise of egg freezing

Hannah Devlin
The Guardian
Originally posted 11 NOV 22

Here is an excerpt:

This means that a woman who freezes eggs at the age of 30 boosts her chances of successful IVF at the age of 40. But, according to Dr Zeynep Gurtin, a lecturer in women’s health at UCL, this concept has led to a false narrative that if you freeze your eggs “you’ll be fine”. “A lot of people who freeze their eggs don’t get pregnant,” Gurtin said.

First, only a fraction opt to use the eggs down the line – some get pregnant without IVF, others decide not to for a range of reasons. For those who go ahead, HFEA figures show that, as an average across all age groups, just 2% of all thawed eggs ended up as pregnancies and 0.7% resulted in live births in 2018. For each IVF cycle, this gives a 27% chance on average of a birth for those who froze their eggs before the age of 35 and a 13% chance for those who froze their eggs after this age. The most common age for egg freezing in the UK is 38 years old.
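
As a purely illustrative piece of arithmetic (our assumption, not a calculation from the article), the gap between a small per-egg rate and a larger per-cycle chance comes from thawing many eggs in a cycle. The sketch below assumes independent, identical odds per egg, which real cycles do not satisfy, and uses the all-ages 0.7% figure, so it will not reproduce the age-specific 27% exactly.

```python
# Illustrative arithmetic only (our assumption, not the article's calculation):
# per-cycle chance of at least one live birth from a per-egg rate, assuming
# (unrealistically) independent and identical odds for each thawed egg.
def per_cycle_probability(per_egg_rate: float, eggs_thawed: int) -> float:
    """P(at least one live birth) = 1 - (1 - p)^n under independence."""
    return 1 - (1 - per_egg_rate) ** eggs_thawed

for n in (10, 20, 40):                                     # hypothetical numbers of thawed eggs
    print(n, round(per_cycle_probability(0.007, n), 3))    # ~0.068, ~0.131, ~0.245
```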

A recent analysis by the Nuffield Council on Bioethics found women often felt frustrated at having received insufficient information on success rates, but also reported feeling relief and a sense of empowerment.

Egg freezing, Gurtin suggested, should be viewed as “having a lottery ticket rather than having an insurance policy”.

“An insurance policy suggests you’ll definitely get a payout,” she said. “You’re just increasing your chances.”

As lottery tickets go, it is an expensive one. The average cost of having eggs collected and frozen is £3,350, with additional £500-£1,500 costs for medication and an ongoing expense of £125-£350 a year for storage. And clinics are not always upfront about the full extent of costs.

“In many cases, you’re going to spend a third more than the advertised price – and you’re spending that money for something that’s not an immediate benefit to you,” said Gurtin. “It’s a big gamble.”

“When people talk about egg freezing revolutionising women’s lives, you have to ask: how many can afford it?” she added.

Travelling abroad, where treatments may be cheaper, is an option but can be logistically problematic. “When it comes to repatriating eggs, sperm and embryos, it is possible, but it’s not always that straightforward,” said Sarris. “You need to follow a process, you don’t just send them with DHL.”

Wednesday, December 7, 2022

Corrupt third parties undermine trust and prosocial behaviour between people.

Spadaro, G., Molho, C., Van Prooijen, JW. et al.
Nat Hum Behav (2022).

Abstract

Corruption is a pervasive phenomenon that affects the quality of institutions, undermines economic growth and exacerbates inequalities around the globe. Here we tested whether perceiving representatives of institutions as corrupt undermines trust and subsequent prosocial behaviour among strangers. We developed an experimental game paradigm modelling representatives as third-party punishers to manipulate or assess corruption and examine its relationship with trust and prosociality (trust behaviour, cooperation and generosity). In a sequential dyadic die-rolling task, the participants observed the dishonest behaviour of a target who would subsequently serve as a third-party punisher in a trust game (Study 1a, N = 540), in a prisoner’s dilemma (Study 1b, N = 503) and in dictator games (Studies 2–4, N = 765, pre-registered). Across these five studies, perceiving a third party as corrupt undermined interpersonal trust and, in turn, prosocial behaviour. These findings contribute to our understanding of the critical role that representatives of institutions play in shaping cooperative relationships in modern societies.

Discussion

Considerable research in various scientific disciplines has addressed the intricate associations between the degree to which institutions are corrupt and the extent to which people trust one another and build cooperative relations. One perspective suggests that the success of institutions is rooted in interpersonal processes such as trust. Another perspective assumes a top-down process, suggesting that the functioning of institutions serves as a basis to promote and sustain interpersonal trust. However, as far as we know, this latter claim has not been tested in experimental settings.

In the present research, we provided an initial test of a top-down perspective, examining the role of a corrupt versus honest institutional representative, here operationalized as a third-party observer with the power to regulate interaction through punishment. To do so, we revisited the sequential dyadic die-rolling paradigm where the participants could learn whether the third party was corrupt or not via second-hand learning or via first-hand experience. Across five studies (N = 1,808), we found support for the central hypothesis guiding this research: perceiving third parties as corrupt is associated with a decline in interpersonal trust, and subsequent prosocial behaviour, towards strangers. This result was robust across a broad set of economic games and designs.
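
To convey the structure of the paradigm, here is a toy simulation sketch (ours, not the authors' materials): a third party's die reports are observed, excess high reports are read as corruption, and the subsequent trust-game transfer shrinks accordingly. All payoffs, probabilities, and the transfer rule are hypothetical.

```python
# Toy sketch (not the authors' materials): observe a third party's die reports,
# read systematic over-reporting as corruption, and reduce trust-game transfers.
import random

def observed_die_reports(n_rounds=20, p_overreport=0.6, rng=random.Random(1)):
    """Simulate die reports; a corrupt reporter inflates low rolls to 6."""
    reports = []
    for _ in range(n_rounds):
        true_roll = rng.randint(1, 6)
        reported = 6 if (true_roll < 6 and rng.random() < p_overreport) else true_roll
        reports.append(reported)
    return reports

def perceived_corruption(reports):
    """A fair die averages 3.5; report means far above that look dishonest (scaled ~0..1)."""
    return max(0.0, (sum(reports) / len(reports) - 3.5) / 2.5)

def trust_transfer(endowment, corruption, sensitivity=1.0):
    """Hypothetical rule: transfer less the more corrupt the third party seems."""
    return round(endowment * max(0.0, 1.0 - sensitivity * corruption), 2)

reports = observed_die_reports()
c = perceived_corruption(reports)
print(f"perceived corruption: {c:.2f}; transfer out of 10: {trust_transfer(10, c)}")
```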

Tuesday, December 6, 2022

Countering cognitive biases on experts’ objectivity in court

Kathryn A. LaFortune
Monitor on Psychology
Vol. 53 No. 6
Print version: page 47

Mental health professionals’ opinions can be extremely influential in legal proceedings. Yet, current research is inconclusive about the effects of various cognitive biases on experts’ objectivity when making forensic mental health judgments and which biases most influence these decisions, according to a 2022 study in Law and Human Behavior by psychologists Tess Neal, Pascal Lienert, Emily Denne, and Jay Singh (Vol. 46, No. 2, 2022). The study also pointed to the need for more research on which debiasing strategies effectively counter bias in forensic mental health decisions and whether there should be specific policies and procedures to address these unique aspects of forensic work in mental health.

In the study, researchers conducted a systematic review of the relevant literature in forensic mental health decision-making. In most of the available studies reviewed, “bias” was not explicitly defined in the context of forensic mental health judgments. Their study noted that only a few forms of bias have been explored as they pertain specifically to forensic mental health professionals’ opinions. Adversarial allegiance, confirmation bias, hindsight bias, and bias blind spot have not been rigorously studied for potential negative effects on forensic mental health expert opinions across different contexts.

The importance of addressing these concerns is heightened when considering APA’s Ethics Code provisions that require psychologists to decline a professional role if bias may diminish their objectivity (see Ethical Principles of Psychologists and Code of Conduct, Section 3.06). Similarly, the Specialty Guidelines for Forensic Psychologists advise forensic practitioners to decline participation in cases when potential biases may impact their impartiality, or to take steps to correct or limit the effects of the bias (Section 2.07). That said, unlike in other professions where tasks are often repetitive, decision-making in forensic psychology is shaped by the unique nature of each referral that forensic psychologists receive, making it even more difficult to expect them to consider and correct how their culture, attitudes, values, beliefs, and biases might affect their work. They exercise greater subjectivity in selecting assessment tools from a large array of available tests, none of which are uniformly adopted across cases, in part because of the wide range of questions experts often must answer to assist the court and the current lack of standardized methods. Nor do experts typically receive immediate feedback on their opinions. This study also noted that the only debiasing strategy shown to be effective for forensic psychologists was to “consider the opposite,” in which experts ask themselves why their opinions might be wrong and what alternatives they may have considered.

Monday, December 5, 2022

Social isolation and the brain in the pandemic era

Bzdok, D., and Dunbar, R.
Nat Hum Behav 6, 1333–1343 (2022).
https://doi.org/10.1038/s41562-022-01453-0

Abstract

Intense sociality has been a catalyst for human culture and civilization, and our social relationships at a personal level play a pivotal role in our health and well-being. These relationships are, however, sensitive to the time we invest in them. To understand how and why this should be, we first outline the evolutionary background in primate sociality from which our human social world has emerged. We then review defining features of that human sociality, putting forward a framework within which one can understand the consequences of mass social isolation during the COVID-19 pandemic, including mental health deterioration, stress, sleep disturbance and substance misuse. We outline recent research on the neural basis of prolonged social isolation, highlighting especially higher-order neural circuits such as the default mode network. Our survey of studies covers the negative effects of prolonged social deprivation and the multifaceted drivers of day-to-day pandemic experiences.

Conclusion

The human social world is deeply rooted in our primate ancestry. This social world is, however, extremely sensitive to the time we invest in it. Enforced social isolation can easily destabilize its delicate equilibrium. Many of the psychological sequelae of COVID-19 lockdowns are readily understood as resulting from the dislocation of these deeply rooted social processes. Indeed, many of these findings could have been anticipated long before the COVID-19 pandemic. For example, almost one in ten Europeans admitted never meeting friends or family outside of their own household in the course of an entire year, with direct consequences for their psychological and physical health. Solitary living made up >50% of households in a growing number of metropolitan cities worldwide and has long been thought to be the cause of increasing levels of depression and psychological dystopia. Indeed, aversive feelings of social isolation probably serve as a biological warning signal that alerts individuals to improve their social relationships.

Three key points emerge from our present assessment. One is that COVID-19 and associated public health restrictions to curb the spread of the virus are likely to have demonstrable mental health and psychosocial ramifications for years to come. This will inevitably place a significant burden on our health systems and societies. The impact may, however, be largely restricted to specific population strata. Older people, for example, are likely to face disproportionately adverse consequences. Worryingly, prolonged social isolation seems to invoke changes in the capacity to visualize internally centred thoughts, especially in younger sub-populations. This may presage a switch from an outward to an inward focus that may exacerbate the experience of social isolation in susceptible individuals. The longer-term implications of this are, however, yet to be determined. Second, the experience of undergoing social isolation is known to have significant effects on the structure and function of the hippocampus and default network, long recognized as a primary neural pathway implicated in the pathophysiology of dementia and other major neurodegenerative diseases as well as in effective social function. The fact that these same brain regions turn up in the neuroanatomical consequences of COVID-19 infection is concerning. Our third key point is that social determinants that condition inequality in our societies have strong impacts on lived day-to-day pandemic experiences. This is highlighted by the negative outcomes from COVID-19 for families of lower socio-economic status, single-parent households, and those with racial and ethnic minority backgrounds.

As a note of caution, in our judgement, few datasets or methodological tools exist today to definitively establish causal directionality in many of the population effects we have surveyed in this review. For example, many of the correlative links do not allow us to infer whether loneliness directly causes depression and anxiety, as opposed to already depressed, anxious individuals being more prone to developing loneliness in times of adversity. Similarly, none of the reviewed findings can be used to tease apart whether changes in psychopathology during periods of mass social isolation are the chicken or the egg of the many biological manifestations. To fill knowledge gaps on mediating mechanisms for theoretical models, future research requires carefully designed and controlled longitudinal before-versus-after COVID-19 population investigations.

Sunday, December 4, 2022

Risk of Suicide After Dementia Diagnosis

Alothman D, Card T, et al.
JAMA Neurology
Published online October 03, 2022.

Abstract

Importance  Patients with dementia may be at an increased suicide risk. Identifying groups at greatest risk of suicide would support targeted risk reduction efforts by clinical dementia services.

Objectives  To examine the association between a dementia diagnosis and suicide risk in the general population and to identify high-risk subgroups.

Design, Setting, and Participants  This was a population-based case-control study in England conducted from January 1, 2001, through December 31, 2019. Data were obtained from multiple linked electronic records from primary care, secondary care, and the Office for National Statistics. Included participants were all patients 15 years or older and registered in the Office for National Statistics in England with a death coded as suicide or open verdict from 2001 to 2019. Up to 40 live control participants per suicide case were randomly matched on primary care practice and suicide date.

Exposures  Patients with codes referring to a dementia diagnosis were identified in primary care and secondary care databases.

Main Outcomes and Measures  Odds ratios (ORs) were estimated using conditional logistic regression and adjusted for sex and age at suicide/index date.

Conclusions and Relevance  Diagnostic and management services for dementia, in both primary and secondary care settings, should target suicide risk assessment to the identified high-risk groups.
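
The estimation described under Main Outcomes and Measures can be sketched as follows, using synthetic matched sets rather than the study's data; the variable names and exposure rates are invented, and the snippet assumes statsmodels' ConditionalLogit is available for conditional logistic regression.

```python
# Illustration only (synthetic data, not the study's records or code): an adjusted
# odds ratio from matched case-control data via conditional logistic regression.
# Assumes statsmodels >= 0.10 for ConditionalLogit; all names and rates are made up.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
rows = []
for matched_set in range(300):                 # each set: 1 suicide case + 4 matched controls
    for is_case in (1, 0, 0, 0, 0):
        rows.append({
            "matched_set": matched_set,
            "suicide": is_case,
            "dementia": rng.binomial(1, 0.10 if is_case else 0.05),  # assumed exposure rates
            "age": rng.normal(75, 8),
            "male": rng.binomial(1, 0.5),
        })
df = pd.DataFrame(rows)

# Conditioning on the matched set respects the matching; exponentiated coefficients are ORs.
model = ConditionalLogit(df["suicide"], df[["dementia", "age", "male"]],
                         groups=df["matched_set"])
print(np.exp(model.fit().params))              # dementia OR expected to be roughly 2 here
```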


Key Points

Question  Is there an association between dementia diagnosis and a higher risk of suicide?

Findings  In this nationally representative case-control study including 594,674 persons in England from 2001 through 2019, dementia was found to be associated with increased risk of suicide in specific patient subgroups: those diagnosed before age 65 years (particularly in the 3-month postdiagnostic period), those in the first 3 months after diagnosis, and those with known psychiatric comorbidities.

Meaning  Given the current efforts to improve rates of dementia diagnosis, these findings emphasize the importance of concurrent implementation of suicide risk assessment for the identified high-risk groups.

Saturday, December 3, 2022

Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence

Nussberger, A. M., Luo, L., Celis, L. E., & Crockett, M. J. (2022).
Nature Communications, 13(1), 5821.

Abstract

As Artificial Intelligence (AI) proliferates across important social institutions, many of the most powerful AI systems available are difficult to interpret for end-users and engineers alike. Here, we sought to characterize public attitudes towards AI interpretability. Across seven studies (N = 2475), we demonstrate robust and positive attitudes towards interpretable AI among non-experts that generalize across a variety of real-world applications and follow predictable patterns. Participants value interpretability positively across different levels of AI autonomy and accuracy, and rate interpretability as more important for AI decisions involving high stakes and scarce resources. Crucially, when AI interpretability trades off against AI accuracy, participants prioritize accuracy over interpretability under the same conditions driving positive attitudes towards interpretability in the first place: amidst high stakes and scarce resources. These attitudes could drive a proliferation of AI systems making high-impact ethical decisions that are difficult to explain and understand.


Discussion

In recent years, academics, policymakers, and developers have debated whether interpretability is a fundamental prerequisite for trust in AI systems. However, it remains unknown whether non-experts, who may ultimately comprise a significant portion of end-users for AI applications, actually care about AI interpretability, and if so, under what conditions. Here, we characterise public attitudes towards interpretability in AI across seven studies. Our data demonstrate that people consider interpretability in AI to be important. Even though these positive attitudes generalise across a host of AI applications and show systematic patterns of variation, they also seem to be capricious. While people valued interpretability as similarly important for AI systems that directly implemented decisions and AI systems recommending a course of action to a human (Study 1A), they valued interpretability more for applications involving higher (relative to lower) stakes and for applications determining access to scarce (relative to abundant) resources (Studies 1A-C, Study 2). And while participants valued AI interpretability across all levels of AI accuracy when considering the two attributes independently (Study 3A), they sacrificed interpretability for accuracy when the two attributes traded off against one another (Studies 3B–C). Furthermore, participants favoured accuracy over interpretability under the same conditions that drove importance ratings of interpretability in the first place: when stakes are high and resources are scarce.

Our findings highlight that high-stakes applications, such as medical diagnosis, will generally be met with enhanced requirements towards AI interpretability. Notably, this sensitivity to stakes parallels magnitude-sensitivity as a foundational process in the cognitive appraisal of outcomes. The impact of stakes on attitudes towards interpretability was apparent not only in our experiments that manipulated stakes within a given AI application, but also in absolute and relative levels of participants’ valuation of interpretability across applications: take, for instance, ‘hurricane first aid’ and ‘vaccine allocation’ outperforming ‘hiring decisions’, ‘insurance pricing’, and ‘standby seat prioritizing’. Conceivably, this ordering would also emerge if we ranked the applications according to the scope of auditing and control measures imposed on human executives, reflecting interpretability’s essential capacity for verifying appropriate and fair decision processes.

Friday, December 2, 2022

Rational use of cognitive resources in human planning

Callaway, F., van Opheusden, B., Gul, S. et al. 
Nat Hum Behav 6, 1112–1125 (2022).
https://doi.org/10.1038/s41562-022-01332-8

Abstract

Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have long-reaching consequences is thus a long-standing question in cognitive science. To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near optimal under some circumstances but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.

Discussion

In this paper, we proposed a rational model of resource-constrained planning and compared the predictions of the model to human behaviour in a process-tracing paradigm. Our results suggest that human planning strategies are highly adaptive in ways that previous models cannot capture. In Experiment 1, we found that the optimal planning strategy in a generic environment resembled best-first search with a relative stopping rule. Participant behaviour was also consistent with such a strategy. However, the optimal planning strategy depends on the structure of the environment. Thus, in Experiments 2 and 3, we constructed six environments in which the optimal strategy resembled different classical search algorithms (best-first, breadth-first, depth-first and backward search). In each case, participant behaviour matched the environment-appropriate algorithm, as the optimal model predicted.
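
To make "best-first search with a relative stopping rule" concrete, here is a toy sketch (ours, not the authors' model or code): nodes are expanded in order of accumulated path value, and planning stops as soon as the best root action leads its nearest competitor by a fixed margin. The tree, rewards, and margin below are hypothetical.

```python
# Toy sketch (not the authors' model): best-first expansion of a small reward tree
# with a relative stopping rule. Tree, rewards, and margin are hypothetical.
import heapq

def best_first_plan(children, reward, root, margin=3.0, max_expansions=100):
    """children: dict node -> list of child nodes; reward: dict node -> immediate reward.
    Expand nodes in order of accumulated path value; stop once the most promising
    root action leads the runner-up by `margin`."""
    best = {}        # root action -> best path value found so far
    frontier = []    # max-heap via negated values; ties broken by insertion order
    counter = 0
    for action in children.get(root, []):
        best[action] = reward[action]
        heapq.heappush(frontier, (-reward[action], counter, action, action))
        counter += 1
    expansions = 0
    while frontier and expansions < max_expansions:
        leader, *rest = sorted(best.values(), reverse=True)
        if not rest or leader - rest[0] >= margin:
            break    # relative stopping rule: one action is already clearly best
        neg_value, _, node, action = heapq.heappop(frontier)
        expansions += 1
        for child in children.get(node, []):
            value = -neg_value + reward[child]
            best[action] = max(best[action], value)
            heapq.heappush(frontier, (-value, counter, child, action))
            counter += 1
    return max(best, key=best.get), expansions

# Hypothetical two-step tree: root 0 offers actions 1 and 2, each with two continuations.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
reward = {1: 2, 2: 1, 3: 5, 4: -3, 5: 1, 6: 0}
print(best_first_plan(children, reward, 0))    # -> (1, 1): commit to action 1 after one expansion
```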

The idea that people use heuristics that are jointly adapted to environmental structure and computational limitations is not new. First popularized by Herbert Simon, it has more recently been championed in ecological rationality, which generally takes the approach of identifying computationally frugal heuristics that make accurate choices in certain environments. However, while ecological rationality explicitly rejects the notion of optimality, our approach embraces it, identifying heuristics that maximize an objective function that includes both external utility and internal cognitive cost. Supporting our approach, we found that the optimal model explained human planning behaviour better than flexible combinations of previously proposed planning heuristics in seven of the eight environments we considered (Supplementary Table 1).
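
Stated compactly (our notation, not the authors'), the objective function described above selects a planning strategy that trades external utility off against internal cognitive cost:

```latex
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}\!\left[\,U\!\left(a_{\pi}\right)\right] \;-\; \lambda\,\mathbb{E}\!\left[\,C(\pi)\right]
```

where a_π is the action eventually chosen under planning strategy π, U is its utility, C(π) is the amount of mental computation performed, and λ converts cognitive cost into utility units; setting λ = 0 would recover unconstrained, exhaustive planning.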

Thursday, December 1, 2022

Can Risk Aversion Survive the Long Run?

Hayden Wilkinson
The Philosophical Quarterly, 2022

Abstract

Can it be rational to be risk-averse? It seems plausible that the answer is yes—that normative decision theory should accommodate risk aversion. But there is a seemingly compelling class of arguments against our most promising methods of doing so. These long-run arguments point out that, in practice, each decision an agent makes is just one in a very long sequence of such decisions. Given this form of dynamic choice situation and the (Strong) Law of Large Numbers, they conclude that those theories which accommodate risk aversion end up delivering the same verdicts as risk-neutral theories in nearly all practical cases. If so, why not just accept a simpler, risk-neutral theory? The resulting practical verdicts seem to be much the same. In this paper, I show that these arguments do not in fact condemn those risk-aversion-accommodating theories. Risk aversion can indeed survive in the long run.

Conclusion

Where does this leave our most promising risk-aversion-accommodating theories? Do they offer no true alternative to expected value theory, as long-run arguments have suggested?

Such theories do offer an alternative. As shown here, they disagree with expected value theory in various classes of cases: whenever agents compare options with equal expected value of which one is riskier; whenever any given pair of options is compared by agents who are sufficiently risk-averse; when even moderately risk-averse agents face decisions with particularly high stakes; and, if we reject resolute choice, when an agent compares options available to them near the end of their life. And these will include moral decisions that it is especially important for our theory to get right: high-stakes decisions such as, perhaps, a political leader deciding whether to start a war or an elderly philanthropist deciding where to bequeath their riches. It will not do to simply follow the verdicts of expected value theory in these cases if instead some risk-aversion-accommodating theory is true; disagreement even just in these cases is enough to warrant retaining such theories.

Admittedly, it is true that these theories agree with expected value theory in many cases. We now have formal proof of that. Take any decision between two options (with unequal expected value). If an agent has a suitable risk attitude and faces sufficiently many other decisions in their life, then REU theory and EU theory will agree with expected value theory on the verdict.
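
For readers unfamiliar with the theory named here, risk-weighted expected utility (REU) is standardly formulated (following Buchak; our rendering is only a sketch) for a gamble whose outcomes are ordered from worst to best, x_1, …, x_n, with probabilities p_1, …, p_n:

```latex
\mathrm{REU} \;=\; u(x_1) \;+\; \sum_{i=2}^{n} r\!\left(\sum_{j=i}^{n} p_j\right)\left[\,u(x_i) - u(x_{i-1})\,\right]
```

where u is a utility function and r is a risk function with r(0) = 0 and r(1) = 1. Taking r(p) = p recovers ordinary expected utility, while a convex r (for example r(p) = p²) discounts improvements that arrive only with low probability, which is how the theory accommodates risk aversion.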