Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, August 29, 2022

Debiasing System 1: Training favours logical over stereotypical intuiting

Boissin, E., Caparos, S., Voudouri, A., & De Neys, W.
Judgment and Decision Making, Vol. 17, No. 4, 
July 2022, pp. 646–690

Abstract

Whereas people’s reasoning is often biased by intuitive stereotypical associations, recent debiasing studies suggest that performance can be boosted by short training interventions that stress the underlying problem logic. The nature of this training effect remains unclear. Does training help participants correct erroneous stereotypical intuitions through deliberation? Or does it help them develop correct intuitions? We addressed this issue in four studies with base-rate neglect and conjunction fallacy problems. We used a two-response paradigm in which participants first gave an initial intuitive response, under time pressure and cognitive load, and then gave a final response after deliberation. Studies 1A and 2A showed that training boosted performance and did so as early as the intuitive stage. After training, most participants solved the problems correctly from the outset and no longer needed to correct an initial incorrect answer through deliberation. Studies 1B and 2B indicated that this sound intuiting persisted over at least two months. The findings confirm that a short training can debias reasoning at an intuitive “System 1” stage and get reasoners to favour logical over stereotypical intuitions.
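For readers new to these tasks, the underlying “problem logic” that the training interventions stress can be made concrete. Below is a minimal sketch in Python; the specific numbers are hypothetical illustrations, not stimuli from the study.

```python
# Hedged illustration of the logic behind the two problem types used in
# the training studies. The numbers are hypothetical, not from the paper.

def posterior(prior_a, lik_a, prior_b, lik_b):
    """P(A | description) via Bayes' rule for two exclusive hypotheses."""
    return (prior_a * lik_a) / (prior_a * lik_a + prior_b * lik_b)

# Base-rate problem: a sample of 1,000 people contains 5 engineers and
# 995 lawyers; one person, drawn at random, fits the engineer stereotype.
# Even if the description is four times as likely for an engineer,
# the extreme base rates dominate the stereotype:
p_engineer = posterior(prior_a=5 / 1000, lik_a=0.8,
                       prior_b=995 / 1000, lik_b=0.2)
print(f"P(engineer | description) = {p_engineer:.3f}")  # ~0.020

# Conjunction problem: for any events A and B, P(A and B) <= P(A), so
# "bank teller AND feminist" can never be more probable than "bank
# teller" alone, however well the conjunction fits the stereotype.
p_teller, p_feminist_given_teller = 0.10, 0.30
assert p_teller * p_feminist_given_teller <= p_teller
```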

From the General Discussion

Traditionally, it is assumed in the literature that debiasing interventions work by boosting deliberation and getting people to better correct erroneous intuitions (Lilienfeld et al., 2009; Milkman et al., 2009). However, in many daily-life situations reasoners will simply not have the time (or resources) to engage in costly deliberation. Hence, if our interventions only taught participants to deliberate more, they would be less than optimal (Boissin et al., 2021). As in most educational settings, we ultimately do not only want people to correct erroneous intuitions but to avoid biased intuitions altogether (Evans, 2019; Milkman et al., 2009; Reyna et al., 2015; Stanovich, 2018). The present study indicates that debiasing interventions in which the problem logic is briefly explained have such potential.

To avoid misinterpretation, it is important to highlight that our training did not lead to transfer effects. The training should thus not be conceived as a panacea that magically tunes the whole of System 1 in a single step. The training results generalized across base-rate and conjunction tasks, with overall similar effects for the two types of tasks, showing that participants can be trained to intuit correctly on different types of reasoning problems. However, training on base-rate problems did not help to solve the conjunction fallacy or other unrelated problems, and vice versa. The training effects were task specific. Reasoners did not learn to intuit (or deliberate) better in general; they got better at the very specific problem they were trained on. This fits with the finding that existing debiasing or cognitive training programs are often task or domain specific (Lilienfeld et al., 2009; Sala & Gobet, 2019; but also see Morewedge et al., 2015; Trouche et al., 2014). Our key finding is that this task-specific training can operate at the intuitive level and is persistent. When we talk about “System 1 debiasing”, it should be conceived at this task-specific level.

Sunday, August 28, 2022

Dr. Oz Shouldn’t Be a Senator—or a Doctor

Timothy Caulfield
Scientific American
Originally posted 15 DEC 21

While holding a medical license, Mehmet Oz, widely known as Dr. Oz, has long pushed misleading, science-free and unproven alternative therapies such as homeopathy, as well as fad diets, detoxes and cleanses. Some of these things have been potentially harmful, including hydroxychloroquine, which he once touted as beneficial in the treatment or prevention of COVID. This assertion has been thoroughly debunked.

He’s built a tremendous following around his lucrative but evidence-free advice. So, are we surprised that Oz is running as a Republican for the U.S. Senate in Pennsylvania? No, we are not. Misinformation-spouting celebrities seem to be a GOP favorite. This move is very on brand for both Oz and the Republican Party.

His candidacy is a reminder that tolerating and/or enabling celebrity pseudoscience (I’m thinking of you, Oprah Winfrey!) can have serious and enduring consequences. Much of Oz’s advice was bunk before the pandemic, it is bunk now, and there is no reason to assume it won’t be bunk after—even if he becomes Senator Oz. Indeed, as Senator Oz, it’s all but guaranteed he would bring pseudoscience to the table when crafting and voting on legislation that affects the health and welfare of Americans.

To someone who researches the spread of health misinformation, Oz’s candidacy is deeply grating in that “of course he is” kind of way. But it is also an opportunity to highlight several realities about pseudoscience, celebrity physicians and the current regulatory environment that allows people like him to continue to call themselves doctor.

Before the pandemic I often heard people argue that the wellness woo coming from celebrities like Gwyneth Paltrow, Tom Brady and Oz was mostly harmless noise. If people want to waste their money on ridiculous vagina eggs, bogus diets or unproven alternative remedies, why should we care? Buyer beware, a fool and their money, a sucker is born every minute, etc., etc.

But we know, now more than ever, that pop culture can—for better or worse—have a significant impact on health beliefs and behaviors. Indeed, one need only consider the degree to which Jenny McCarthy gave life to the vile claim that autism is linked to vaccination. Celebrity figures like podcast host Joe Rogan and football player Aaron Rodgers have greatly added to the chaotic information regarding COVID-19 by magnifying unsupported claims.

Saturday, August 27, 2022

Counterfactuals and the logic of causal selection

Quillien, T., & Lucas, C. G. (2022, June 13)
https://doi.org/10.31234/osf.io/ts76y

Abstract

Everything that happens has a multitude of causes, but people make causal judgments effortlessly. How do people select one particular cause (e.g. the lightning bolt that set the forest ablaze) out of the set of factors that contributed to the event (the oxygen in the air, the dry weather...)? Cognitive scientists have suggested that people make causal judgments about an event by simulating alternative ways things could have happened. We argue that this counterfactual theory explains many features of human causal intuitions, given two simple assumptions. First, people tend to imagine counterfactual possibilities that are both a priori likely and similar to what actually happened. Second, people judge that a factor C caused effect E if C and E are highly correlated across these counterfactual possibilities. In a reanalysis of existing empirical data, and a set of new experiments, we find that this theory uniquely accounts for people’s causal intuitions.

From the General Discussion

Judgments of causation are closely related to assignments of blame, praise, and moral responsibility. For instance, when two cars crash at an intersection, we say that the accident was caused by the driver who went through a red light (not by the driver who went through a green light; Knobe and Fraser, 2008; Icard et al., 2017; Hitchcock and Knobe, 2009; Roxborough and Cumby, 2009; Alicke, 1992; Willemsen and Kirfel, 2019); and we also blame that driver for the accident. According to some theorists, the fact that we judge the norm-violator to be blameworthy or morally responsible explains why we judge that he was the cause of the accident. This might be because our motivation to blame distorts our causal judgment (Alicke et al., 2011), because our intuitive concept of causation is inherently normative (Sytsma, 2021), or because of pragmatic confounds in the experimental tasks that probe the effect of moral violations on causal judgment (Samland & Waldmann, 2016).

Under these accounts, the explanation for why moral considerations affect causal judgment should be completely different from the explanation for why other factors (e.g., prior probabilities, what happened in the actual world, the causal structure of the situation) affect causal judgment. We favor a more parsimonious account: the counterfactual approach to causal judgment (of which our theory is one instantiation) provides a unifying explanation for the influence of both moral and non-moral considerations on causal judgment (Hitchcock & Knobe, 2009).
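The counterfactual account summarized in the abstract can be made concrete with a small simulation. The sketch below is a hedged illustration of the general idea, not the authors’ model code: counterfactual possibilities are sampled from a mixture of what actually happened and what is a priori likely, and a factor is selected as “the cause” to the extent that it correlates with the outcome across those samples. The stability parameter and the priors are hypothetical.

```python
# Hedged sketch of a counterfactual-sampling account of causal selection.
# Not the authors' code; sampling weights and priors are hypothetical.
import random

random.seed(0)

def sample(actual, prior, stability=0.5):
    """Resample one factor: keep the actual value with prob `stability`,
    otherwise redraw it from its prior probability."""
    return actual if random.random() < stability else (random.random() < prior)

# Actual world: driver A ran a red light (a priori unlikely, prior 0.1),
# driver B crossed on green (a priori likely, prior 0.9), and they crashed.
N = 100_000
a_vals, b_vals, e_vals = [], [], []
for _ in range(N):
    a = sample(actual=True, prior=0.1)   # A enters the intersection
    b = sample(actual=True, prior=0.9)   # B enters the intersection
    a_vals.append(a)
    b_vals.append(b)
    e_vals.append(a and b)               # crash requires both to enter

def corr(xs, ys):
    """Pearson correlation for two boolean sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(f"corr(A, crash) = {corr(a_vals, e_vals):.2f}")  # ~0.95
print(f"corr(B, crash) = {corr(b_vals, e_vals):.2f}")  # ~0.24
```

Because the green-light driver crosses in nearly every sampled counterfactual, his behavior barely covaries with the crash; the norm-violating red-light driver accounts for most of the counterfactual variation, matching the intuition described above.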

Finally, many formal theories of causal reasoning aim to model how people make causal inferences (e.g. Cheng, 1997; Griffiths & Tenenbaum, 2005; Lucas & Griffiths, 2010; Bramley et al., 2017; Jenkins & Ward, 1965). These theories are not concerned with the problem of causal selection, the focus of the present paper. It is in principle possible that people use the same algorithms they use for causal inference when they engage in causal selection, but in practice models of causal inference have not been able to predict how people select causes (see Quillien and Barlev, 2022; Morris et al., 2019).

Friday, August 26, 2022

The Selective Laziness of Reasoning

Trouche, E., Johansson, P., Hall, L., & Mercier, H. 
(2016). Cognitive Science, 40(8), 2122–2136.
https://doi.org/10.1111/cogs.12303

Abstract

Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this "selective laziness," we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.

From the Discussion

These experiments provide a very clear demonstration of the selective laziness of reasoning. When reasoning produces arguments, it mostly produces post-hoc justifications for intuitive answers, and it is not particularly critical of one’s arguments for invalid answers. By contrast, when reasoning evaluates the very same arguments as if they were someone else’s, it proves both critical and discriminating.

The present results are analogous to those observed in the belief bias literature (e.g., Evans et al., 1983). When participants evaluate an argument whose conclusion they agree with, they tend to be neither critical (they accept most arguments) nor discriminating (they are not much more likely to reject invalid than valid arguments). By contrast, when they evaluate arguments whose conclusion they disagree with, they tend to be more critical (they reject more arguments) and more discriminating (they are much more likely to reject invalid than valid arguments). The similarity is easily explained by the fact that when reasoning produces arguments for one’s position, it is automatically in a situation in which it agrees with the argument’s conclusion.

Selective laziness can be interpreted in light of the argumentative theory of reasoning (Mercier & Sperber, 2011). This theory hypothesizes that reasoning is best employed in a dialogical context. In such contexts, opening a discussion with a relatively weak argument is often sensible: It saves the trouble of computing the best way to convince a specific audience, and if the argument proves unconvincing, its flaws can be addressed in the back and forth of argumentation. Indeed, the interlocutor typically provides counter-arguments that help the speaker refine her arguments in appropriate ways (for an extended argument, see Mercier, Bonnier, & Trouche, unpublished data). As a result, the laziness of argument production might not be a flaw but an adaptive feature of reasoning. By contrast, people should properly evaluate other people’s arguments, so as not to accept misleading information—hence the selectivity of reasoning’s laziness.


In short: We make better judges for others, and better defense attorneys for ourselves (paraphrasing an old saying).

Thursday, August 25, 2022

South Dakota Governor Kristi Noem may have "engaged in misconduct," ethics board says

CBS News
Originally posted 23 AUG 22

A South Dakota ethics board on Monday said it found sufficient information that Gov. Kristi Noem may have "engaged in misconduct" when she intervened in her daughter's application for a real estate appraiser license, and it referred a separate complaint over her state airplane use to the state's attorney general for investigation.

The three retired judges on the Government Accountability Board determined that "appropriate action" could be taken against Noem for her role in her daughter's appraiser licensure, though it didn't specify the action.

The board's moves potentially escalate the ramifications of investigations into Noem. The Republican governor faces reelection this year and has also positioned herself as an aspirant to the White House in 2024. She is under scrutiny from the board after Jason Ravnsborg, the state's former Republican attorney general, filed complaints that stemmed from media reports on Noem's actions in office. She has denied any wrongdoing.

After meeting in a closed-door session for one hour Monday, the board voted unanimously to invoke procedures that allow for a contested case hearing to give Noem a chance to publicly defend herself against allegations of "misconduct" related to "conflicts of interest" and "malfeasance." The board also dismissed Ravnsborg's allegations that Noem misused state funds in the episode.

However, the retired judges left it unclear how they will proceed. Lori Wilbur, the board chair, said the complaint was "partially dismissed and partially closed," but added that the complaint could be reopened. She declined to discuss what would cause the board to reopen the complaint.

Wednesday, August 24, 2022

Dual use of artificial-intelligence-powered drug discovery

Urbina, F., Lentzos, F., Invernizzi, C. et al. 
Nat Mach Intell 4, 189–191 (2022). 
https://doi.org/10.1038/s42256-022-00465-9

The Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection (Spiez Laboratory) convenes the ‘convergence’ conference series set up by the Swiss government to identify developments in chemistry, biology and enabling technologies that may have implications for the Chemical and Biological Weapons Conventions. Meeting every two years, the conferences bring together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories, to think through potential security implications and to consider how these implications can most effectively be managed internationally. The meeting convenes for three days of discussion on the possibilities of harm, should the intent be there, from cutting-edge chemical and biological technologies. Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused.

Risk of misuse

The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.

(cut)

Broader effects on society

There is a need for discussions across traditional boundaries and multiple disciplines to allow for a fresh look at AI for de novo design and related technologies from different perspectives and with a wide variety of mindsets. Here, we give some recommendations that we believe will reduce potential dual-use concerns for AI in drug discovery. Scientific conferences, such as those of the Society of Toxicology and the American Chemical Society, should actively foster a dialogue among experts from industry, academia and policy making on the implications of our computational tools.

Tuesday, August 23, 2022

Tackling Implicit Bias in Health Care

J. A. Sabin
N Engl J Med 2022; 387:105-107
DOI: 10.1056/NEJMp2201180

Implicit and explicit biases are among many factors that contribute to disparities in health and health care. Explicit biases, the attitudes and assumptions that we acknowledge as part of our personal belief systems, can be assessed directly by means of self-report. Explicit, overtly racist, sexist, and homophobic attitudes often underpin discriminatory actions. Implicit biases, by contrast, are attitudes and beliefs about race, ethnicity, age, ability, gender, or other characteristics that operate outside our conscious awareness and can be measured only indirectly. Implicit biases surreptitiously influence judgment and can, without intent, contribute to discriminatory behavior. A person can hold explicit egalitarian beliefs while harboring implicit attitudes and stereotypes that contradict their conscious beliefs.

Moreover, our individual biases operate within larger social, cultural, and economic structures whose biased policies and practices perpetuate systemic racism, sexism, and other forms of discrimination. In medicine, bias-driven discriminatory practices and policies not only negatively affect patient care and the medical training environment, but also limit the diversity of the health care workforce, lead to inequitable distribution of research funding, and can hinder career advancement.

A review of studies involving physicians, nurses, and other medical professionals found that health care providers’ implicit racial bias is associated with diagnostic uncertainty and, for Black patients, negative ratings of their clinical interactions, less patient-centeredness, poor provider communication, undertreatment of pain, views of Black patients as less medically adherent than White patients, and other ill effects.1 These biases are learned from cultural exposure and internalized over time: in one study, 48.7% of U.S. medical students surveyed reported having been exposed to negative comments about Black patients by attending or resident physicians, and those students demonstrated significantly greater implicit racial bias in year 4 than they had in year 1.

A review of the literature on reducing implicit bias, which examined evidence on many approaches and strategies, revealed that methods such as exposure to counterstereotypical exemplars, recognizing and understanding others’ perspectives, and appeals to egalitarian values have not resulted in reduction of implicit biases.2 Indeed, no interventions for reducing implicit biases have been shown to have enduring effects. Therefore, it makes sense for health care organizations to forgo bias-reduction interventions and focus instead on eliminating discriminatory behavior and other harms caused by implicit bias.

Though pervasive, implicit bias is hidden and difficult to recognize, especially in oneself. It can be assumed that we all hold implicit biases, but both individual and organizational actions can combat the harms caused by these attitudes and beliefs. Awareness of bias is one step toward behavior change. There are various ways to increase our awareness of personal biases, including taking the Harvard Implicit Association Tests, paying close attention to our own mistaken assumptions, and critically reflecting on biased behavior that we engage in or experience. Gonzalez and colleagues offer 12 tips for teaching recognition and management of implicit bias; these include creating a safe environment, presenting the science of implicit bias and evidence of its influence on clinical care, using critical reflection exercises, and engaging learners in skill-building exercises and activities in which they must embrace their discomfort.

Monday, August 22, 2022

Meta-Analysis of Inequality Aversion Estimates

Nunnari, S., & Pozzi, M. (2022).
SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.4169385

Abstract

Loss aversion is one of the most widely used concepts in behavioral economics. We conduct a large-scale interdisciplinary meta-analysis, to systematically accumulate knowledge from numerous empirical estimates of the loss aversion coefficient reported during the past couple of decades. We examine 607 empirical estimates of loss aversion from 150 articles in economics, psychology, neuroscience, and several other disciplines. Our analysis indicates that the mean loss aversion coefficient is between 1.8 and 2.1. We also document how reported estimates vary depending on the observable characteristics of the study design.

Conclusion

In this paper, we reported the results of a meta-analysis of empirical estimates of the inequality aversion coefficients in models of outcome-based other-regarding preferences à la Fehr and Schmidt (1999). We conduct both a frequentist analysis (using a multi-level random-effects model) and a Bayesian analysis (using a Bayesian hierarchical model) to provide a “weighted average” for α and β. The results from the two approaches are nearly identical and support the hypothesis of inequality concerns. From the frequentist analysis, we learn that the mean envy coefficient is 0.425 with a 95% confidence interval of [0.244, 0.606]; the mean guilt coefficient is, instead, 0.291 with a 95% confidence interval of [0.218, 0.363]. This means that, on average, an individual is willing to spend €0.41 to increase others’ earnings by €1 when ahead, and €0.74 to decrease others’ earnings by €1 when behind. The theoretical assumptions α ≥ β and 0 ≤ β < 1 are upheld in our empirical analysis, but we cannot conclude that the disadvantageous inequality coefficient is statistically greater than the coefficient for advantageous inequality. We also observe no correlation between the two parameters.
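For concreteness, the quoted €0.41 and €0.74 figures can be recovered from the reported means. The sketch below assumes the Fehr and Schmidt (1999) utility specification; the willingness-to-pay formulas are a reconstruction of the arithmetic that reproduces the paper’s numbers, not code or equations taken from the paper.

```python
# Hedged reconstruction of the willingness-to-pay figures quoted above,
# assuming the Fehr-Schmidt (1999) utility
#   U_i = x_i - alpha * max(x_j - x_i, 0)   (envy, when behind)
#             - beta  * max(x_i - x_j, 0)   (guilt, when ahead).
# The formulas below are inferred from the reported coefficients and the
# quoted euro amounts; they are not taken from the paper.

alpha = 0.425  # mean envy coefficient (disadvantageous inequality)
beta = 0.291   # mean guilt coefficient (advantageous inequality)

# Maximum amount an individual would pay to change the other's earnings
# by 1 euro, at the point of indifference:
wtp_ahead = beta / (1 - beta)     # pay to INCREASE other's earnings
wtp_behind = alpha / (1 - alpha)  # pay to DECREASE other's earnings

print(f"WTP when ahead:  {wtp_ahead:.2f}")   # ~0.41
print(f"WTP when behind: {wtp_behind:.2f}")  # ~0.74
```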

Sunday, August 21, 2022

Medial and orbital frontal cortex in decision-making and flexible behavior

Klein-Flügge, M. C., Bongioanni, A., & 
Rushworth, M. F. (2022).
Neuron.
https://doi.org/10.1016/j.neuron.2022.05.022

Summary

The medial frontal cortex and adjacent orbitofrontal cortex have been the focus of investigations of decision-making, behavioral flexibility, and social behavior. We review studies conducted in humans, macaques, and rodents and argue that several regions with different functional roles can be identified in the dorsal anterior cingulate cortex, perigenual anterior cingulate cortex, anterior medial frontal cortex, ventromedial prefrontal cortex, and medial and lateral parts of the orbitofrontal cortex. There is increasing evidence that the manner in which these areas represent the value of the environment and specific choices is different from subcortical brain regions and more complex than previously thought. Although activity in some regions reflects distributions of reward and opportunities across the environment, in other cases, activity reflects the structural relationships between features of the environment that animals can use to infer what decision to take even if they have not encountered identical opportunities in the past.

From the Conclusion

Neural systems that represent the value of the environment exist in many vertebrates. An extended subcortical circuit spanning the striatum, midbrain, and brainstem nuclei of mammals corresponds to these ancient systems. In addition, however, mammals possess several frontal cortical regions concerned with guidance of decision-making and adaptive, flexible behavior. Although these frontal systems interact extensively with these subcortical circuits, they make specific contributions to behavior and also influence behavior via other cortical routes. Some areas such as the ACC, which is present in a broad range of mammals, represent the distribution of opportunities in an environment over space and time, whereas other brain regions such as amFC and dmPFC have roles in representing structural associations and causal links between environmental features, including aspects of the social environment (Figure 8). Although the origins of these areas and their functions are traceable to rodents, they are especially prominent in primates. They make it possible not just to select choices on the basis of past experience of identical situations, but to make inferences to guide decisions in new scenarios.