Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, August 31, 2023

It’s not only political conservatives who worry about moral purity

K. Gray, W. Blakey, & N. DiMaggio
psyche.co
Originally posted 13 July 23

Here are two excerpts:

What does this have to do with differences in moral psychology? Well, moral psychologists have suggested that politically charged arguments about sexuality, spirituality and other subjects reflect deep differences in the moral values of liberals and conservatives. Research involving scenarios like this one has seemed to indicate that conservatives, unlike liberals, think that maintaining ‘purity’ is a moral good in itself – which for them might mean supporting what they construe as the ‘sanctity of marriage’, for example.

It may seem strange to think about ‘purity’ as a core driver of political differences. But purity, in the moral sense, is an old concept. It pops up in the Hebrew Bible a lot, in taboos around food, menstruation, and divine encounters. When Moses meets God at the Burning Bush, God says to Moses: ‘Do not come any closer, take off your sandals, for the place where you are standing is holy ground.’ Why does God tell Moses to take off his shoes? Not because his shoes magically hurt God, but because shoes are dirty, and it’s disrespectful to wear your shoes in the presence of the creator of the universe. Similarly, in ancient Greece, worshippers were often required to endure long purification rituals before looking at sacred religious idols or engaging in different spiritual rites. These ancient moral practices seem to reflect an intuition that ‘cleanliness is next to Godliness’.

In the modern era, purity has repeatedly appeared at the centre of political battlegrounds, as in clashes between US conservatives and liberals over sexual education and mores in the 1990s. It was around this time that the psychologist Jonathan Haidt began formulating a theory to help explain the moral divide. Moral foundations theory argues that liberals and conservatives are divided because they rely on distinct moral values, including purity, to different degrees.

(cut)

A harm-focused perspective on moral judgments related to ‘purity’ could help us better understand and communicate with moral opponents. We all grasp the importance of protecting ourselves and our loved ones from harm. Learning that people on the ‘other side’ of a political divide care about questions of purity because they connect these to their understanding of harm can help us empathise with different moral opinions. It is easy for a liberal to dismiss a conservative’s condemnation of dead-chicken sex when it is merely said to be ‘impure’; it is harder to be dismissive if it’s suggested that someone who makes a habit of that behaviour might end up harming people.

Explicitly grounding discussions of morality in perceptions of harm could help us all to be better citizens of a ‘small-L liberal’ society – one in which the right to swing our fists ends where others’ noses begin. If something seems disgusting, impure and immoral to you, take some time to try to articulate the harms you intuitively perceive. Talking about these potential harms may help other people understand where you are coming from. Of course, someone might not share your judgment that harm is being done. But identifying perceived harms at least puts the conversation in terms that everyone understands.


Here is my summary:

The authors define purity as "the state of being free from contamination or pollution." They argue that people on both the left and the right care about purity because they associate it with safety and well-being. They provide examples of how liberals and conservatives can both use purity-related language, such as "desecrate" and "toxic," and they propose a new explanation of moral judgments: people care about purity when they perceive that 'impure' acts can lead to harm.

Wednesday, August 30, 2023

Not all skepticism is “healthy” skepticism: Theorizing accuracy- and identity-motivated skepticism toward social media misinformation

Li, J. (2023). 
New Media & Society, 0(0). 

Abstract

Fostering skepticism has been seen as key to addressing misinformation on social media. This article reveals that not all skepticism is “healthy” skepticism by theorizing, measuring, and testing the effects of two types of skepticism toward social media misinformation: accuracy- and identity-motivated skepticism. A two-wave panel survey experiment shows that when people’s skepticism toward social media misinformation is driven by accuracy motivations, they are less likely to believe in congruent misinformation later encountered. They also consume more mainstream media, which in turn reinforces accuracy-motivated skepticism. In contrast, when skepticism toward social media misinformation is driven by identity motivations, people not only fall for congruent misinformation later encountered, but also disregard platform interventions that flag a post as false. Moreover, they are more likely to see social media misinformation as favoring opponents and intentionally avoid news on social media, both of which form a vicious cycle of fueling more identity-motivated skepticism.

Discussion

I have made the case that it is important to distinguish between accuracy-motivated skepticism and identity-motivated skepticism. They are empirically distinguishable constructs that cast opposing effects on outcomes important for a well-functioning democracy. Across the board, accuracy-motivated skepticism produces normatively desirable outcomes. Holding a higher level of accuracy-motivated skepticism makes people less likely to believe in congruent misinformation they encounter later, offering hope that partisan motivated reasoning can be attenuated. Accuracy-motivated skepticism toward social media misinformation also has a mutually reinforcing relationship with consuming news from mainstream media, which can serve to verify information on social media and produce potential learning effects.

In contrast, not all skepticism is “healthy” skepticism. Holding a higher level of identity-motivated skepticism not only increases people’s susceptibility to congruent misinformation they encounter later, but also renders content flagging by social media platforms less effective. This is worrisome as calls for skepticism and platform content moderation have been a crucial part of recently proposed solutions to misinformation. Further, identity-motivated skepticism reinforces perceived bias of misinformation and intentional avoidance of news on social media. These can form a vicious cycle of close-mindedness and politicization of misinformation.

This article advances previous understanding of skepticism by showing that beyond the amount of questioning (the tipping point between skepticism and cynicism), the type of underlying motivation matters for whether skepticism helps people become more informed. By bringing motivated reasoning and media skepticism into the same theoretical space, this article helps us make sense of the contradictory evidence on the utility of media skepticism. Skepticism in general should not be assumed to be “healthy” for democracy. When driven by identity motivations, skepticism toward social media misinformation is counterproductive for political learning; only when skepticism toward social media is driven by the accuracy motivations does it inoculate people against favorable falsehoods and encourage consumption of credible alternatives.


Here are some additional thoughts on the research:
  • The distinction between accuracy-motivated skepticism and identity-motivated skepticism is a useful one. It helps to explain why some people are more likely to believe in misinformation than others.
  • The findings of the studies suggest that interventions that promote accuracy-motivated skepticism could be effective in reducing the spread of misinformation on social media.
  • It is important to note that the research was conducted in the United States. It is possible that the findings would be different in other countries.

Tuesday, August 29, 2023

Yale University settles lawsuit alleging it pressured students with mental health issues to withdraw

Associated Press
Originally posted 25 Aug 23

Yale University and a student group announced Friday that they've reached a settlement in a federal lawsuit that accused the Ivy League school of discriminating against students with mental health disabilities, including pressuring them to withdraw.

Under the agreement, Yale will modify its policies regarding medical leaves of absence, including streamlining the reinstatement process for students who return to campus. The student group, which also represents alumni, had argued the process was onerous, discouraging students for decades from taking medical leave when they needed it most.

The settlement is a “watershed moment” for the university and mental health patients, said 2019 graduate Rishi Mirchandani, a co-founder of Elis for Rachael, the group that sued. It was formed to help students with mental health issues in honor of a Yale student who took her own life.

“This historic settlement affirms that students with mental health needs truly belong,” Mirchandani said.

A joint statement from Elis for Rachael and Yale, released on Friday, confirmed the agreement “to resolve a lawsuit filed last November in federal district court related to policies and practices impacting students with mental health disabilities.”

Under the agreement, Yale will allow students to study part-time if they have urgent medical needs. Elis for Rachael said it marks the first time the university has offered such an option. Students granted the accommodation at the beginning of a new term will receive a 50% reduction in tuition.

“Although Yale describes the circumstances for this accommodation as ‘rare,’ this change still represents a consequential departure from the traditional all-or-nothing attitude towards participation in academic life at Yale,” the group said in a statement.

The dean of Yale College, Pericles Lewis, said he was “pleased with today’s outcome.”


The potential good news: this settlement is a step toward ensuring that students with mental health disabilities have the same opportunities as other students. It is also a reminder that colleges and universities have a responsibility to create a supportive environment for all students, regardless of their mental health status.

Monday, August 28, 2023

'You can't bullshit a bullshitter' (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information

Littrell, S., Risko, E. F., & Fugelsang, J. A. (2021).
The British Journal of Social Psychology, 60(4),
1484–1505.

Abstract

Research into both receptivity to falling for bullshit and the propensity to produce it have recently emerged as active, independent areas of inquiry into the spread of misleading information. However, it remains unclear whether those who frequently produce bullshit are inoculated from its influence. For example, both bullshit receptivity and bullshitting frequency are negatively related to cognitive ability and aspects of analytic thinking style, suggesting that those who frequently engage in bullshitting may be more likely to fall for bullshit. However, separate research suggests that individuals who frequently engage in deception are better at detecting it, thus leading to the possibility that frequent bullshitters may be less likely to fall for bullshit. Here, we present three studies (N = 826) attempting to distinguish between these competing hypotheses, finding that frequency of persuasive bullshitting (i.e., bullshitting intended to impress or persuade others) positively predicts susceptibility to various types of misleading information and that this association is robust to individual differences in cognitive ability and analytic cognitive style.

Conclusion

Gaining a better understanding of the differing ways in which various types of misleading information are transmitted and received is becoming increasingly important in the information age (Kristiansen & Kaussler, 2018). Indeed, an oft-repeated maxim in popular culture is, “you can’t bullshit a bullshitter.” While folk wisdom may assert that this is true, the present investigation suggests that the reality is a bit more complicated. Our primary aim was to examine the extent to which bullshitting frequency is associated with susceptibility to falling for bullshit. Overall, we found that persuasive bullshitters (but not evasive bullshitters) were more receptive to various types of bullshit and, in the case of pseudo-profound statements, even when controlling for factors related to intelligence and analytic thinking. These results enrich our understanding of the transmission and detection of certain types of misleading information, specifically the associations between the propensity to produce and the tendency to fall for bullshit, and will help to inform future research in this growing area of scholarship.



Sunday, August 27, 2023

Ontario court rules against Jordan Peterson, upholds social media training order

Canadian Broadcasting Corporation
Originally posted 23 August 23

An Ontario court ruled against psychologist and media personality Jordan Peterson Wednesday, and upheld a regulatory body's order that he take social media training in the wake of complaints about his controversial online posts and statements.

Last November, Peterson, a professor emeritus with the University of Toronto psychology department who is also an author and media commentator, was ordered by the College of Psychologists of Ontario to undergo a coaching program on professionalism in public statements.

That followed numerous complaints to the governing body of Ontario psychologists, of which Peterson is a member, regarding his online commentary directed at politicians, a plus-sized model, and transgender actor Elliot Page, among other issues.

The college's complaints committee concluded his controversial public statements could amount to professional misconduct and ordered Peterson to pay for a media coaching program — noting failure to comply could mean the loss of his licence to practice psychology in the province.

Peterson filed for a judicial review, arguing his political commentary is not under the college's purview.

Three Ontario Divisional Court judges unanimously dismissed Peterson's application, ruling that the college's decision falls within its mandate to regulate the profession in the public interest and does not affect his freedom of expression.

"The order is not disciplinary and does not prevent Dr. Peterson from expressing himself on controversial topics; it has a minimal impact on his right to freedom of expression," the decision written by Justice Paul Schabas reads, in part.



My take:

Peterson has argued that the order violates his right to free speech. He has also said that the complaints against him were politically motivated. However, the court ruled that the college's order fell within its mandate to regulate the profession in the public interest and to protect the public from harm.

The case of Jordan Peterson is a reminder that psychologists, like other human beings, are not infallible. They are capable of making mistakes and of expressing harmful views. It is important to hold psychologists accountable for their actions, and to ensure that they are held to the highest ethical standards.

Beyond the coaching program ordered in this case, there are a number of other things that can be done to mitigate bias in psychology. These include:
  • Increasing diversity in the field of psychology
  • Promoting critical thinking and self-reflection among psychologists
  • Developing more specific ethical guidelines for psychologists' use of social media
  • Holding psychologists accountable for their online behavior

Saturday, August 26, 2023

Can Confirmation Bias Improve Group Learning?

Gabriel, N. and O'Connor, C. (2022)
[Preprint]

Abstract

Confirmation bias has been widely studied for its role in failures of reasoning. Individuals exhibiting confirmation bias fail to engage with information that contradicts their current beliefs, and, as a result, can fail to abandon inaccurate beliefs. But although most investigations of confirmation bias focus on individual learning, human knowledge is typically developed within a social structure. How does the presence of confirmation bias influence learning and the development of consensus within a group? In this paper, we use network models to study this question. We find, perhaps surprisingly, that moderate confirmation bias often improves group learning. This is because confirmation bias leads the group to entertain a wider variety of theories for a longer time, and prevents them from prematurely settling on a suboptimal theory. There is a downside, however, which is that a stronger form of confirmation bias can cause persistent polarization, and hurt the knowledge producing capacity of the community. We discuss implications of these results for epistemic communities, including scientific ones.

Conclusion

We find that confirmation bias, in a more moderate form, improves the epistemic performance of agents in a networked community. This is perhaps surprising given that previous work mostly emphasizes the epistemic harms of confirmation bias. By decreasing the chances that a group pre-emptively settles on a promising theory or option, confirmation bias can improve the likelihood that the group chooses optimal options in the long run. In this, it can play a similar role to decreased network connectivity or stubbornness (Zollman, 2007, 2010; Wu, 2021). The downside is that more robust confirmation bias, where agents entirely ignore data that is too disconsonant with their current beliefs, can lead to polarization, and harm the epistemic success of a community. Our modeling results thus provide potential support for the arguments of Mercier & Sperber (2017) regarding the benefits of confirmation bias to a group, but also a caution. Too much confirmation bias does not provide such benefits.

There are several ongoing discussions in philosophy and the social sciences where these results are relevant. Mayo-Wilson et al. (2011) use network models to argue for the independence thesis—that rationality of individual agents and rationality of the groups they form sometimes come apart. I.e., individually rational agents may form groups which are not ideally rational, and rational groups may sometimes consist in individually irrational agents. Our results lend support to this claim. While there is a great deal of evidence suggesting that confirmation bias is not ideal for individual reasoners, our results suggest that it can nonetheless improve group reasoning under the right conditions.


The authors conclude that confirmation bias can have both positive and negative effects on group learning. The key is to find a moderate level of confirmation bias that allows the group to explore a variety of theories without becoming too polarized.

Here are some of the key findings of the paper:
  • Moderate confirmation bias can improve group learning by preventing the group from prematurely settling on a suboptimal theory.
  • Too much confirmation bias can lead to polarization and a decrease in the group's ability to learn.
  • The key to effective group learning is to find a moderate level of confirmation bias.
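To make the mechanism concrete, below is a minimal sketch of a Zollman-style network learning model with confirmation bias, in the spirit of the paper. The population size, payoffs, complete-network assumption, and the probabilistic discounting rule are my illustrative choices, not the authors' exact specification.

```python
import random

# Agents hold Beta(alpha, beta) credences over the success rate of a new
# theory; those who favor it test it and share results. Confirmation bias is
# modeled as a probability of discarding reports that cut against one's
# current view. All parameter values are illustrative assumptions.

N_AGENTS, N_ROUNDS, N_PULLS = 10, 300, 10
P_OLD, P_NEW = 0.5, 0.55   # objective success rates; the new theory is better
BIAS = 0.6                 # probability of ignoring a disconfirming report

def run_community():
    alpha = [random.uniform(1, 4) for _ in range(N_AGENTS)]
    beta = [random.uniform(1, 4) for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        # Agents who currently favor the new theory run experiments on it.
        reports = [sum(random.random() < P_NEW for _ in range(N_PULLS))
                   for i in range(N_AGENTS)
                   if alpha[i] / (alpha[i] + beta[i]) > P_OLD]
        for i in range(N_AGENTS):
            favors_new_now = alpha[i] / (alpha[i] + beta[i]) > P_OLD
            for s in reports:  # complete network: everyone sees every report
                report_favors_new = s / N_PULLS > P_OLD
                if report_favors_new != favors_new_now and random.random() < BIAS:
                    continue   # confirmation bias: drop disconfirming evidence
                alpha[i] += s
                beta[i] += N_PULLS - s
    # How many agents end up endorsing the objectively better theory?
    return sum(a / (a + b) > P_OLD for a, b in zip(alpha, beta))

trials = [run_community() for _ in range(100)]
print("mean agents endorsing the better theory:", sum(trials) / len(trials))
```

Sweeping BIAS from 0 toward 1 reproduces the paper's qualitative story: moderate values keep rival theories alive longer, while values near 1 can lock subgroups into permanent disagreement.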

Friday, August 25, 2023

The cognitive foundations of ideological orthodoxy: Threat avoidance, ingroup mobilization, & signaling

Marie, A., & Petersen, M. (2023, March 10).

Abstract

Political and religious movements often bind around shared mobilizing narratives. In their most devoted activists, this triggers moral motivations to affirm and protect the narrative from being argumentatively challenged (i.e., orthodox mindsets), with free expression and nuance as the primary casualties. The ideological narratives are often threat-based, denouncing an evil or villains encroaching on a sacred value, such as national grandeur, the faith, or class, racial, or gender equality. Their protection triggers repressive reactions ranging from expressions of outrage or public shaming on social media to the “deplatforming” of controversial speakers to censorship and imprisonment of dissidents. Orthodox mindsets are puzzling because of the often disproportionate righteousness with which they try to protect cherished narratives. We suspect that orthodox mindsets may derive from three main evolved cognitive foundations. First, over-sensitive dispositions to detect threat, from human outgroups in particular. Second, motivations to mobilize ingroup members for cooperative benefits and against rival groups by emphasizing goals relevant to everyone. Third, signaling personal devotion to causes one’s allies value to accrue prestige within the ingroup. In line with arguments about self-deception, strategies of ingroup mobilization and signaling may be most likely to meet their evolved functions when displayed by activists sincerely committed to the ideological movement’s tenets.

Highlights:
  • Devout activists of political and religious movements often display moral motivations to repress critiques of mobilizing narratives
  • The protected narratives are often rigid and hard-to-falsify accounts of a social threat
  • We propose that such orthodox mindsets tap at least three evolved cognitive systems: threat (over)detection and attempts to mobilize the ingroup and acquire status (signaling)
  • Cognitive systems of mobilization and signaling pursue social goals that may be more likely to be reached when activists endorse the threat-based narrative sincerely and view its truth as identity-defining.

Conclusion

Moral motivations to protect rigid and hard-to-falsify threat-based narratives from contestation are a characteristic feature of many political and religious movements. Such orthodox mindsets may be rooted in cognitive instincts whose function is to maintain the mobilization of followers for moralized causes—fighting outgroup threats in particular—and to signal devotion to those causes to gain status. Future research should explore further the content properties that make ideological narratives compelling, the balance between their hypothetical functions of mobilizing and signaling, and the factors likely to moderate orthodox urges.

Thursday, August 24, 2023

The Limits of Informed Consent for an Overwhelmed Patient: Clinicians’ Role in Protecting Patients and Preventing Overwhelm

J. Bester, C.M. Cole, & E. Kodish.
AMA J Ethics. 2016;18(9):869-886.
doi: 10.1001/journalofethics.2016.18.9.peer2-1609.

Abstract

In this paper, we examine the limits of informed consent with particular focus on ways in which various factors can overwhelm decision-making capacity. We introduce overwhelm as a phenomenon commonly experienced by patients in clinical settings and distinguish between emotional overwhelm and informational overload. We argue that in these situations, a clinician’s primary duty is prevention of harm and suggest ways in which clinicians can discharge this obligation. To illustrate our argument, we consider the clinical application of genetic sequencing testing, which involves scientific and technical information that can compromise the understanding and decisional capacity of most patients. Finally, we consider and rebut objections that this could lead to paternalism.

(cut)

Overwhelm and Information Overload

The claim we defend is a simple one: there are medical situations in which the information involved in making a decision is of such a nature that the decision-making capacity of a patient is overwhelmed by the sheer complexity or volume of information at hand. In such cases a patient cannot attain the understanding necessary for informed decision making, and informed consent is therefore not possible. We will support our thesis regarding informational overload by focusing specifically on the area of clinical whole genome sequencing—i.e., identification of an individual’s entire genome, enabling the identification and interaction of multiple genetic variants—as distinct from genetic testing, which tests for specific genetic variants.

We will first present ethical considerations regarding informed consent. Next, we will present three sets of factors that can burden the capacity of a patient to provide informed consent for a specific decision—patient, communication, and information factors—and argue that these factors may in some circumstances make it impossible for a patient to provide informed consent. We will then discuss emotional overwhelm and informational overload and consider how being overwhelmed affects informed consent. Our interest in this essay is mainly in informational overload; we will therefore consider whole genome sequencing as an example in which informational factors overwhelm a patient’s decision-making capacity. Finally, we will offer suggestions as to how the duty to protect patients from harm can be discharged when informed consent is not possible because of emotional overwhelm or informational overload.

(cut)

How should clinicians respond to such situations?

Surrogate decision making. One possible solution to the problem of informed consent when decisional capacity is compromised is to seek a surrogate decision maker. However, in situations of informational overload, this may not solve the problem. If the information has inherent qualities that would overwhelm a reasonable patient, it is likely to also overwhelm a surrogate. Unless the surrogate decision maker is a content expert who also understands the values of the patient, a surrogate decision maker will not solve the problem of informed consent. Surrogate decision making may, however, be useful for the emotionally overwhelmed patient who remains unable to provide informed consent despite additional support.

Shared decision making. Another possible solution is to make use of shared decision making (SDM). This approach relies on deliberation between clinician and patient regarding available health care choices, taking the best evidence into account. The clinician actively involves the patient and elicits patient values. The goal of SDM is often stated as helping patients arrive at informed decisions that respect what matters most to them.

It is not clear, however, that SDM will be successful in facilitating informed decisions when an informed consent process has failed. SDM as a tool for informed decision making is at its core dependent on the patient understanding the options presented and being able to describe the preferred option. Understanding and deliberating about what is at stake for each option is a key component of this use of SDM. Therefore, if the medical information is so complex that it overloads the patient’s decision-making capacity, SDM is unlikely to achieve informed decision making. But if a patient is emotionally overwhelmed by the illness experience and all that accompanies it, a process of SDM and support for the patient may eventually facilitate informed decision making.

Wednesday, August 23, 2023

Excess Death Rates for Republican and Democratic Registered Voters in Florida and Ohio During the COVID-19 Pandemic

Wallace J, Goldsmith-Pinkham P, Schwartz JL. 
JAMA Intern Med. 
Published online July 24, 2023.
doi:10.1001/jamainternmed.2023.1154

Key Points

Question

Was political party affiliation a risk factor associated with excess mortality during the COVID-19 pandemic in Florida and Ohio?

Findings

In this cohort study evaluating 538 159 deaths in individuals aged 25 years and older in Florida and Ohio between March 2020 and December 2021, excess mortality was significantly higher for Republican voters than Democratic voters after COVID-19 vaccines were available to all adults, but not before. These differences were concentrated in counties with lower vaccination rates, and primarily noted in voters residing in Ohio.

Meaning

The differences in excess mortality by political party affiliation after COVID-19 vaccines were available to all adults suggest that differences in vaccination attitudes and reported uptake between Republican and Democratic voters may have been a factor in the severity and trajectory of the pandemic in the US.


My Take

Beliefs are a powerful force that can influence our health behaviors. Our beliefs about health, illness, and the causes of disease can shape our decisions about what we eat, how much we exercise, and whether or not we see a doctor when we're sick.

There is a growing body of research suggesting that beliefs can have a significant impact on health outcomes. For example, one study found that people who believe they have a strong sense of purpose in life tend to live longer than those who do not. Another study found that people who believe in a higher power tend to be more optimistic and have a more positive outlook on life, which can improve mental health and, in turn, physical health. However, certain beliefs may be harmful to health and longevity.

The study suggests that beliefs may play a role in the relationship between political party affiliation and excess death rates. For example, Republicans are more likely to hold beliefs associated with vaccine hesitancy, such as distrust of government and the medical establishment. These beliefs may have contributed to the lower vaccination rates among Republican-registered voters, which in turn may have led to higher excess death rates.

Tuesday, August 22, 2023

The (moral) language of hate

Brendan Kennedy et al.
PNAS Nexus, Volume 2,
Issue 7, July 2023, pgad210

Abstract

Humans use language toward hateful ends, inciting violence and genocide, intimidating and denigrating others based on their identity. Despite efforts to better address the language of hate in the public sphere, the psychological processes involved in hateful language remain unclear. In this work, we hypothesize that morality and hate are concomitant in language. In a series of studies, we find evidence in support of this hypothesis using language from a diverse array of contexts, including the use of hateful language in propaganda to inspire genocide (Study 1), hateful slurs as they occur in large text corpora across a multitude of languages (Study 2), and hate speech on social-media platforms (Study 3). In post hoc analyses focusing on particular moral concerns, we found that the type of moral content invoked through hate speech varied by context, with Purity language prominent in hateful propaganda and online hate speech and Loyalty language invoked in hateful slurs across languages. Our findings provide a new psychological lens for understanding hateful language and points to further research into the intersection of morality and hate, with practical implications for mitigating hateful rhetoric online.

Significance Statement

Only recently have researchers begun to propose that violence and prejudice may have roots in moral intuitions. Can it be the case, we ask, that the act of verbalizing hatred involves a moral component, and that hateful and moral language are inseparable constructs? Across three studies focusing on the language of morality and hate, including historical text analysis of Nazi propaganda, implicit associations across 25 languages, and extremist right-wing communications on social media, we demonstrate that moral language, and specifically, Purity-related language (i.e. language about physical purity, avoidance of disgusting things, and resisting our carnal desires in favor of a higher, divine nature) and Loyalty related language are concomitant with hateful and exclusionary language.
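The measurement backbone of analyses like these is typically dictionary-based text scoring. Here is a toy sketch of that idea; the word lists are tiny invented stand-ins for a real lexicon such as the Moral Foundations Dictionary, and the scoring rule (hit rate per token) is the simplest possible choice, not the authors' pipeline.

```python
# Toy dictionary-based moral-language scorer. PURITY and LOYALTY below are
# illustrative stand-in word lists, not a published lexicon.
PURITY = {"pure", "filth", "disgust", "contaminate", "sacred", "dirty"}
LOYALTY = {"loyal", "betray", "traitor", "ally", "solidarity", "unity"}

def moral_scores(text: str) -> dict:
    # normalize: split on whitespace, strip punctuation, lowercase
    tokens = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    n = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        "purity": sum(t in PURITY for t in tokens) / n,
        "loyalty": sum(t in LOYALTY for t in tokens) / n,
    }

print(moral_scores("They spread filth and contaminate our sacred land."))
# -> {'purity': 0.375, 'loyalty': 0.0}
```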

-----------------

Here are some of the key findings of the study:
  • Hateful language is often associated with moral foundations such as purity, loyalty, and authority.
  • The type of moral content invoked through hate speech varies by context.
  • Purity language is prominent in hateful propaganda and online hate speech.
  • Loyalty language is invoked in hateful slurs across languages.
  • Authority language is invoked in hateful rhetoric that targets political figures or institutions.
The study's findings have important implications for understanding and mitigating hate speech.  By understanding the moral foundations that underlie hateful language, we can develop more effective strategies for countering it. For example, we can challenge the moral claims made by hate speech and offer alternative moral frameworks that promote tolerance and understanding.

Monday, August 21, 2023

Cigna Accused of Using AI, Not Doctors, to Deny Claims: Lawsuit

Steph Weber
Medscape.com
Originally posted 4 August 23

A new lawsuit alleges that Cigna uses artificial intelligence (AI) algorithms to inappropriately deny "hundreds or thousands" of claims at a time, bypassing legal requirements to complete individual claim reviews and forcing providers to bill patients in full.

In a complaint filed last week in California's eastern district court, plaintiffs and Cigna health plan members Suzanne Kisting-Leung and Ayesha Smiley and their attorneys say that Cigna violates state insurance regulations by failing to conduct a "thorough, fair, and objective" review of their and other members' claims.

The lawsuit says that instead, Cigna relies on an algorithm, PxDx, to review and frequently deny medically necessary claims. According to court records, the system allows Cigna's doctors to "instantly reject claims on medical grounds without ever opening patient files." With use of the system, the average claims processing time is 1.2 seconds.

Cigna says it uses technology to verify coding on standard, low-cost procedures and to expedite physician reimbursement. In a statement to CBS News, the company called the lawsuit "highly questionable."

The case highlights growing concerns about AI and its ability to replace humans for tasks and interactions in healthcare, business, and beyond. Public advocacy law firm Clarkson, which is representing the plaintiffs, has previously sued tech giants Google and ChatGPT creator OpenAI for harvesting internet users' personal and professional data to train their AI systems.

According to the complaint, Cigna denied the plaintiffs medically necessary tests, including bloodwork to screen for vitamin D deficiency and ultrasounds for patients suspected of having ovarian cancer. The plaintiffs' attempts to appeal were unfruitful, and they were forced to pay out of pocket.

(cut)

Last year, the American Medical Association and two state physician groups joined another class action against Cigna stemming from allegations that the insurer's intermediary, Multiplan, intentionally underpaid medical claims. And in March, Cigna's pharmacy benefit manager (PBM), Express Scripts, was accused of conspiring with other PBMs to drive up prescription drug prices for Ohio consumers, violating state antitrust laws.

Cohen says he expects Cigna to push back in court about the California class size, which the plaintiff's attorneys hope will encompass all Cigna health plan members in the state.

Sunday, August 20, 2023

When Scholars Sue Their Accusers. Francesca Gino is the Latest. Such Litigation Rarely Succeeds.

Adam Marcus and Ivan Oransky
The Chronicle of Higher Education
Originally posted 18 AUG 23

Francesca Gino has made headlines twice since June: once when serious allegations of misconduct involving her work became public, and again when she filed a $25-million lawsuit against her accusers, including Harvard University, where she is a professor at the business school.

The suit itself met with a barrage of criticism from those who worried that, as one scientist put it, it would have a “chilling effect on fraud detection.” A smaller number of people supported the move, saying that Harvard and her accusers had abandoned due process and that they believed in Gino’s integrity.

How the case will play out, of course, remains to be seen. But Gino is hardly the first researcher to sue her critics and her employer when faced with misconduct findings. As the founders of Retraction Watch, a website devoted to covering problems in the scientific literature, we’ve reported many of these kinds of cases since we launched our blog in 2010. Plaintiffs tend to claim defamation, but sometimes sue over wrongful termination or employment discrimination, and these kinds of cases typically end up in federal courts. A look at how some other suits fared might yield recommendations for how to limit the pain they can cause.

The first thing to know about defamation and employment suits is that most plaintiffs, but not all, lose. Mario Saad, a diabetes researcher at Brazil’s Unicamp, found that out when he sued the American Diabetes Association in the very same federal district court in Massachusetts where Gino filed her case.

Saad was trying to prevent Diabetes, the flagship research journal of the American Diabetes Association, from publishing expressions of concern about four of his papers following allegations of image manipulation. He lost that effort in 2015, and has now had 18 papers retracted.

(cut)

Such cases can be extremely expensive — not only for the defense, whether the costs are borne by institutions or insurance companies, but also for the plaintiffs. Ask Carlo Croce and Mark Jacobson.

Croce, a cancer researcher at Ohio State University, has at various points sued The New York Times, a Purdue University biologist named David Sanders, and Ohio State. He has lost all of those cases, including on appeal. The suits against the Times and Sanders claimed that a front-page story in 2017 that quoted Sanders had defamed Croce. His suit against Ohio State alleged that he had been improperly removed as department chair.

Croce racked up some $2 million in legal bills — and was sued for nonpayment. A judge has now ordered Croce’s collection of old masters paintings to be seized and sold for the benefit of his lawyers, and has also garnished Croce’s bank accounts. Another judgment means that his lawyers may now foreclose on his house to recoup their costs. Ohio State has been garnishing his wages since March by about $15,600 each month, or about a quarter of his paycheck. He continues to earn more than $800,000 per year from the university, even after a professorship and the chair were taken away from him.

When two researchers published a critique of the work of Mark Jacobson, an energy researcher at Stanford University, in the Proceedings of the National Academy of Sciences, Jacobson sued them along with the journal’s publisher for $10 million. He dropped the case just months after filing it.

But thanks to a so-called anti-SLAPP statute, “designed to provide for early dismissal of meritless lawsuits filed against people for the exercise of First Amendment rights,” a judge has ordered Jacobson to pay $500,000 in legal fees to the defendants. Jacobson wants Stanford to pay those costs, and California’s labor commissioner said the university had to pay at least some of them because protecting his reputation was part of Jacobson’s job. The fate of those fees, and who will pay them, is up in the air, with Jacobson once again appealing the judgment against him.

Saturday, August 19, 2023

Reverse-engineering the self

Paul, L., Ullman, T. D., De Freitas, J., & Tenenbaum, J.
(2023, July 8). PsyArXiv
https://doi.org/10.31234/osf.io/vzwrn

Abstract

To think for yourself, you need to be able to solve new and unexpected problems. This requires you to identify the space of possible environments you could be in, locate yourself in the relevant one, and frame the new problem as it exists relative to your location in this new environment. Combining thought experiments with a series of self-orientation games, we explore the way that intelligent human agents perform this computational feat by “centering” themselves: orienting themselves perceptually and cognitively in an environment, while simultaneously holding a representation of themselves as an agent in that environment. When faced with an unexpected problem, human agents can shift their perceptual and cognitive center from one location in a space to another, or “re-center”, in order to reframe a problem, giving them a distinctive type of cognitive flexibility. We define the computational ability to center (and re-center) as “having a self,” and propose that implementing this type of computational ability in machines could be an important step towards building a truly intelligent artificial agent that could “think for itself”. We then develop a conceptually robust, empirically viable, engineering-friendly implementation of our proposal, drawing on well established frameworks in cognition, philosophy, and computer science for thinking, planning, and agency.


The authors argue that the computational structure of the self is a key component of human intelligence, and they propose a framework for reverse-engineering it, drawing on work in cognition, philosophy, and computer science.

The authors argue that the self is a computational agent that is able to learn and think for itself. This agent has a number of key abilities, including:
  • The ability to represent the world and its own actions.
  • The ability to plan and make decisions.
  • The ability to learn from experience.
  • The ability to have a sense of self.
The authors argue that these abilities can be modeled as a POMDP (partially observable Markov decision process), a type of mathematical model used to represent sequential decision-making in which the decision-maker does not have complete information about the environment (see the sketch following this summary). They propose a number of methods for reverse-engineering the self, including:
  • Using data from brain imaging studies to identify the neural correlates of self-related processes.
  • Using computational models of human decision-making to test hypotheses about the computational structure of the self.
  • Using philosophical analysis to clarify the nature of self-related concepts.
The authors argue that reverse-engineering the self is a promising approach to understanding human intelligence and developing artificial intelligence systems that are capable of thinking for themselves.
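Since the summary leans on the POMDP framing, here is a minimal sketch of the computation at its core: maintaining and updating a belief over hidden states, which is one way to read the paper's notion of "centering" oneself in an environment. The two-room world and all probabilities are invented for illustration, not taken from the paper.

```python
# Minimal POMDP belief update (Bayesian filtering) in a toy two-room world.
# States: 0 = room A, 1 = room B. All numbers are illustrative assumptions.

TRANSITION = {            # TRANSITION[action][s][s2] = P(s2 | s, action)
    "stay": [[0.9, 0.1], [0.1, 0.9]],
    "move": [[0.2, 0.8], [0.8, 0.2]],
}
P_LIGHT = [0.7, 0.2]      # P(observe "light" | state)

def update_belief(belief, action, obs):
    """One filtering step: predict through the transition model,
    then reweight by the likelihood of the observation."""
    predicted = [sum(belief[s] * TRANSITION[action][s][s2] for s in range(2))
                 for s2 in range(2)]
    likelihood = [P_LIGHT[s2] if obs == "light" else 1 - P_LIGHT[s2]
                  for s2 in range(2)]
    unnorm = [p * l for p, l in zip(predicted, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

belief = [0.5, 0.5]                     # the agent starts maximally uncertain
belief = update_belief(belief, "move", "light")
print(belief)    # belief shifts toward room A, where "light" is more likely
```

Re-centering, on this reading, corresponds to re-initializing or re-framing the state space itself when the agent realizes it is in a different kind of environment than it assumed.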

Friday, August 18, 2023

Evidence for Anchoring Bias During Physician Decision-Making

Ly, D. P., Shekelle, P. G., & Song, Z. (2023).
JAMA Internal Medicine, 183(8), 818.
https://doi.org/10.1001/jamainternmed.2023.2366

Abstract

Introduction

Cognitive biases are hypothesized to influence physician decision-making, but large-scale evidence consistent with their influence is limited. One such bias is anchoring bias, or the focus on a single—often initial—piece of information when making clinical decisions without sufficiently adjusting to later information.

Objective

To examine whether physicians were less likely to test patients with congestive heart failure (CHF) presenting to the emergency department (ED) with shortness of breath (SOB) for pulmonary embolism (PE) when the patient visit reason section, documented in triage before physicians see the patient, mentioned CHF.

Design, Setting, and Participants

In this cross-sectional study of 2011 to 2018 national Veterans Affairs data, patients with CHF presenting with SOB in Veterans Affairs EDs were included in the analysis. Analyses were performed from July 2019 to January 2023.

Conclusions and Relevance

In this cross-sectional study among patients with CHF presenting with SOB, physicians were less likely to test for PE when the patient visit reason that was documented before they saw the patient mentioned CHF. Physicians may anchor on such initial information in decision-making, which in this case was associated with delayed workup and diagnosis of PE.

Here is the conclusion of the paper:

In conclusion, among patients with CHF presenting to the ED with SOB, we find that ED physicians were less likely to test for PE when the initial reason for visit, documented before the physician's evaluation, specifically mentioned CHF. These results are consistent with physicians anchoring on initial information. Presenting physicians with the patient’s general signs and symptoms, rather than specific diagnoses, may mitigate this anchoring. Other interventions include refining knowledge of findings that distinguish between alternative diagnoses for a particular clinical presentation.

Quick snapshot:

Anchoring bias is a cognitive bias that causes us to rely too heavily on the first piece of information we receive when making a decision. This can lead us to make inaccurate or suboptimal decisions, especially when the initial information is not accurate or relevant.

The findings of this study suggest that anchoring bias may be a significant factor in physician decision-making. This could lead to delayed or missed diagnoses, which could have serious consequences for patients.

Thursday, August 17, 2023

Delusion-like beliefs and data quality: Are classic cognitive biases artifacts of carelessness?

Sulik, J., Ross, R. M., Balzan, R., & McKay, R. (2023). 
Journal of Psychopathology and Clinical Science.

Abstract

There is widespread agreement that delusions in clinical populations and delusion-like beliefs in the general population are, in part, caused by cognitive biases. Much of the evidence comes from two influential tasks: the Beads Task and the Bias Against Disconfirmatory Evidence Task. However, research using these tasks has been hampered by conceptual and empirical inconsistencies. In an online study, we examined relationships between delusion-like beliefs in the general population and cognitive biases associated with these tasks. Our study had four key strengths: A new animated Beads Task designed to reduce task miscomprehension, several data-quality checks to identify careless responders, a large sample (n = 1,002), and a preregistered analysis plan. When analyzing the full sample, our results replicated classic relationships between cognitive biases and delusion-like beliefs. However, when we removed 82 careless participants from the analyses (8.2% of the sample) we found that many of these relationships were severely diminished and, in some cases, eliminated outright. These results suggest that some (but not all) seemingly well-established relationships between cognitive biases and delusion-like beliefs might be artifacts of careless responding.

General Scientific Summary

Research suggests that cognitive biases play a key role in the development of delusion-like beliefs. For instance, participants who endorse such beliefs have been reported to “jump to conclusions” when performing abstract data-gathering tasks and to display a “bias against disconfirmatory evidence” when determining the best explanation for a scenario. However, the present study suggests that some (but not all) seemingly well-established relationships between cognitive biases and delusion-like beliefs might, in fact, be spurious—driven by careless responding in a subset of research participants.
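For readers unfamiliar with the Beads Task: participants watch beads drawn from one of two hidden jars with complementary color ratios and say when they are sure which jar is the source; deciding after very few draws is scored as "jumping to conclusions." Below is a sketch of an idealized Bayesian observer for the task; the 85:15 ratios and the 0.95 decision threshold are common illustrative values, not this study's exact parameters.

```python
# Idealized Bayesian observer for the Beads Task. Jar A is mostly black
# (85:15), jar B mostly orange (15:85), equal priors; the observer commits
# once its posterior clears a confidence threshold.

P_BLACK = {"A": 0.85, "B": 0.15}
THRESHOLD = 0.95

def draws_to_decision(sequence):
    p_a = 0.5                              # prior probability the jar is A
    for n, bead in enumerate(sequence, start=1):
        like_a = P_BLACK["A"] if bead == "black" else 1 - P_BLACK["A"]
        like_b = P_BLACK["B"] if bead == "black" else 1 - P_BLACK["B"]
        p_a = like_a * p_a / (like_a * p_a + like_b * (1 - p_a))
        if p_a > THRESHOLD or p_a < 1 - THRESHOLD:
            return n, round(p_a, 3)        # committed after n draws
    return len(sequence), round(p_a, 3)    # never reached threshold

print(draws_to_decision(["black", "black", "orange", "black"]))
# -> (2, 0.97): with 85:15 jars even an ideal observer can commit quickly
```

Note that with such lopsided jars, deciding after two draws is statistically defensible, which is one reason careful operationalization, and careful screening of careless responders, matters when interpreting small draw counts.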

And my summary:

The study emphasizes the importance of considering data quality in psychological research, particularly when studying biases associated with delusions. By examining whether apparent bias-belief relationships are driven by careless responding or reflect genuine cognitive processes, the research aims to enhance the validity and reliability of findings in this area. The results challenge the interpretation of some classic cognitive biases and underscore the need for careful data screening and analysis to ensure accurate and reliable research outcomes. The work may also contribute to improved diagnosis and treatment of delusional disorders by shedding light on the cognitive mechanisms underlying delusion-like beliefs.

Wednesday, August 16, 2023

A Federal Judge Asks: Does the Supreme Court Realize How Bad It Smells?

Michael Ponsor
The New York Times: Opinion
Originally posted 14 July 23

What has gone wrong with the Supreme Court’s sense of smell?

I joined the federal bench in 1984, some years before any of the justices currently on the Supreme Court. Throughout my career, I have been bound and guided by a written code of conduct, backed by a committee of colleagues I can call on for advice. In fact, I checked with a member of that committee before writing this essay.

A few times in my nearly 40 years on the bench, complaints have been filed against me. This is not uncommon for a federal judge. So far, none have been found to have merit, but all of these complaints have been processed with respect, and I have paid close attention to them.

The Supreme Court has avoided imposing a formal ethical apparatus on itself like the one that applies to all other federal judges. I understand the general concern, in part. A complaint mechanism could become a political tool to paralyze the court or a playground for gadflies. However, a skillfully drafted code could overcome this problem. Even a nonenforceable code that the justices formally pledged to respect would be an improvement on the current void.

Reasonable people may disagree on this. The more important, uncontroversial point is that if there will not be formal ethical constraints on our Supreme Court — or even if there will be — its justices must have functioning noses. They must keep themselves far from any conduct with a dubious aroma, even if it may not breach a formal rule.

The fact is, when you become a judge, stuff happens. Many years ago, as a fairly new federal magistrate judge, I was chatting about our kids with a local attorney I knew only slightly. As our conversation unfolded, he mentioned that he’d been planning to take his 10-year-old to a Red Sox game that weekend but their plan had fallen through. Would I like to use his tickets?

Tuesday, August 15, 2023

Twitter Exec Defends Restoring Account That Shared Child Sex Abuse Material

Matt Novak
Forbes Magazine
Originally published 9 AUG 23

Executives at X, the company formerly known as Twitter, testified in front of an Australian Parliament hearing late Wednesday, and defended the restoration of an X account after it shared child sexual abuse material in late July. The incident attracted widespread attention because X owner Elon Musk personally intervened to reinstate the account after a violation that would normally result in a permanent ban from the social media platform.

Nick Pickles, the head of global government affairs at X, was asked about the incident by an Australian senator late Wednesday ET, early Thursday Australian local time, after Pickles first suggested there was a zero tolerance policy for child sex abuse material before seeming to contradict himself. Pickles said the offending account in question may have been sharing the content “out of outrage.”

“One of the challenges we see is, for example, people sharing this content out of outrage because they want to raise awareness of an issue and see something in the media,” Pickles testified, according to an audio livestream.

“So if there are circumstances where someone shares content but, under review, we decide the appropriate remediation is to remove the content but not the user,” Pickles continued.

There’s nothing in the X terms of service that says it’s okay to share child sexual abuse material if a user is doing it because they’re outraged over the images or looking to “raise awareness.” It’s generally understood that sharing child sex abuse materials, regardless of intent, is not only a federal crime in the U.S. and Australia, but re-victimizes the child.


The article highlights how this decision contradicts ethical principles and moral standards, as sharing such harmful content not only violates the law but also goes against the norms of safeguarding vulnerable individuals, especially children, from harm. Twitter's move to restore the account in question raises concerns about their commitment to combatting online exploitation and maintaining a safe platform for users.

By reinstating an account associated with child sexual abuse material, Twitter appears to have disregarded the severity of the content and its implications. This decision not only undermines trust in the platform but also reflects poorly on the company's dedication to maintaining a responsible and accountable online environment. Critics argue that Twitter's actions in this case highlight a lack of proper content moderation and an insufficient understanding of the gravity of such unethical behavior.

The article sheds light on the potential consequences of platforms not taking immediate and decisive action against users who engage in illegal and immoral activities. This situation serves as a reminder of the broader challenges social media platforms face in balancing issues of free expression with the responsibility to prevent harm and protect users, particularly those who are most vulnerable.

The article points to the company's complete failure to uphold basic ethical and moral standards.

Monday, August 14, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., & Rüther, M. (2023). 
AI and Ethics.

Abstract

How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we go into the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and tries to stick to the humanistic idea, namely to the idea that we do not have to alter our human lifeform in an extensive way and also can keep up our orientation towards common ideals, such as working towards the good, the true and the beautiful.

Our proposal still has some shortcomings, which include the following two that we cannot deal with extensively but at least want to briefly comment on. First, we assumed that certain professional fields, especially in the meaning-conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and consequently also no genuine empathy. This assumption needs to be further elaborated, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to the automation of superefficient systems. Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life, according to which meaning-conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argumentation would have to figure out whether this trias is really exhaustive and, if so, due to which underlying more general principle.


Full transparency: I am a big John Danaher fan. Regardless, here is my summary:

Humans are meaning makers. We find meaning in our work, our relationships, and our engagement with the world. The article discusses the potential impact of AI on the meaning of work, and the authors make some good points. However, I think their solution is somewhat idealistic. It is true that social relationships and engagement with the world can provide us with meaning, but these sources of meaning will be difficult to sustain in a world where AI is doing most of the work. We will need ways to cooperate, achieve, and interact in the service of superordinate goals. Humans need to align their lives with core human principles, such as meaning-making, pattern repetition, cooperation, and values-based behaviors.
  • The authors focus on the potential impact of AI on the meaning of work, while acknowledging that other forces, such as automation more broadly and globalization, are reshaping work as well.
  • The authors' solution is grounded in the idea that meaning comes from relationships and engagement with the world. There are, however, other accounts of the meaning of life, such as the view that meaning comes from self-actualization or from religious faith.
  • The authors acknowledge that their solution is imperfect but argue that it is a better alternative than Danaher's. I think it is important to consider all of the options before deciding which is best. Ultimately, this is a values-based decision, as there seems to be no single correct solution.

Sunday, August 13, 2023

Beyond killing one to save five: Sensitivity to ratio and probability in moral judgment

Ryazanov, A. A., Wang, S. T., et al. (2023).
Journal of Experimental Social Psychology
Volume 108, September 2023, 104499

Abstract

A great deal of current research on moral judgments centers on moral dilemmas concerning tradeoffs between one and five lives. Whether one considers killing one innocent person to save five others to be morally required or impermissible has been taken to determine whether one is appealing to consequentialist or non-consequentialist reasoning. But this focus on tradeoffs between one and five may obscure more nuanced commitments involved in moral decision-making that are revealed when the numbers and ratio of lives to be traded off are varied, and when the probabilities of each outcome occurring are less than certain. Four studies examine participants' reactions to scenarios that diverge in these ways from the standard ones. Study 1 examines the extent to which people are sensitive to the ratio of lives saved to lives ended by a particular action. Study 2 verifies that the ratio rather than the difference between the two values is operative. Study 3 examines whether participants treat probabilistic harm to some as equivalent to certainly harming fewer, holding expected ratio constant. Study 4 explores an analogous issue regarding the sensitivity of probabilistic saving. Participants are remarkably sensitive to expected ratio for probabilistic harms while deviating from expected value for probabilistic saving. Collectively, the studies provide evidence that people's moral judgments are consistent with the principle of threshold deontology.
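To make the ratio-versus-difference and expected-ratio manipulations concrete, here is a hypothetical worked example (my own numbers, not the paper's actual stimuli):

\[
\text{certain harm: kill } 1 \text{ to save } 5 \;\Rightarrow\; \text{ratio } 5{:}1,\quad \text{difference } 4
\]
\[
\text{scaled up: kill } 100 \text{ to save } 500 \;\Rightarrow\; \text{ratio } 5{:}1,\quad \text{difference } 400
\]
\[
\text{probabilistic harm: a } 20\% \text{ chance of killing } 5 \;\Rightarrow\; \mathbb{E}[\text{killed}] = 0.2 \times 5 = 1,\quad \text{expected ratio } 5{:}1
\]

If judgments track the ratio, the first two cases should be rated alike despite the much larger difference in the second; and if probabilistic harm is treated by its expected value, the third should pattern with the first.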

General discussion

Collectively, our studies show that people are sensitive to expected ratio in moral dilemmas, and that they show this sensitivity across a range of probabilities. The particular kind of sensitivity to expected value participants display is consistent with the view that people's moral judgments are based on a single principle of threshold deontology. If one examines only participants' reactions to a single dilemma with a given ratio, one might naturally tend to sort participants' judgments into consequentialists (the ones who condone killing to save others) or non-consequentialists (the ones who do not). But this can be misleading, as is shown by the result that the number of participants who make judgments consistent with consequentialism at a ratio of 5:1 declines when the ratio is lowered (as if a larger number of people endorse deontological principles under this lower ratio). The fact that participants make some judgments that are consistent with consequentialism does not entail that these judgments are expressive of a generally consequentialist moral theory. When the larger set of judgments is taken into account, the only theory with which they are consistent is threshold deontology. On this theory, there is a general deontological constraint against killing, but this constraint is overridden when the consequences of inaction are bad enough. The variability across participants suggests that participants have different thresholds of the ratio at which the consequences count as “bad enough” for switching from supporting inaction to supporting action. This is consistent with the wide literature showing that participants' judgments can shift within the same ratio, depending on, for example, how the death of the one is caused.
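As a way of picturing the threshold-deontology account, here is a minimal simulation sketch (my illustration, not the authors' analysis; the population size and the spread of thresholds are assumptions). Each simulated participant endorses killing only when the save-to-kill ratio reaches their personal threshold, so the count of "consequentialist-looking" responses falls as the ratio falls:

```python
import random

def endorses_action(ratio, threshold):
    """Threshold deontology: the constraint against killing is
    overridden only when the ratio of lives saved to lives lost
    reaches this agent's personal threshold."""
    return ratio >= threshold

random.seed(0)
# Hypothetical population with person-to-person variation in thresholds.
thresholds = [random.uniform(1.0, 10.0) for _ in range(1000)]

for ratio in (5.0, 3.0, 2.0):
    n = sum(endorses_action(ratio, t) for t in thresholds)
    print(f"ratio {ratio:.0f}:1 -> {n} of 1000 endorse killing to save")
```

On this toy model, sorting people into "consequentialists" and "deontologists" from a single 5:1 dilemma mislabels what is really a distribution of thresholds.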


My summary:

This research provides new insights into how people make moral judgments. It suggests that people are not simply weighing the number of lives saved against the number lost; they are also tracking the ratio between the two and the probability that each outcome will occur. This has important implications for our understanding of moral decision-making and for the design of moral education programs.

Saturday, August 12, 2023

Teleological thinking is driven by aberrant associations

Corlett, P. R. (2023, June 17).
PsyArXiv preprints
https://doi.org/10.31234/osf.io/wgyqs

Abstract

Teleological thought — the tendency to ascribe purpose to objects and events — is useful in some cases (encouraging explanation-seeking), but harmful in others (fueling delusions and conspiracy theories). What drives maladaptive teleological thinking? A fundamental distinction in how we learn causal relationships between events is whether it can be best explained via associations versus via propositional thought. Here, we propose that directly contrasting the contributions of these two pathways can elucidate where teleological thinking goes wrong. We modified a causal learning task such that we could encourage one pathway over another in different instances. Across experiments (total N=600), teleological tendencies were correlated with delusion-like ideas and uniquely explained by aberrant associative learning, but not by learning via propositional rules. Computational modeling suggested that the relationship between associative learning and teleological thinking can be explained by spurious prediction errors that imbue random events with more significance — providing a new understanding for how humans make meaning of lived events.

From the Discussion section

Teleological thinking, in previous work, has been defined in terms of “beliefs”, “social-cognitive biases”, and indeed carries “reasoning” in its very name (as it is used interchangeably with teleological or ‘purpose-based’ reasoning)—which is why it might be surprising to learn of the relationship between teleological thinking and low-level associative learning, and not learning via propositional reasoning. The key result across experiments can be summarized as such: aberrant prediction errors augured weaker non-additive blocking, which predicted tendencies to engage in teleological thinking, which was consistently correlated with distress from delusional thinking. This pattern of results was demonstrated in both behavioral and computational modeling data, and withstood even more conservative regression models, accounting for the variance explained by other variables. In other words, the same people who learn more from irrelevant cues or overpredict relationships in the non-additive blocking task (by predicting that cues [that should have been “blocked”] might also cause allergic reactions) tend to also ascribe more purpose to random events—and to experience more distress from delusional beliefs (and thus hold their delusional beliefs in a more patient-like way).
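To give a feel for the blocking logic the task builds on, here is a standard (additive) Rescorla-Wagner sketch. It is my illustration only: the paper uses a non-additive blocking design and a more elaborate computational model, and the learning rate and trial counts here are assumptions.

```python
def rescorla_wagner(trials, cues, alpha=0.3):
    """Minimal Rescorla-Wagner learner: each cue's associative
    strength is updated in proportion to a shared prediction error."""
    V = {c: 0.0 for c in cues}
    for present, outcome in trials:
        prediction = sum(V[c] for c in present)
        delta = outcome - prediction      # prediction error
        for c in present:
            V[c] += alpha * delta
    return V

# Phase 1: cue A alone is paired with the outcome (e.g., an allergy).
# Phase 2: cues A and B appear together, still followed by the outcome.
trials = [({"A"}, 1.0)] * 20 + [({"A", "B"}, 1.0)] * 20
V = rescorla_wagner(trials, cues={"A", "B"})
print(V)  # A ends near 1.0; B stays near 0.0, i.e. B is "blocked"
```

Because A already predicts the outcome by Phase 2, the prediction error is near zero and the redundant cue B acquires almost no strength. Aberrant, inflated prediction errors would leave residual error on the compound trials, letting B acquire strength anyway; that weakened blocking is the signature the paper links to teleological thinking.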


Some thoughts:

The saying "Life is a projective test" suggests that we all see the world through our own unique lens, shaped by our experiences, beliefs, and values. This lens (read as biases) can cause us to make aberrant associations, or to see patterns and connections that are not actually there.

The paper found that people who are more prone to teleological thinking are also more prone to making aberrant associations. This suggests that our tendency to see the world in teleological terms may be driven by our own biases and assumptions.

In other words, the way we see the world is not always accurate or objective; it is shaped by our personal experiences and perspectives, and it can lead us to make mistakes or to see things that are not really there.

The next time you are trying to make sense of something, be aware of your own biases and assumptions; doing so may help you make better choices.