Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, December 9, 2023

Physicians’ Refusal to Wear Masks to Protect Vulnerable Patients—An Ethical Dilemma for the Medical Profession

Dorfman D, Raz M, Berger Z.
JAMA Health Forum. 2023;4(11):e233780.
doi:10.1001/jamahealthforum.2023.3780

Here is an excerpt:

In theory, the solution to the problem should be simple: patients who wear masks to protect themselves, as recommended by the CDC, can ask the staff and clinicians to wear a mask as well when seeing them, and the clinicians would oblige given the efficacy masks have shown in reducing the spread of respiratory illnesses. However, disabled patients report physicians and other clinical staff having refused to wear a mask when caring for them. Although it is hard to know how prevalent this phenomenon is, what recourse do patients have? How should health care systems approach clinicians and staff who refuse to mask when treating a disabled patient?

Physicians have a history of antagonism to the idea that they themselves might present a health risk to their patients. Famously, when Hungarian physician Ignaz Semmelweis originally proposed handwashing as a measure to reduce puerperal fever, he was met with ridicule and ostracized from the profession.

Physicians were also historically reluctant to adopt new practices to protect not only patients but also physicians themselves against infection in the midst of the AIDS epidemic. In 1985, the CDC presented its guidance on workplace transmission, instructing physicians to provide care, “regardless of whether HCWs [health care workers] or patients are known to be infected with HTLV-III/LAV [human T-lymphotropic virus type III/lymphadenopathy-associated virus] or HBV [hepatitis B virus].” These CDC guidelines offered universal precautions, common-sense, nonstigmatizing, standardized methods to reduce infection. Yet, some physicians bristled at the idea that they need to take simple, universal public health steps to prevent transmission, even in cases in which infectivity is unknown, and instead advocated for a medicalized approach: testing or masking only in cases when a patient is known to be infected. Such an individualized medicalized approach fails to meet the public health needs of the moment.

Patients are the ones who pay the price for physicians’ objections to changes in practices, whether it is handwashing or the denial of care as an unwarranted HIV precaution. Yet today, with the enactment of disability antidiscrimination law, patients are protected, at least on the books.

As we have written elsewhere, federal law supports the right of a disabled individual to request masking as a reasonable disability accommodation in the workplace and at schools.


Here is my summary:

This article explores the ethical dilemma arising from physicians refusing to wear masks, potentially jeopardizing the protection of vulnerable patients. The authors delve into the conflict between personal beliefs and professional responsibilities, questioning the ethical implications of such refusals within the medical profession. The analysis emphasizes the importance of prioritizing patient well-being and public health over individual preferences, calling for a balance between personal freedoms and ethical obligations in health care settings.

Wednesday, December 6, 2023

People are increasingly following their heart and not the Bible - poll

Ryan Foley
Christian Today
Originally published 2 Dec 23

A new study reveals that less than one-third of Americans believe the Bible should serve as the foundation for determining right and wrong, even as most people express support for traditional moral values.

The fourth installment of the America's Values Study, released by the Cultural Research Center at Arizona Christian University Tuesday, asked respondents for their thoughts on traditional moral values and what they would like to see as "America's foundation for determining right and wrong." The survey is based on responses from 2,275 U.S. adults collected in July 2022.

Overall, when asked to identify what they viewed as the primary determinant of right and wrong in the U.S., a plurality of participants (42%) said: "what you feel in your heart." An additional 29% cited majority rule as their desired method for determining right and wrong, while just 29% expressed a belief that the principles laid out in the Bible should determine the understanding of right and wrong in the U.S. That figure rose to 66% among Spiritually Active, Governance Engaged Conservative Christians.

The only other demographic subgroups where at least a plurality of respondents indicated a desire for the Bible to serve as the determinant of right and wrong in the U.S. were respondents who attend an evangelical church (62%), self-described Republicans (57%), theologically defined born-again Christians (54%), self-identified conservatives (49%), those who are at least 50 years of age (39%), members of all Protestant congregations (39%), self-identified Christians (38%) and those who attend mainline Protestant churches (36%).

By contrast, an outright majority of respondents who do not identify with a particular faith at all (53%), along with half of LGBT respondents (50%), self-described moderates (47%), political independents (47%), Democrats (46%), self-described liberals (46%) and Catholic Church attendees (46%) maintained that "what you feel in your heart" should form the foundation of what Americans view as right and wrong.

Thursday, November 23, 2023

How to Maintain Hope in an Age of Catastrophe

Masha Gessen
The Atlantic
Originally posted 12 Nov 23

Gessen interviews psychoanalyst and author Robert Jay Lifton.  Here is an excerpt from the beginning of the article/interview:

Lifton is fascinated by the range and plasticity of the human mind, its ability to contort to the demands of totalitarian control, to find justification for the unimaginable—the Holocaust, war crimes, the atomic bomb—and yet recover, and reconjure hope. In a century when humanity discovered its capacity for mass destruction, Lifton studied the psychology of both the victims and the perpetrators of horror. “We are all survivors of Hiroshima, and, in our imaginations, of future nuclear holocaust,” he wrote at the end of “Death in Life.” How do we live with such knowledge? When does it lead to more atrocities and when does it result in what Lifton called, in a later book, “species-wide agreement”?

Lifton’s big books, though based on rigorous research, were written for popular audiences. He writes, essentially, by lecturing into a Dictaphone, giving even his most ambitious works a distinctive spoken quality. In between his five large studies, Lifton published academic books, papers and essays, and two books of cartoons, “Birds” and “PsychoBirds.” (Every cartoon features two bird heads with dialogue bubbles, such as, “ ‘All of a sudden I had this wonderful feeling: I am me!’ ” “You were wrong.”) Lifton’s impact on the study and treatment of trauma is unparalleled. In a 2020 tribute to Lifton in the Journal of the American Psychoanalytic Association, his former colleague Charles Strozier wrote that a chapter in “Death in Life” on the psychology of survivors “has never been surpassed, only repeated many times and frequently diluted in its power. All those working with survivors of trauma, personal or sociohistorical, must immerse themselves in his work.”


Here is my summary of the article and helpful tips.  Happy (hopeful) Thanksgiving!!

Hope is not blind optimism or wishful thinking, but rather a conscious decision to act in the face of uncertainty and to believe in the possibility of a better future. The article/interview identifies several key strategies for cultivating hope, including:
  • Nurturing a sense of purpose: Having a clear sense of purpose can provide direction and motivation, even in the darkest of times. This purpose can be rooted in personal goals, relationships, or a commitment to a larger cause.
  • Engaging in meaningful action: Taking concrete steps, no matter how small, can help to combat feelings of helplessness and despair. Action can range from individual acts of kindness to participation in collective efforts for social change.
  • Cultivating a sense of community: Connecting with others who share our concerns can provide a sense of belonging and support. Shared experiences and collective action can amplify our efforts and strengthen our resolve.
  • Maintaining a critical perspective: While it is important to hold onto hope, it is also crucial to avoid complacency or denial. We need to recognize the severity of the challenges we face and to remain vigilant in our efforts to address them.
  • Embracing resilience: Hope is not about denying hardship or expecting a quick and easy resolution to our problems. Rather, it is about cultivating the resilience to persevere through difficult times and to believe in the possibility of positive change.

The article concludes by emphasizing the importance of hope as a driving force for positive change. Hope is not a luxury, but a necessity for survival and for building a better future. By nurturing hope, we can empower ourselves and others to confront the challenges we face and to work towards a more just and equitable world.

Tuesday, November 21, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Here is my summary:

The authors argue that many different biases, such as the bias blind spot, hostile media bias, egocentric/ethnocentric bias, and outcome bias, can be traced back to the combination of a fundamental prior belief and humans' tendency toward belief-consistent information processing.

Belief-consistent information processing is the process of attending to, interpreting, and remembering information in a way that is consistent with one's existing beliefs. This process can lead to biases when it results in people ignoring or downplaying information that is inconsistent with their beliefs, and giving undue weight to information that is consistent with their beliefs.

The authors propose that different biases can be distinguished by the specific belief that guides information processing. For example, the bias blind spot is characterized by the belief that one is less biased than others, while hostile media bias is characterized by the belief that the media is biased against one's own group. However, the authors also argue that different biases may share the same underlying belief, and differ only in the specific outcome of information processing that is assessed. For example, both the bias blind spot and hostile media bias may involve the belief that one is more objective than others, but the bias blind spot is assessed in the context of self-evaluations, while hostile media bias is assessed in the context of evaluations of others.

The authors' framework has several advantages over existing theoretical explanations of biases. First, it provides a more parsimonious explanation for a wide range of biases. Second, it generates novel hypotheses that can be tested empirically. For example, the authors hypothesize that people who are more likely to believe in one bias will also be more likely to believe in other biases. Third, the framework has implications for interventions to reduce biases. For example, the authors suggest that interventions to reduce biases could focus on helping people to become more aware of their own biases and to develop strategies for resisting the tendency toward belief-consistent information processing.

Thursday, September 21, 2023

The Myth of the Secret Genius

Brian Klaas
The Garden of Forking Paths
Originally posted 30 Nov 22

Here are two excerpts:

A recent research study, involving a collaboration between physicists who model complex systems and an economist, however, has revealed why billionaires are so often mediocre people masquerading as geniuses. Using computer modelling, they developed a fake society in which there is a realistic distribution of talent among competing agents in the simulation. They then applied some pretty simple rules for their model: talent helps, but luck also plays a role.

Then, they tried to see what would happen if they ran and re-ran the simulation over and over.

What did they find? The most talented people in society almost never became extremely rich. As they put it, “the most successful individuals are not the most talented ones and, on the other hand, the most talented individuals are not the most successful ones.”

Why? The answer is simple. If you’ve got a society of, say, 8 billion people, there are literally billions of humans who are in the middle distribution of talent, the largest area of the Bell curve. That means that in a world that is partly defined by random chance, or luck, the odds that someone from the middle levels of talent will end up as the richest person in the society are extremely high.

Look at this first plot, in which the researchers show capital/success (being rich) on the vertical/Y-axis, and talent on the horizontal/X-axis. What’s clear is that society’s richest person is only marginally more talented than average, and there are a lot of people who are extremely talented that are not rich.

Then, they tried to figure out why this was happening. In their simulated world, lucky and unlucky events would affect agents every so often, in a largely random pattern. When they measured the frequency of luck or misfortune for any individual in the simulation, and then plotted it against becoming rich or poor, they found a strong relationship.

(cut)

The authors conclude by stating: “Our results highlight the risks of the paradigm that we call ‘naive meritocracy’, which fails to give honors and rewards to the most competent people, because it underestimates the role of randomness among the determinants of success.”

Indeed.


Here is my summary:

The myth of the secret genius: The belief that some people are just born with natural talent and that there is nothing we can do to achieve the same level of success.

The importance of hard work: The vast majority of successful people are not geniuses. They are simply people who have worked hard and persevered in the face of setbacks.

The power of luck: Luck plays a role in everyone's success. Some people are luckier than others, and most people do not factor in luck, as well as other external variables, into their assessment.  This bias is another form of the Fundamental Attribution Error.

The importance of networks: Our networks play a big role in our success. We need to be proactive in building relationships with people who can help us achieve our goals.
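
To make the luck-versus-talent mechanism concrete, here is a minimal sketch in the spirit of the simulation the excerpt describes. The agent count, event probability, and payoff rules are illustrative assumptions on my part, not the researchers' exact parameters.

```python
import random

def run_simulation(n_agents=1000, n_steps=80, p_event=0.1, seed=0):
    """Toy talent-vs-luck model: talent is normally distributed, everyone
    starts with the same capital, and random events do the rest."""
    rng = random.Random(seed)
    talent = [min(max(rng.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(n_agents)]
    capital = [10.0] * n_agents
    for _ in range(n_steps):
        for i in range(n_agents):
            if rng.random() < p_event:            # an event hits agent i
                if rng.random() < 0.5:            # lucky event ...
                    if rng.random() < talent[i]:  # ... exploited with prob = talent
                        capital[i] *= 2
                else:                             # unlucky event
                    capital[i] /= 2
    richest = max(range(n_agents), key=lambda i: capital[i])
    most_talented = max(range(n_agents), key=lambda i: talent[i])
    print("Richest agent's talent:", round(talent[richest], 2))
    print("Most talented agent's capital:", round(capital[most_talented], 1))

run_simulation()
```

Re-running this with different seeds makes the excerpt's point: the richest agent is usually only modestly above average in talent, while the most talented agent rarely ends up richest.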

Tuesday, September 19, 2023

‘Bullshit’ After All? Why People Consider Their Jobs Socially Useless

Walo, S. (2023).
Work, Employment and Society, 0(0).

Abstract

Recent studies show that many workers consider their jobs socially useless. Thus, several explanations for this phenomenon have been proposed. David Graeber’s ‘bullshit jobs theory’, for example, claims that some jobs are in fact objectively useless, and that these are found more often in certain occupations than in others. Quantitative research on Europe, however, finds little support for Graeber’s theory and claims that alienation may be better suited to explain why people consider their jobs socially useless. This study extends previous analyses by drawing on a rich, under-utilized dataset and provides new evidence for the United States specifically. Contrary to previous studies, it thus finds robust support for Graeber’s theory on bullshit jobs. At the same time, it also confirms existing evidence on the effects of various other factors, including alienation. Work perceived as socially useless is therefore a multifaceted issue that must be addressed from different angles.

Discussion and conclusion

Using survey data from the US, this article tests Graeber’s (2018) argument that socially useless jobs are primarily found in specific occupations. Doing so, it finds that working in one of Graeber’s occupations significantly increases the probability that workers perceive their job as socially useless (compared with all others). This is true for administrative support occupations, sales occupations, business and finance occupations, and managers. Only legal occupations did not show a significant effect as predicted by Graeber’s theory. More detailed analyses even reveal that, of all 21 occupations, Graeber’s occupations are the ones that are most strongly associated with socially useless jobs when other factors are controlled for. This article is therefore the first to find quantitative evidence supporting Graeber’s argument. In addition, this article also confirms existing evidence on various other factors that can explain why people consider their jobs socially useless, including alienation, social interaction and public service motivation.

These findings may seem somewhat contradictory to the results of Soffia et al. (2022) who find that Graeber’s theory is not supported by their data. This can be explained by several differences between their study and this one. First, Soffia et al. ask people whether they ‘have the feeling of doing useful work’, while this study asks them whether they think they are making a ‘positive impact on [their] community and society’. These differently worded questions may elicit different responses. However, additional analyses show that results do not differ much between these questions (see online supplementary appendix C). Second, Soffia et al. examine data from Europe, while this study uses data from the US. This supports the notion that Graeber’s theory may only apply to heavily financialized Anglo-Saxon countries. Third, the results of Soffia et al. are based on raw distributions over occupations, while the findings presented here are mainly based on regression models that control for various other factors. If only raw distributions are analysed, however, this article also finds only limited support for Graeber’s theory.
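
As a rough illustration of the kind of regression analysis described above, here is a hypothetical sketch; the variable names, coding, and controls are my assumptions, not the paper's exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per respondent.
df = pd.read_csv("worker_survey.csv")

# useless: 1 if the respondent rates their job as socially useless, else 0
# graeber_occupation: 1 if the job falls in one of Graeber's occupations
model = smf.logit(
    "useless ~ graeber_occupation + alienation + social_interaction"
    " + public_service_motivation + C(education) + age + female",
    data=df,
).fit()
print(model.summary())  # a positive graeber_occupation coefficient supports the theory
```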


My take for clinical psychologists:

Bullshit jobs are not just a problem for the people who do them. They also have a negative impact on society as a whole. For example, they can lead to a decline in productivity, a decrease in innovation, and an increase in inequality.

Bullshit jobs are often created by the powerful in society in order to maintain their own power and privilege. For example, managers may create bullshit jobs in order to justify their own positions or to make themselves look more important.

There is a growing awareness of the problem of bullshit jobs, and there are a number of initiatives underway to address it. For example, some organizations are now hiring "bullshit detectives" to identify and eliminate bullshit jobs.

Friday, September 15, 2023

Older Americans are more vulnerable to prior exposure effects in news evaluation.

Lyons, B. A. (2023). 
Harvard Kennedy School Misinformation Review.

Outline

Older news users may be especially vulnerable to prior exposure effects, whereby news comes to be seen as more accurate over multiple viewings. I test this in re-analyses of three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of mainstream, hyperpartisan, and false political headlines (139,082 observations). I find that prior exposure effects increase with age—being strongest for those in the oldest cohort (60+)—especially for false news. I discuss implications for the design of media literacy programs and policies regarding targeted political advertising aimed at this group.

Essay Summary
  • I used three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of actual mainstream, hyperpartisan, or false political headlines. Respondents saw a sample of headlines in the first wave and all headlines in the second wave, allowing me to determine if prior exposure increases perceived accuracy differentially across age.  
  • I found that the effect of prior exposure to headlines on perceived accuracy increases with age. The effect increases linearly with age, with the strongest effect for those in the oldest age cohort (60+). These age differences were most pronounced for false news.
  • These findings suggest that repeated exposure can help account for the positive relationship between age and sharing false information online. However, the size of this effect also underscores that other factors (e.g., greater motivation to derogate the out-party) may play a larger role. 
Here is the beginning of the Implications section:

Web-tracking and social media trace data paint a concerning portrait of older news users. Older American adults were much more likely to visit dubious news sites in 2016 and 2020 (Guess, Nyhan, et al., 2020; Moore et al., 2023), and were also more likely to be classified as false news “supersharers” on Twitter, a group who shares the vast majority of dubious news on the platform (Grinberg et al., 2019). Likewise, this age group shares about seven times more links to these domains on Facebook than younger news consumers (Guess et al., 2019; Guess et al., 2021). 

Interestingly, however, older adults appear to be no worse, if not better, at identifying false news stories than younger cohorts when asked in surveys (Brashier & Schacter, 2020). Why might older adults identify false news in surveys but fall for it “in the wild?” There are likely multiple factors at play, ranging from social changes across the lifespan (Brashier & Schacter, 2020) to changing orientations to politics (Lyons et al., 2023) to cognitive declines (e.g., in memory) (Brashier & Schacter, 2020). In this paper, I focus on one potential contributor. Specifically, I tested the notion that differential effects of prior exposure to false news helps account for the disjuncture between older Americans’ performance in survey tasks and their behavior in the wild.

A large body of literature has been dedicated to exploring the magnitude and potential boundary conditions of the illusory truth effect (Hassan & Barber, 2021; Henderson et al., 2021; Pillai & Fazio, 2021)—a phenomenon in which false statements or news headlines (De keersmaecker et al., 2020; Pennycook et al., 2018) come to be believed over multiple exposures. Might this effect increase with age? As detailed by Brashier and Schacter (2020), cognitive deficits are often blamed for older news users’ behaviors. This may be because cognitive abilities are strongest in young adulthood and slowly decline beyond that point (Salthouse, 2009), resulting in increasingly effortful cognition (Hess et al., 2016). As this process unfolds, older adults may be more likely to fall back on heuristics when judging the veracity of news items (Brashier & Marsh, 2020). Repetition, the source of the illusory truth effect, is one heuristic that may be relied upon in such a scenario. This is because repeated messages feel easier to process and thus are seen as truer than unfamiliar ones (Unkelbach et al., 2019).
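
To make the design concrete, here is a minimal, hypothetical sketch of the kind of interaction test the essay summary describes; the column names and the clustering choice are my assumptions, not the paper's exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-wave data: one row per respondent-headline rating in wave 2.
df = pd.read_csv("headline_ratings.csv")

# perceived_accuracy: wave-2 accuracy rating of the headline
# prior_exposure: 1 if the headline was also shown in wave 1, else 0
# age: respondent age in years
m = smf.ols("perceived_accuracy ~ prior_exposure * age", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
# A positive interaction means the prior-exposure effect grows with age.
print(m.params["prior_exposure:age"])
```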

Friday, September 8, 2023

He was a top church official who criticized Trump. He says Christianity is in crisis

S. Detrow, G. J. Sanchez, & S. Handel
npr.org
Originally posted 8 Aug 23

Here is an excerpt:

What's the big deal? 

According to Moore, Christianity is in crisis in the United States today.
  • Moore is now the editor-in-chief of the Christianity Today magazine and has written a new book, Losing Our Religion: An Altar Call For Evangelical America, which is his attempt at finding a path forward for the religion he loves.
  • Moore believes part of the problem is that "almost every part of American life is tribalized and factionalized," and that has extended to the church.
  • "I think if we're going to get past the blood and soil sorts of nationalism or all of the other kinds of totalizing cultural identities, it's going to require rethinking what the church is," he told NPR.
  • During his time in office, Trump embraced a Christian nationalist stance — the idea that the U.S. is a Christian country and should enforce those beliefs. In the run-up to the 2024 presidential election, Republican candidates are again vying for the influential evangelical Christian vote, demonstrating its continued influence in politics.
  • In Aug. 2022, church leaders confirmed the Department of Justice was investigating Southern Baptists following a sexual abuse crisis. In a statement, SBC leaders said: "Current leaders across the SBC have demonstrated a firm conviction to address those issues of the past and are implementing measures to ensure they are never repeated in the future."
  • In 2017, the church voted to formally "denounce and repudiate" white nationalism at its annual meeting.

What is he saying? 

Moore spoke to All Things Considered's Scott Detrow about what he thinks the path forward is for evangelicalism in America.

On why he thinks Christianity is in crisis:
It was the result of having multiple pastors tell me, essentially, the same story about quoting the Sermon on the Mount, parenthetically, in their preaching — "turn the other cheek" — [and] to have someone come up after to say, "Where did you get those liberal talking points?" And what was alarming to me is that in most of these scenarios, when the pastor would say, "I'm literally quoting Jesus Christ," the response would not be, "I apologize." The response would be, "Yes, but that doesn't work anymore. That's weak." And when we get to the point where the teachings of Jesus himself are seen as subversive to us, then we're in a crisis.

The information is here. 

Wednesday, September 6, 2023

Could a Large Language Model Be Conscious?

David Chalmers
Boston Review
Originally posted 9 Aug 23

Here are two excerpts:

Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

This gives rise to a second challenge: Should we create conscious AI? This is a major ethical challenge for the community. The question is important and the answer is far from obvious.

We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability. If conscious AI is coming somewhere down the line, then that will raise a new group of difficult ethical challenges, with the potential for new forms of injustice added on top of the old ones. One issue is that conscious AI could well lead to new harms toward humans. Another is that it could lead to new harms toward AI systems themselves.

I’m not an ethicist, and I won’t go deeply into the ethical questions here, but I don’t take them lightly. I don’t want the roadmap to conscious AI that I’m laying out here to be seen as a path that we have to go down. The challenges I’m laying out in what follows could equally be seen as a set of red flags. Each challenge we overcome gets us closer to conscious AI, for better or for worse. We need to be aware of what we’re doing and think hard about whether we should do it.

(cut)

Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
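
For what it's worth, the rough arithmetic in the excerpt can be checked in a few lines, assuming (as Chalmers does only for illustration) that the six factors are independent and that each has about a one-in-three chance of being required.

```python
p_factor_required = 1 / 3        # credence that any one factor is required
p_not_required = 1 - p_factor_required

# If the six factors were independent, the chance that none of them is
# actually required -- i.e., that a system lacking all six could still be
# conscious -- would be:
print(p_not_required ** 6)       # ~0.088, a bit under one in ten
```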


Here are some of the key points from the article:
  1. There is no consensus on what consciousness is, so it is difficult to say definitively whether or not LLMs are conscious.
  2. Some people believe that consciousness requires carbon-based biology, but Chalmers argues that this is a form of biological chauvinism.  I agree with this completely. We can have synthetic forms of consciousness.
  3. Other people believe that LLMs are not conscious because they lack sensory processing or bodily embodiment. Chalmers argues that these objections are not decisive, but they do raise important questions about the nature of consciousness.
  4. Chalmers concludes by suggesting that we should take the possibility of LLM consciousness seriously, but that we should also be cautious about making definitive claims about it.

Wednesday, August 30, 2023

Not all skepticism is “healthy” skepticism: Theorizing accuracy- and identity-motivated skepticism toward social media misinformation

Li, J. (2023). 
New Media & Society, 0(0). 

Abstract

Fostering skepticism has been seen as key to addressing misinformation on social media. This article reveals that not all skepticism is “healthy” skepticism by theorizing, measuring, and testing the effects of two types of skepticism toward social media misinformation: accuracy- and identity-motivated skepticism. A two-wave panel survey experiment shows that when people’s skepticism toward social media misinformation is driven by accuracy motivations, they are less likely to believe in congruent misinformation later encountered. They also consume more mainstream media, which in turn reinforces accuracy-motivated skepticism. In contrast, when skepticism toward social media misinformation is driven by identity motivations, people not only fall for congruent misinformation later encountered, but also disregard platform interventions that flag a post as false. Moreover, they are more likely to see social media misinformation as favoring opponents and intentionally avoid news on social media, both of which form a vicious cycle of fueling more identity-motivated skepticism.

Discussion

I have made the case that it is important to distinguish between accuracy-motivated skepticism and identity-motivated skepticism. They are empirically distinguishable constructs that cast opposing effects on outcomes important for a well-functioning democracy. Across the board, accuracy-motivated skepticism produces normatively desirable outcomes. Holding a higher level of accuracy-motivated skepticism makes people less likely to believe in congruent misinformation they encounter later, offering hope that partisan motivated reasoning can be attenuated. Accuracy-motivated skepticism toward social media misinformation also has a mutually reinforcing relationship with consuming news from mainstream media, which can serve to verify information on social media and produce potential learning effects.

In contrast, not all skepticism is “healthy” skepticism. Holding a higher level of identity-motivated skepticism not only increases people’s susceptibility to congruent misinformation they encounter later, but also renders content flagging by social media platforms less effective. This is worrisome as calls for skepticism and platform content moderation have been a crucial part of recently proposed solutions to misinformation. Further, identity-motivated skepticism reinforces perceived bias of misinformation and intentional avoidance of news on social media. These can form a vicious cycle of close-mindedness and politicization of misinformation.

This article advances previous understanding of skepticism by showing that beyond the amount of questioning (the tipping point between skepticism and cynicism), the type of underlying motivation matters for whether skepticism helps people become more informed. By bringing motivated reasoning and media skepticism into the same theoretical space, this article helps us make sense of the contradictory evidence on the utility of media skepticism. Skepticism in general should not be assumed to be “healthy” for democracy. When driven by identity motivations, skepticism toward social media misinformation is counterproductive for political learning; only when skepticism toward social media is driven by the accuracy motivations does it inoculate people against favorable falsehoods and encourage consumption of credible alternatives.


Here are some additional thoughts on the research:
  • The distinction between accuracy-motivated skepticism and identity-motivated skepticism is a useful one. It helps to explain why some people are more likely to believe in misinformation than others.
  • The findings of the studies suggest that interventions that promote accuracy-motivated skepticism could be effective in reducing the spread of misinformation on social media.
  • It is important to note that the research was conducted in the United States. It is possible that the findings would be different in other countries.

Wednesday, August 23, 2023

Excess Death Rates for Republican and Democratic Registered Voters in Florida and Ohio During the COVID-19 Pandemic

Wallace J, Goldsmith-Pinkham P, Schwartz JL. 
JAMA Intern Med. 
Published online July 24, 2023.
doi:10.1001/jamainternmed.2023.1154

Key Points

Question

Was political party affiliation a risk factor associated with excess mortality during the COVID-19 pandemic in Florida and Ohio?

Findings

In this cohort study evaluating 538 159 deaths in individuals aged 25 years and older in Florida and Ohio between March 2020 and December 2021, excess mortality was significantly higher for Republican voters than Democratic voters after COVID-19 vaccines were available to all adults, but not before. These differences were concentrated in counties with lower vaccination rates, and primarily noted in voters residing in Ohio.

Meaning

The differences in excess mortality by political party affiliation after COVID-19 vaccines were available to all adults suggest that differences in vaccination attitudes and reported uptake between Republican and Democratic voters may have been a factor in the severity and trajectory of the pandemic in the US.


My Take

Beliefs are a powerful force that can influence our health behaviors. Our beliefs about health, illness, and the causes of disease can shape our decisions about what we eat, how much we exercise, and whether or not we see a doctor when we're sick.

There is a growing body of research that suggests that beliefs can have a significant impact on health outcomes. For example, one study found that people who believe that they have a strong sense of purpose in life tend to live longer than those who do not. Another study found that people who believe in a higher power tend to be more optimistic and have a more positive outlook on life, which can lead to better mental health and, in turn, better physical health. However, certain beliefs may be harmful to health and longevity.

The study suggests that beliefs may play a role in the relationship between political party affiliation and excess death rates. For example, Republicans are more likely to hold beliefs that are associated with vaccine hesitancy, such as distrust of government and the medical establishment. These beliefs may have contributed to the lower vaccination rates among Republican-registered voters, which in turn may have led to higher excess death rates.

Saturday, August 12, 2023

Teleological thinking is driven by aberrant associations

Corlett, P. R. (2023, June 17).
PsyArXiv preprints
https://doi.org/10.31234/osf.io/wgyqs

Abstract

Teleological thought — the tendency to ascribe purpose to objects and events — is useful in some cases (encouraging explanation-seeking), but harmful in others (fueling delusions and conspiracy theories). What drives maladaptive teleological thinking? A fundamental distinction in how we learn causal relationships between events is whether it can be best explained via associations versus via propositional thought. Here, we propose that directly contrasting the contributions of these two pathways can elucidate where teleological thinking goes wrong. We modified a causal learning task such that we could encourage one pathway over another in different instances. Across experiments (total N=600), teleological tendencies were correlated with delusion-like ideas and uniquely explained by aberrant associative learning, but not by learning via propositional rules. Computational modeling suggested that the relationship between associative learning and teleological thinking can be explained by spurious prediction errors that imbue random events with more significance — providing a new understanding for how humans make meaning of lived events.

From the Discussion section

Teleological thinking, in previous work, has been defined in terms of “beliefs”, “social-cognitive biases”, and indeed carries “reasoning” in its very name (as it is used interchangeably with teleological or ‘purpose-based’ reasoning)—which is why it might be surprising to learn of the relationship between teleological thinking and low-level associative learning, and not learning via propositional reasoning. The key result across experiments can be summarized as such: aberrant prediction errors augured weaker non-additive blocking, which predicted tendencies to engage in teleological thinking, which was consistently correlated with distress from delusional thinking. This pattern of results was demonstrated in both behavioral and computational modeling data, and withstood even more conservative regression models, accounting for the variance explained by other variables. In other words, the same people who learn more from irrelevant cues or overpredict relationships in the non-additive blocking task (by predicting that cues [that should have been “blocked”] might also cause allergic reactions) tend to also ascribe more purpose to random events—and to experience more distress from delusional beliefs (and thus hold their delusional beliefs in a more patient-like way).
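
To give a feel for the mechanism, here is a toy Rescorla-Wagner-style learner I wrote for illustration; it is not the authors' computational model, but it shows how spurious prediction errors let a redundant cue acquire associative strength that blocking should prevent.

```python
import random

def blocked_cue_strength(p_spurious=0.0, spurious_size=0.5, lr=0.2, seed=1):
    """Returns the associative strength of cue B, which should stay near
    zero when blocking is intact."""
    rng = random.Random(seed)
    V = {"A": 0.0, "B": 0.0}
    # Phase 1: cue A alone is paired with the outcome (lambda = 1).
    for _ in range(60):
        V["A"] += lr * (1.0 - V["A"])
    # Phase 2: compound AB is paired with the same outcome; the outcome is
    # already predicted by A, so B should be "blocked".
    for _ in range(60):
        error = 1.0 - (V["A"] + V["B"])
        if rng.random() < p_spurious:        # aberrant, spurious prediction error
            error += spurious_size
        for cue in ("A", "B"):
            V[cue] += lr * error
    return V["B"]

print(round(blocked_cue_strength(), 3))                # ~0.0: intact blocking
print(round(blocked_cue_strength(p_spurious=0.4), 3))  # > 0: weakened blocking
```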


Some thoughts:

The saying "Life is a projective test" suggests that we all see the world through our own unique lens, shaped by our experiences, beliefs, and values. This lens (read as biases) can cause us to make aberrant associations, or to see patterns and connections that are not actually there.

The authors of the paper found that people who are more likely to engage in teleological thinking are also more likely to make aberrant associations. This suggests that our tendency to see the world in a teleological way may be driven by our own biases and assumptions.

In other words, the way we see the world is not always accurate or objective. It is shaped by our own personal experiences and perspectives. This can lead us to make mistakes, or to see things that are not really there.

The next time you are trying to make sense of something, it is important to be aware of your own biases and assumptions; doing so may help you make better choices.

Friday, August 11, 2023

How and why people want to be more moral

Sun, J., Wilt, J. A., Meindl, P., et al. (2023).
Journal of Personality.
https://doi.org/10.1111/jopy.12812

Abstract

Objective

What types of moral improvements do people wish to make? Do they hope to become more good, or less bad? Do they wish to be more caring? More honest? More loyal? And why exactly do they want to become more moral? Presumably, most people want to improve their morality because this would benefit others, but is this in fact their primary motivation? Here, we begin to investigate these questions.

Method

Across two large, preregistered studies (N = 1818), participants provided open-ended descriptions of one change they could make in order to become more moral; they then reported their beliefs about and motives for this change.

Results

In both studies, people most frequently expressed desires to improve their compassion and more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being.

Conclusions

Together, these studies provide rich descriptive insights into how ordinary people want to be more moral, and show that they are particularly motivated to do so for their own sake.


My summary:
  • People most frequently expressed desires to improve their compassion. This suggests that people are motivated to become more moral in order to be more caring and helpful to others.
  • People more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. This suggests that people are motivated to become more moral by doing more good things, rather than by simply avoiding doing bad things.
  • The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being. This suggests that people are motivated to become more moral for their own sake, as well as for the sake of others.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).

Abstract

Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.

Summary

Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants were more likely to trust the AI advisor, even when they knew that it was untrustworthy. This was especially true when the advisor was able to provide a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors. This means that we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors.  By contrast, decision-makers do disregard advice from a human convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can be used to address the problem of AI-induced belief distortion. These solutions include:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will help people to be more aware of the risks of using AI models and to be more critical of the information that they generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Friday, July 21, 2023

Belief in Five Spiritual Entities Edges Down to New Lows

Megan Brenan
news.gallup.com
Originally posted 20 July 23

The percentages of Americans who believe in each of five religious entities -- God, angels, heaven, hell and the devil -- have edged downward by three to five percentage points since 2016. Still, majorities believe in each, ranging from a high of 74% believing in God to lows of 59% for hell and 58% for the devil. About two-thirds each believe in angels (69%) and heaven (67%).

Gallup has used this framework to measure belief in these spiritual entities five times since 2001, and the May 1-24, 2023, poll finds that each is at its lowest point. Compared with 2001, belief in God and heaven is down the most (16 points each), while belief in hell has fallen 12 points, and the devil and angels are down 10 points each.

This question asks respondents whether they believe in each concept or if they are unsure, and from 13% to 15% currently say they are not sure. At the same time, nearly three in 10 U.S. adults do not believe in the devil or hell, while almost two in 10 do not believe in angels and heaven, and 12% say they do not believe in God.

As the percentage of believers has dropped over the past two decades, the corresponding increases have occurred mostly in nonbelief, with much smaller increases in uncertainty. This is true for all but belief in God, which has seen nearly equal increases in uncertainty and nonbelief.

In the current poll, about half of Americans, 51%, believe in all five spiritual entities, while 11% do not believe in any of them. Another 7% are not sure about all of them, while the rest (31%) believe in some and not others.

Gallup periodically measures Americans’ belief in God with different question wordings, producing slightly different results. While the majority of U.S. adults say they believe in God regardless of the question wording, when not offered the option to say they are unsure, significantly more (81% in a survey conducted last year) said they believe in God.



My take: Despite the decline in belief, majorities of Americans still believe in each of the five spiritual entities. This suggests that religion remains an important part of American culture, even as the country becomes more secularized.

Friday, July 14, 2023

The illusion of moral decline

Mastroianni, A.M., Gilbert, D.T.
Nature (2023).

Abstract

Anecdotal evidence indicates that people believe that morality is declining. In a series of studies using both archival and original data (n = 12,492,983), we show that people in at least 60 nations around the world believe that morality is declining, that they have believed this for at least 70 years and that they attribute this decline both to the decreasing morality of individuals as they age and to the decreasing morality of successive generations. Next, we show that people’s reports of the morality of their contemporaries have not declined over time, suggesting that the perception of moral decline is an illusion. Finally, we show how a simple mechanism based on two well-established psychological phenomena (biased exposure to information and biased memory for information) can produce an illusion of moral decline, and we report studies that confirm two of its predictions about the circumstances under which the perception of moral decline is attenuated, eliminated or reversed (that is, when respondents are asked about the morality of people they know well or people who lived before the respondent was born). Together, our studies show that the perception of moral decline is pervasive, perdurable, unfounded and easily produced. This illusion has implications for research on the misallocation of scarce resources, the underuse of social support and social influence.

Discussion

Participants in the foregoing studies believed that morality has declined, and they believed this in every decade and in every nation we studied. They believed the decline began somewhere around the time they were born, regardless of when that was, and they believed it continues to this day. They believed the decline was a result both of individuals becoming less moral as they move through time and of the replacement of more moral people by less moral people. And they believed that the people they personally know and the people who lived before they did are exceptions to this rule. About all these things, they were almost certainly mistaken. One reason they may have held these mistaken beliefs is that they may typically have encountered more negative than positive information about the morality of contemporaries whom they did not personally know, and the negative information may have faded more quickly from memory or lost its emotional impact more quickly than the positive information did, leading them to believe that people today are not as kind, nice, honest or good as once upon a time they were.

Here are some important points:
  • There are a number of reasons why people might believe that morality is declining. One reason is that people tend to focus on negative news stories, which can give the impression that the world is a more dangerous and immoral place than it actually is. Another reason is that people tend to remember negative events more vividly than positive events, which can also lead to the impression that morality is declining.
  • Despite the widespread belief in moral decline, there is no evidence to suggest that morality is actually getting worse. In fact, there is evidence to suggest that morality has been improving over time. For example, crime rates have been declining for decades, and people are more likely to volunteer and donate to charity than they were in the past.
  • The illusion of moral decline can have a number of negative consequences. It can lead to cynicism, apathy, and a sense of hopelessness. It can also make it more difficult to solve social problems, because people may believe that the problem is too big or too complex to be solved.
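
As a toy illustration of the biased-exposure-plus-biased-memory mechanism the authors propose, here is a small simulation; the exposure and decay rates are made-up assumptions, not estimates from the paper.

```python
import random

def remembered_morality(years_ago, n_items=1000, p_negative=0.7,
                        neg_decay=0.15, pos_decay=0.05, seed=0):
    """Average remembered valence of information about strangers encountered
    `years_ago` years back: exposure skews negative, but negative memories
    fade faster than positive ones."""
    rng = random.Random(seed)
    total, kept = 0.0, 0
    for _ in range(n_items):
        negative = rng.random() < p_negative
        decay = neg_decay if negative else pos_decay
        if rng.random() < (1 - decay) ** years_ago:   # does the memory survive?
            total += -1.0 if negative else 1.0
            kept += 1
    return total / kept if kept else 0.0

# "People today" are judged from recent, negativity-biased exposure; "people
# back then" are judged from memories in which the negativity has faded.
print(round(remembered_morality(years_ago=0), 2))    # clearly negative
print(round(remembered_morality(years_ago=20), 2))   # looks rosier -> apparent decline
```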

Tuesday, July 11, 2023

Conspirituality: How New Age conspiracy theories threaten public health

D. Beres, M. Remski, & J. Walker
bigthink.com
Originally posted 17 June 23

Here is an excerpt:

Disaster capitalism and disaster spirituality rely, respectively, on an endless supply of items to commodify and minds to recruit. While both roar into high gear in times of widespread precarity and vulnerability, in disaster spirituality there is arguably more at stake on the supply side. Hedge fund managers can buy up distressed properties in post-Katrina New Orleans to gentrify and flip. They have cash on hand to pull from when opportunity strikes, whereas most spiritual figures have to use other means for acquisitions and recruitment during times of distress.

Most of the influencers operating in today’s conspirituality landscape stand outside of mainstream economies and institutional support. They’ve been developing fringe religious ideas and making money however they can, usually up against high customer turnover.

For the mega-rich disaster capitalist, a hurricane or civil war is a windfall. But for the skint disaster spiritualist, a public catastrophe like 9/11 or COVID-19 is a life raft. Many have no choice but to climb aboard and ride. Additionally, if your spiritual group has been claiming for years to have the answers to life’s most desperate problems, the disaster is an irresistible dare, a chance to make good on divine promises. If the spiritual group has been selling health ideologies or products they guarantee will ensure perfect health, how can they turn away from the opportunity presented by a pandemic?


Here is my summary with some extras:

The article argues that conspirituality is a growing problem that is threatening public health. Conspiritualists push the false beliefs that vaccines are harmful, that the COVID-19 pandemic is a hoax, and that natural immunity is the best way to protect oneself from disease. These beliefs can lead people to make decisions that put their health and the health of others at risk.

The article also argues that conspirituality spreads largely through social media platforms, where the accuracy of information is difficult to verify. As a result, people may come to believe false or misleading claims that carry serious consequences for their health, while some influencers profit from spreading that disinformation.

The article concludes by calling for more research on conspirituality and its impact on public health. It also calls for public health professionals to be more aware of conspirituality and to develop strategies to address it.
  • Conspirituality is a term that combines "conspiracy" and "spirituality." It refers to the belief that certain anti-science ideas (such as alternative medicine, non-scientific interventions, and spiritual healing) are being suppressed by a powerful elite. Conspiritualists often believe that this elite is responsible for a wide range of problems, including the COVID-19 pandemic.
  • The term "conspirituality" was coined by sociologists Charlotte Ward and David Voas in 2011. They argued that conspirituality is a unique form of conspiracy theory that blends (1) New Age beliefs (religious and spiritual ideas) in a coming paradigm shift in consciousness, in which we will all be awakened to a new reality, with (2) traditional conspiracy theories, in which an elite, powerful, and covert group of individuals is controlling, or trying to control, the social and political order.

Saturday, July 8, 2023

Microsoft Scraps Entire Ethical AI Team Amid AI Boom

Lauren Leffer
gizmodo.com
Updated on March 14, 2023
Still relevant

Microsoft is currently in the process of shoehorning text-generating artificial intelligence into every single product that it can. And starting this month, the company will be continuing on its AI rampage without a team dedicated to internally ensuring those AI features meet Microsoft’s ethical standards, according to a Monday night report from Platformer.

Microsoft has scrapped its whole Ethics and Society team within the company’s AI sector, as part of ongoing layoffs set to impact 10,000 total employees, per Platformer. The company maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles to govern corporate AI decision making. But the ethics and society taskforce, which bridged the gap between policy and products, is reportedly no more.

Gizmodo reached out to Microsoft to confirm the news. In response, a company spokesperson sent the following statement:
Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As the technology has evolved and strengthened, so has our investment, which at times has meant adjusting team structures to be more effective. For example, over the past six years we have increased the number of people within our product teams who are dedicated to ensuring we adhere to our AI principles. We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.

To Platformer, the company reportedly previously shared this slightly different version of the same statement:

Microsoft is committed to developing AI products and experiences safely and responsibly...Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice...We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.

Note that, in this older version, Microsoft does inadvertently confirm that the ethics and society team is no more. The company also previously specified staffing increases were in the Office of Responsible AI vs people generally “dedicated to ensuring we adhere to our AI principles.”

Yet, despite Microsoft’s reassurances, former employees told Platformer that the Ethics and Society team played a key role translating big ideas from the responsibility office into actionable changes at the product development level.

The info is here.

Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., Tepper, S.J. 
Nat Rev Psychol (2023). 

Abstract

People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.
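
To make the definition concrete, here is a small, hypothetical sketch (in Python, and not taken from the Review) of a positive-sum exchange: both parties gain from a simple trade, whereas a zero-sum believer would expect one party's gain to come entirely at the other's expense.

# Party A has surplus apples, Party B surplus bread; each values the scarce
# good more than the surplus good, so swapping raises both parties' utility.
utilities = {
    "A": {"apple": 1.0, "bread": 3.0},   # A is tired of apples
    "B": {"apple": 3.0, "bread": 1.0},   # B is tired of bread
}

def gain_from_swap(party, gives, gets):
    # Change in utility when `party` trades one unit of `gives` for one of `gets`.
    return utilities[party][gets] - utilities[party][gives]

gain_a = gain_from_swap("A", gives="apple", gets="bread")
gain_b = gain_from_swap("B", gives="bread", gets="apple")

print("A's gain: %+.1f, B's gain: %+.1f" % (gain_a, gain_b))   # both come out +2.0
# A zero-sum believer would instead predict gain_a == -gain_b, that is, that one
# party's gain must be matched by the other party's loss.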

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place.

For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs.

More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation. For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.