Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, June 19, 2021

Preparing for the Next Generation of Ethical Challenges Concerning Heritable Human Genome Editing

Robert Klitzman
The American Journal of Bioethics
(2021), Volume 21(6), 1-4.

Here is the conclusion:

Moving Forward

Policymakers will thus need to make complex and nuanced risk/benefit calculations regarding costs and extents of treatments, ages of onset, severity of symptoms, degrees of genetic penetrance, disease prevalence, future scientific benefits, research costs, appropriate allocations of limited resources, and questions of who should pay.

Future efforts should thus consider examining scientific and ethical challenges in closer conjunction, rather than in isolation, and bring together the respective strengths of the Commission’s and the WHO Committee’s approaches. The WHO Committee includes broader stakeholders, but does not yet appear to have drawn conclusions regarding such specific medical and scientific scenarios (WHO 2020). These two groups’ respective memberships also differ in instructive ways that can mutually inform future deliberations. Among the Commission’s 18 chairs and members, only two appear to work primarily in ethics or policy; the majority are scientists (National Academy of Medicine, the National Academies of Sciences and the Royal Society 2020). In contrast, the WHO Committee includes two chairs and 16 members, with both chairs and the majority of members working primarily in ethics, policy or law (WHO 2020). ASRM and other countries’ relevant professional organizations should also stipulate that physicians and healthcare professionals should not be involved in any way in the care of patients using germline editing abroad.

The Commission’s Report thus provides valuable insights and guidelines, but multiple stakeholders will likely soon confront additional, complex dilemmas involving interplays of both science and ethics that also need urgent attention.

Sunday, June 13, 2021

Philosophy in Science: Can philosophers of science permeate through science and produce scientific knowledge?

Pradeu, T., et al. (2021)
Preprint
British Journal for the Philosophy of Science

Abstract

Most philosophers of science do philosophy ‘on’ science. By contrast, others do philosophy ‘in’ science (‘PinS’), i.e., they use philosophical tools to address scientific problems and to provide scientifically useful proposals. Here, we consider the evidence in favour of a trend of this nature. We proceed in two stages. First, we identify relevant authors and articles empirically with bibliometric tools, given that PinS would be likely to infiltrate science and thus to be published in scientific journals (‘intervention’), cited in scientific journals (‘visibility’) and sometimes recognized as a scientific result by scientists (‘contribution’). We show that many central figures in philosophy of science have been involved in PinS, and that some philosophers have even ‘specialized’ in this practice. Second, we propose a conceptual definition of PinS as a process involving three conditions (raising a scientific problem, using philosophical tools to address it, and making a scientific proposal), and we ask whether the articles identified at the first stage fulfil all these conditions. We show that PinS is a distinctive, quantitatively substantial trend within philosophy of science, demonstrating the existence of a methodological continuity from science to philosophy of science.

From the Conclusion

A crucial and long-standing question for philosophers of science is how philosophy of science relates to science, including, in particular, its possible impact on science. Various important ways in which philosophy of science can have an impact on science have been documented in the past, from the influence of Mach, Poincaré and Schopenhauer on the development of the theory of relativity (Rovelli [2018]) to Popper’s long-recognized influence on scientists, such as Eccles and Medawar, and some recent reflections on how best to organize science institutionally (e.g. Leonelli [2017]). Here, we identify and describe an approach that we propose to call ‘PinS’, which adds another, in our view essential, layer to this picture.

By combining quantitative and qualitative tools, we demonstrate the existence of a corpus of articles by philosophers of science, either published in philosophy of science journals or in scientific journals, raising scientific problems and aiming to contribute to their resolution via the use of philosophical tools. PinS constitutes a subdomain of philosophy of science, which has a long history, with canonical texts and authors, but, to our knowledge, this is the first time this domain is delineated and analysed.

Monday, June 7, 2021

Science Skepticism Across 24 Countries

Rutjens, B. T., et al. (2021).
Social Psychological and Personality Science. 
https://doi.org/10.1177/19485506211001329

Abstract

Efforts to understand and remedy the rejection of science are impeded by lack of insight into how it varies in degree and in kind around the world. The current work investigates science skepticism in 24 countries (N = 5,973). Results show that while some countries stand out as generally high or low in skepticism, predictors of science skepticism are relatively similar across countries. One notable effect was consistent across countries though stronger in Western, Educated, Industrialized, Rich, and Democratic (WEIRD) nations: General faith in science was predicted by spirituality, suggesting that it, more than religiosity, may be the ‘enemy’ of science acceptance. Climate change skepticism was mainly associated with political conservatism especially in North America. Other findings were observed across WEIRD and non-WEIRD nations: Vaccine skepticism was associated with spirituality and scientific literacy, genetic modification skepticism with scientific literacy, and evolution skepticism with religious orthodoxy. Levels of science skepticism are heterogeneous across countries, but predictors of science skepticism are heterogeneous across domains.

From the Discussion

Indeed, confirming previous results obtained in the Netherlands (Rutjens & van der Lee, 2020)—and providing strong support for Hypothesis 6—the current data speak to the crucial role of spirituality in fostering low faith in science, more generally, beyond its domain-specific effects on vaccine skepticism. This indicates that the negative impact of spirituality on faith in science represents a cross-national phenomenon that is more generalizable than might be expected based on the large variety (Muthukrishna et al., 2020) of countries included here. A possible explanation for the robustness of this effect may lie in the inherent irreconcilability of the intuitive epistemology of a spiritual belief system with science (Rutjens & van der Lee, 2020). (If so, then we might look at a potentially much larger problem that extends beyond spirituality and applies more generally to “post-truth” society, in which truth and perceptions of reality may be based on feelings rather than facts; Martel et al., 2020; Rutjens & Brandt, 2018.) However, these results do not mean that traditional religiosity as a predictor of science skepticism (McPhetres & Zuckermann, 2018; Rutjens, Heine, et al., 2018; Rutjens, Sutton, & van der Lee, 2018) has now become irrelevant: Not only did religious orthodoxy significantly contribute to low faith in science, it was also found to be a very consistent cross-national predictor of evolution skepticism (but not of other forms of science skepticism included in the study).

Thursday, June 3, 2021

Scientific panel loosens ’14-day rule’ limiting how long human embryos can be grown in the lab

Andrew Joseph
STATnews.com
Originally posted 26 May 2021

An influential scientific panel cracked open the door on Wednesday to growing human embryos in the lab for longer periods of time than currently allowed, a step that could enable the plumbing of developmental mysteries but that also raises thorny questions about whether research that can be pursued should be.

For decades, scientists around the world have followed the “14-day rule,” which stipulates that they should let human embryos develop in the lab for only up to two weeks after fertilization. The rule — which some countries (though not the United States) have codified into law — was meant to allow researchers to conduct inquiries into the early days of embryonic development, but not without limits. And for years, researchers didn’t push that boundary, not just for legal and ethical reasons, but for technical ones as well: They couldn’t keep the embryos growing in lab dishes that long.

More recently, however, scientists have refined their cell-culture techniques, finding ways to sustain embryos up to that deadline. Those advances — along with other leaps in the world of stem cell research, with scientists now transmogrifying cells into blobs that resemble early embryos or injecting human cells into animals — have complicated ethical debates about how far biomedical research should go in its quest for knowledge and potential treatments.

Now, in the latest updates to its guidelines, the International Society for Stem Cell Research has revised its view on studies that would take human embryos beyond 14 days, moving such experiments from the “absolutely not” category to a “maybe” — but only if lots of conditions are first met.

“We’ve relaxed the guidelines in that respect, we haven’t abandoned them,” developmental biologist Robin Lovell-Badge of the Francis Crick Institute, who chaired the ISSCR’s guidelines task force, said at a press briefing.

Wednesday, May 26, 2021

Before You Answer, Consider the Opposite Possibility—How Productive Disagreements Lead to Better Outcomes

Ian Leslie
The Atlantic
Originally published 25 Apr 21

Here is an excerpt:

This raises the question of how a wise inner crowd can be cultivated. Psychologists have investigated various methods. One, following Stroop, is to harness the power of forgetting. Reassuringly for those of us who are prone to forgetting, people with poor working memories have been shown to have a wiser inner crowd; their guesses are more independent of one another, so they end up with a more diverse set of estimates and a more accurate average. The same effect has been achieved by spacing the guesses out in time.

More sophisticated methods harness the mind’s ability to inhabit different perspectives and look at a problem from more than one angle. People generate more diverse estimates when prompted to base their second or third guess on alternative assumptions; one effective technique is simply asking people to “consider the opposite” before giving a new answer. A fascinating recent study in this vein harnesses the power of disagreement itself. A pair of Dutch psychologists, Philippe Van de Calseyde and Emir Efendić, asked people a series of questions with numerical answers, such as the percentage of the world’s airports located in the U.S. Then they asked participants to think of someone in their life with whom they often disagreed—that uncle with whom they always argue about politics—and to imagine what that person would guess.

The respondents came up with second estimates that were strikingly different from their first estimate, producing a much more accurate inner crowd. The same didn’t apply when they were asked to imagine how someone they usually agree with would answer the question, which suggests that the secret is to incorporate the perspectives of people who think differently from us. That the respondents hadn’t discussed that particular question with their disagreeable uncle did not matter. Just the act of thinking about someone with whom they argued a lot was enough to jog them out of habitual assumptions.
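The statistical logic behind this result can be sketched with a toy simulation (all numbers below are illustrative assumptions, not the study's data). The key idea: a habitual second guess shares the same personal misconception as the first, so averaging the two cancels little; a second guess imagined from a disagreer's perspective has a more independent error, so the average lands closer to the truth.

```python
import random
import statistics

random.seed(0)
TRUE = 30.0     # hypothetical true answer to an estimation question
N = 20_000      # number of simulated respondents per condition

def squared_error_of_average(diverse_second_guess: bool) -> float:
    """Simulate one respondent's two guesses and return the squared
    error of their averaged estimate."""
    bias = random.gauss(0, 6)                    # stable personal misconception
    g1 = TRUE + bias + random.gauss(0, 4)        # first guess: bias + noise
    if diverse_second_guess:
        # Imagining a frequent disagreer: second guess draws on a
        # different (independent) misconception.
        g2 = TRUE + random.gauss(0, 6) + random.gauss(0, 4)
    else:
        # Habitual second guess: anchored on the same personal bias.
        g2 = TRUE + bias + random.gauss(0, 4)
    avg = (g1 + g2) / 2
    return (avg - TRUE) ** 2

mse_habitual = statistics.mean(squared_error_of_average(False) for _ in range(N))
mse_diverse = statistics.mean(squared_error_of_average(True) for _ in range(N))
print(mse_habitual, mse_diverse)
```

Under these made-up parameters, the averaged estimate in the "diverse" condition has a markedly lower mean squared error, because the shared-bias term no longer survives the averaging. This is the same independence-of-errors principle that makes a real crowd wiser than any one member.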

Monday, May 10, 2021

Do Brain Implants Change Your Identity?

Christine Kenneally
The New Yorker
Originally posted 19 Apr 21

Here are two excerpts:

Today, at least two hundred thousand people worldwide, suffering from a wide range of conditions, live with a neural implant of some kind. In recent years, Mark Zuckerberg, Elon Musk, and Bryan Johnson, the founder of the payment-processing company Braintree, all announced neurotechnology projects for restoring or even enhancing human abilities. As we enter this new era of extra-human intelligence, it’s becoming apparent that many people develop an intense relationship with their device, often with profound effects on their sense of identity. These effects, though still little studied, are emerging as crucial to a treatment’s success.

The human brain is a small electrical device of super-galactic complexity. It contains an estimated hundred billion neurons, with many more links between them than there are stars in the Milky Way. Each neuron works by passing an electrical charge along its length, causing neurotransmitters to leap to the next neuron, which ignites in turn, usually in concert with many thousands of others. Somehow, human intelligence emerges from this constant, thrilling choreography. How it happens remains an almost total mystery, but it has become clear that neural technologies will be able to synch with the brain only if they learn the steps of this dance.

(cut)

For the great majority of patients, deep-brain stimulation was beneficial and life-changing, but there were occasional reports of strange behavioral reactions, such as hypomania and hypersexuality. Then, in 2006, a French team published a study about the unexpected consequences of otherwise successful implantations. Two years after a brain implant, sixty-five per cent of patients had a breakdown in their marriages or relationships, and sixty-four per cent wanted to leave their careers. Their intellect and their levels of anxiety and depression were the same as before, or, in the case of anxiety, had even improved, but they seemed to experience a fundamental estrangement from themselves. One felt like an electronic doll. Another said he felt like RoboCop, under remote control.

Gilbert describes himself as “an applied eliminativist.” He doesn’t believe in a soul, or a mind, at least as we normally think of them, and he strongly questions whether there is a thing you could call a self. He suspected that people whose marriages broke down had built their identities and their relationships around their pathologies. When those were removed, the relationships no longer worked. Gilbert began to interview patients. He used standardized questionnaires, a procedure that is methodologically vital for making dependable comparisons, but soon he came to feel that something about this unprecedented human experience was lost when individual stories were left out. The effects he was studying were inextricable from his subjects’ identities, even though those identities changed.

Many people reported that the person they were after treatment was entirely different from the one they’d been when they had only dreamed of relief from their symptoms. Some experienced an uncharacteristic buoyancy and confidence. One woman felt fifteen years younger and tried to lift a pool table, rupturing a disk in her back. One man noticed that his newfound confidence was making life hard for his wife; he was too “full-on.” Another woman became impulsive, walking ten kilometres to a psychologist’s appointment nine days after her surgery. She was unrecognizable to her family. They told her that they grieved for the old her.

Wednesday, May 5, 2021

Top German psychologist found to have fabricated data: University investigation finds anxiety expert pressured whistleblowers

Hristio Boytchev
Science, 09 Apr 2021:
Vol. 372, Issue 6538, pp. 117-118
DOI: 10.1126/science.372.6538.117

Here is an excerpt:

Wittchen was one of the top epidemiologists of psychiatry, and TU Dresden “has benefited greatly from him,” says Jürgen Margraf, a psychologist at Ruhr University, Bochum, who has collaborated with Wittchen. “If the commission’s findings turn out to be true, they are very disturbing for the entire field, and that would also have an impact on TU Dresden.” Thomas Pollmächer, director of the mental health center at Ingolstadt Hospital, says the allegations are “startling.” He worries about other possible irregularities in Wittchen’s extensive publication record. “Some time bombs may be ticking,” he says.

The study in question was a €2.4 million survey of staffing levels and quality at nearly 100 German psychiatric facilities. Working for TU Dresden’s Association for Knowledge and Technology Transfer (GWT), Wittchen was the principal investigator of the effort, which aimed to examine workloads at the clinics and inform government regulations.

But in February 2019, German media reported allegations, stemming from whistle-blowers close to the survey project, that study data had been fabricated. The university launched a formal investigation, led by law professor Hans-Heinrich Trute.

After 2 years of work, the commission, in its final report, has found that only 73 of 93 psychiatric clinics were actually surveyed. For the others, the report says, Wittchen instructed researchers to copy data from one clinic and apply them to another.

“The violations were intentional, not negligent,” the report says. “Wittchen wanted to appear more successful than he was.”

Wittchen told Science he would not answer detailed questions “because they are the issue of legal proceedings.” But he denies any wrongdoing and says the study in question was “scientifically correct.”

The investigation report also shows how Wittchen sought to avoid repercussions. 

In April 2019, he sent an email to Hans Müller-Steinhagen, president of TU Dresden at the time, warning him to “stay out of the project” and stop the investigation, because otherwise there would be a “national political earthquake.” 

Sunday, May 2, 2021

The Quest to Tell Science from Pseudoscience

Michael D. Gordin
Boston Review
Originally published 23 Mar 21

Here is an excerpt:

Two incidents sparked a reevaluation. The first was the Soviet Union’s launch of the first artificial satellite, Sputnik, on October 4, 1957. The success triggered an extensive discussion about whether the United States had fallen behind in science education, and reform proposals were mooted for many different areas. Then the centenary of the publication of Darwin’s On the Origin of Species (1859) prompted biologists to decry that “one hundred years without Darwinism are enough!” The Biological Sciences Curriculum Study, an educational center funded by a grant from the National Science Foundation, recommended an overhaul of secondary school education in the life sciences, with Darwinism (and human evolution) given a central place.

The cease-fire between the evolutionists and Christian fundamentalists had been broken. In the 1960s religious groups countered with a series of laws insisting on “equal time”: if Darwinism (or “evolution science”) was required, then it should be balanced with an equivalent theory, “creation science.” Cases from both Arkansas and Louisiana made it to the appellate courts in the early 1980s. The first, McLean v. Arkansas Board of Education, saw a host of expert witnesses spar over whether Darwinism was science, whether creation science also met the definition of science, and the limits of the Constitution’s establishment clause. A crucial witness for the evolutionists was Michael Ruse, a philosopher of science at the University of Guelph in Ontario. Ruse testified to several different demarcation criteria and contended that accounts of the origins of humanity based on Genesis could not satisfy them. One of the criteria he floated was Popper’s.

Judge William Overton, in his final decision in January 1982, cited Ruse’s testimony when he argued that falsifiability was a standard for determining whether a doctrine was science—and that scientific creationism did not meet it. (Ruse walked his testimony back a decade later.) Overton’s appellate court decision was expanded by the U.S. Supreme Court in Edwards v. Aguillard (1987), the Louisiana case; the result was that Popper’s falsifiability was incorporated as a demarcation criterion in a slew of high school biology texts. No matter that the standard was recognized as bad philosophy; as a matter of legal doctrine it was enshrined. (In his 2005 appellate court decision in Kitzmiller v. Dover Area School District, Judge John E. Jones III modified the legal demarcation standards by eschewing Popper and promoting several less sharp but more apposite criteria while deliberating over the teaching of a doctrine known as “intelligent design,” a successor of creationism crafted to evade the precedent of Edwards.)

Sunday, April 18, 2021

The Antiscience Movement Is Escalating, Going Global and Killing Thousands

Peter J. Hotez
Scientific American
Originally posted 29 MAR 21

Antiscience has emerged as a dominant and highly lethal force, and one that threatens global security, as much as do terrorism and nuclear proliferation. We must mount a counteroffensive and build new infrastructure to combat antiscience, just as we have for these other more widely recognized and established threats.

Antiscience is the rejection of mainstream scientific views and methods or their replacement with unproven or deliberately misleading theories, often for nefarious and political gains. It targets prominent scientists and attempts to discredit them. The destructive potential of antiscience was fully realized in the U.S.S.R. under Joseph Stalin. Millions of Russian peasants died from starvation and famine during the 1930s and 1940s because Stalin embraced the pseudoscientific views of Trofim Lysenko that promoted catastrophic wheat and other harvest failures. Soviet scientists who did not share Lysenko’s “vernalization” theories lost their positions or, like the plant geneticist, Nikolai Vavilov, starved to death in a gulag.

Now antiscience is causing mass deaths once again in this COVID-19 pandemic. Beginning in the spring of 2020, the Trump White House launched a coordinated disinformation campaign that dismissed the severity of the epidemic in the United States, attributed COVID deaths to other causes, claimed hospital admissions were due to a catch-up in elective surgeries, and asserted that ultimately the epidemic would spontaneously evaporate. It also promoted hydroxychloroquine as a spectacular cure, while downplaying the importance of masks. Other authoritarian or populist regimes in Brazil, Mexico, Nicaragua, Philippines and Tanzania adopted some or all of these elements.

As both a vaccine scientist and a parent of an adult daughter with autism and intellectual disabilities, I have years of experience going up against the antivaccine lobby, which claims vaccines cause autism or other chronic conditions. This prepared me to quickly recognize the outrageous claims made by members of the Trump White House staff, and to connect the dots to label them as antiscience disinformation. Despite my best efforts to sound the alarm and call it out, the antiscience disinformation created mass havoc in the red states. 

Monday, March 22, 2021

The Mistrust of Science

Atul Gawande
The New Yorker
Originally posted 01 June 2016

Here is an excerpt:

The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).

Vaccine fears, for example, have persisted despite decades of research showing them to be unfounded. Some twenty-five years ago, a statistical analysis suggested a possible association between autism and thimerosal, a preservative used in vaccines to prevent bacterial contamination. The analysis turned out to be flawed, but fears took hold. Scientists then carried out hundreds of studies, and found no link. Still, fears persisted. Countries removed the preservative but experienced no reduction in autism—yet fears grew. A British study claimed a connection between the onset of autism in eight children and the timing of their vaccinations for measles, mumps, and rubella. That paper was retracted due to findings of fraud: the lead author had falsified and misrepresented the data on the children. Repeated efforts to confirm the findings were unsuccessful. Nonetheless, vaccine rates plunged, leading to outbreaks of measles and mumps that, last year, sickened tens of thousands of children across the U.S., Canada, and Europe, and resulted in deaths.

People are prone to resist scientific claims when they clash with intuitive beliefs. They don’t see measles or mumps around anymore. They do see children with autism. And they see a mom who says, “My child was perfectly fine until he got a vaccine and became autistic.”

Now, you can tell them that correlation is not causation. You can say that children get a vaccine every two to three months for the first couple years of their life, so the onset of any illness is bound to follow vaccination for many kids. You can say that the science shows no connection. But once an idea has got embedded and become widespread, it becomes very difficult to dig it out of people’s brains—especially when they do not trust scientific authorities. And we are experiencing a significant decline in trust in scientific authorities.
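The base-rate point in that passage can be made concrete with a toy calculation (the visit schedule, onset window, and 30-day cutoff below are all illustrative assumptions, not real epidemiological figures): if symptom onset were timed completely at random, a sizeable fraction of cases would still fall "shortly after" some vaccination purely by coincidence.

```python
import random

random.seed(1)

# Hypothetical well-child vaccine visits, in months of age.
visit_months = [2, 4, 6, 12, 15, 18, 24]
WINDOW_DAYS = 30        # "shortly after" = within 30 days of a visit
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    # Assume symptom onset falls uniformly anywhere between 12 and 36 months,
    # entirely unrelated to the vaccination schedule.
    onset_day = random.uniform(12 * 30, 36 * 30)
    if any(0 <= onset_day - v * 30 <= WINDOW_DAYS for v in visit_months):
        hits += 1

coincidence_rate = hits / TRIALS
print(coincidence_rate)
```

Under these made-up numbers, roughly one in six randomly timed onsets lands within a month of a scheduled visit, with no causal link at all, which is exactly why "it happened after the shot" is such a seductive but unreliable inference.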


5 years old, and still relevant.

Friday, February 12, 2021

Measuring Implicit Intergroup Biases.

Lai, C. K., & Wilson, M. 
(2020, December 9).

Abstract

Implicit intergroup biases are automatically activated prejudices and stereotypes that may influence judgments of others on the basis of group membership. We review evidence on the measurement of implicit intergroup biases, finding: implicit intergroup biases reflect the personal and the cultural, implicit measures vary in reliability and validity, and implicit measures vary greatly in their prediction of explicit and behavioral outcomes due to theoretical and methodological moderators. We then discuss three challenges to the application of implicit intergroup biases to real‐world problems: (1) a lack of research on social groups of scientific and public interest, (2) developing implicit measures with diagnostic capabilities, and (3) resolving ongoing ambiguities in the relationship between implicit bias and behavior. Making progress on these issues will clarify the role of implicit intergroup biases in perpetuating inequality.

(cut)

Predictive Validity

Implicit intergroup biases are predictive of explicit biases, behavioral outcomes, and regional differences in inequality.

Relationship to explicit prejudice & stereotypes. 

The relationship between implicit and explicit measures of intergroup bias is consistently positive, but the size of the relationship depends on the topic. In a large-scale study of 57 attitudes (Nosek, 2005), the relationship between IAT scores and explicit intergroup attitudes was as high as r = .59 (Democrats vs. Republicans) and as low as r = .33 (European Americans vs. African Americans) or r = .10 (Thin people vs. Fat people). Generally, implicit-explicit relations are lower in studies on intergroup topics than in other topics (Cameron et al., 2012; Greenwald et al., 2009). The strength of the relationship between implicit and explicit intergroup biases is moderated by factors which have been documented in one large-scale study and several meta-analyses (Cameron et al., 2012; Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005; Oswald et al., 2013). Much of this work has focused on the IAT, finding that implicit-explicit relations are stronger when the attitude is more strongly elaborated, perceived as distinct from other people, has a bipolar structure (i.e., liking for one group implies disliking of the other), and the explicit measure assesses a relative preference rather than an absolute preference (Greenwald et al., 2009; Hofmann et al., 2005; Nosek, 2005).

---------------------
Note: If you are a healthcare professional, you need to be aware of these biases.

Saturday, January 30, 2021

Scientific communication in a post-truth society

S. Iyengar & D. S. Massey
PNAS, Apr 2019, 116(16), 7656-7661

Abstract

Within the scientific community, much attention has focused on improving communications between scientists, policy makers, and the public. To date, efforts have centered on improving the content, accessibility, and delivery of scientific communications. Here we argue that in the current political and media environment faulty communication is no longer the core of the problem. Distrust in the scientific enterprise and misperceptions of scientific knowledge increasingly stem less from problems of communication and more from the widespread dissemination of misleading and biased information. We describe the profound structural shifts in the media environment that have occurred in recent decades and their connection to public policy decisions and technological changes. We explain how these shifts have enabled unscrupulous actors with ulterior motives increasingly to circulate fake news, misinformation, and disinformation with the help of trolls, bots, and respondent-driven algorithms. We document the high degree of partisan animosity, implicit ideological bias, political polarization, and politically motivated reasoning that now prevail in the public sphere and offer an actual example of how clearly stated scientific conclusions can be systematically perverted in the media through an internet-based campaign of disinformation and misinformation. We suggest that, in addition to attending to the clarity of their communications, scientists must also develop online strategies to counteract campaigns of misinformation and disinformation that will inevitably follow the release of findings threatening to partisans on either end of the political spectrum.

(cut)

At this point, probably the best that can be done is for scientists and their scientific associations to anticipate campaigns of misinformation and disinformation and to proactively develop online strategies and internet platforms to counteract them when they occur. For example, the National Academies of Sciences, Engineering, and Medicine could form a consortium of professional scientific organizations to fund the creation of a media and internet operation that monitors networks, channels, and web platforms known to spread false and misleading scientific information so as to be able to respond quickly with a countervailing campaign of rebuttal based on accurate information through Facebook, Twitter, and other forms of social media.

Saturday, January 16, 2021

Why Facts Are Not Enough: Understanding and Managing the Motivated Rejection of Science

Hornsey MJ. 
Current Directions in Psychological Science
2020;29(6):583-591. 

Abstract

Efforts to change the attitudes of creationists, antivaccination advocates, and climate skeptics by simply providing evidence have had limited success. Motivated reasoning helps make sense of this communication challenge: If people are motivated to hold a scientifically unorthodox belief, they selectively interpret evidence to reinforce their preferred position. In the current article, I summarize research on six psychological roots from which science-skeptical attitudes grow: (a) ideologies, (b) vested interests, (c) conspiracist worldviews, (d) fears and phobias, (e) personal-identity expression, and (f) social-identity needs. The case is made that effective science communication relies on understanding and attending to these underlying motivations.

(cut)

Conclusion

This article outlines six reasons people are motivated to hold views that are inconsistent with scientific consensus. This perspective helps explain why education and explication of data sometimes have a limited impact on science skeptics, but I am not arguing that education and facts are pointless. Quite the opposite: The provision of clear, objective information is the first and best line of defense against misinformation, mythmaking, and ignorance. However, for polarizing scientific issues—for example, climate change, vaccination, evolution, and in-vitro meat—it is clear that facts alone will not do the job. Successful communication around these issues will require sensitive understandings of the psychological motivations people have for rejecting science and the flexibility to devise communication frames that align with or circumvent these motivations.

Friday, November 27, 2020

Where Are The Self-Correcting Mechanisms In Science?

Vazire, S., & Holcombe, A. O. 
(2020, August 13).

Abstract

It is often said that science is self-correcting, but the replication crisis suggests that, at least in some fields, self-correction mechanisms have fallen short of what we might hope for. How can we know whether a particular scientific field has effective self-correction mechanisms, that is, whether its findings are credible? The usual processes that supposedly provide mechanisms for scientific self-correction – mainly peer review and disciplinary committees – have been inadequate. We argue for more verifiable indicators of a field’s commitment to self-correction. These include transparency, which is already a target of many reform efforts, and critical appraisal, which has received less attention. Only by obtaining Measurements of Observable Self-Correction (MOSCs) can we begin to evaluate the claim that “science is self-correcting.” We expect the validity of this claim to vary across fields and subfields, and suggest that some fields, such as psychology and biomedicine, fall far short of an appropriate level of transparency and, especially, critical appraisal. Fields without robust, verifiable mechanisms for transparency and critical appraisal cannot reasonably be said to be self-correcting, and thus do not warrant the credibility often imputed to science as a whole.

Wednesday, October 14, 2020

‘Disorders of consciousness’: Understanding ‘self’ might be the greatest scientific challenge of our time

Joel Frohlich
Genetic Literacy Project
Originally published 18 Sept 20

Here are two excerpts:

Just as life stumped biologists 100 years ago, consciousness stumps neuroscientists today. It’s far from obvious why some brain regions are essential for consciousness and others are not. So Tononi’s approach instead considers the essential features of a conscious experience. When we have an experience, what defines it? First, each conscious experience is specific. Your experience of the colour blue is what it is, in part, because blue is not yellow. If you had never seen any colour other than blue, you would most likely have no concept or experience of colour. Likewise, if all food tasted exactly the same, taste experiences would have no meaning, and vanish. This requirement that each conscious experience must be specific is known as differentiation.

But, at the same time, consciousness is integrated. This means that, although objects in consciousness have different qualities, we never experience each quality separately. When you see a basketball whiz towards you, its colour, shape and motion are bound together into a coherent whole. During a game, you’re never aware of the ball’s orange colour independently of its round shape or its fast motion. By the same token, you don’t have separate experiences of your right and your left visual fields – they are interdependent as a whole visual scene.

Tononi identified differentiation and integration as two essential features of consciousness. And so, just as the essential features of life might lead a scientist to infer the existence of DNA, the essential features of consciousness led Tononi to infer the physical properties of a conscious system.

(cut)

Consciousness might be the last frontier of science. If IIT continues to guide us in the right direction, we’ll develop better methods of diagnosing disorders of consciousness. One day, we might even be able to turn to artificial intelligences – potential minds unlike our own – and assess whether or not they are conscious. This isn’t science fiction: many serious thinkers – including the late physicist Stephen Hawking, the technology entrepreneur Elon Musk, the computer scientist Stuart Russell at the University of California, Berkeley and the philosopher Nick Bostrom at the Future of Humanity Institute in Oxford – take recent advances in AI seriously, and are deeply concerned about the existential risk that could be posed by human- or superhuman-level AI in the future. When is unplugging an AI ethical? Whoever pulls the plug on the super AI of coming decades will want to know, however urgent their actions, whether there truly is an artificial mind slipping into darkness or just a complicated digital computer making sounds that mimic fear.

Tuesday, September 22, 2020

How to be an ethical scientist

W. A. Cunningham, J. J. Van Bavel,
& L. H. Somerville
Science Magazine
Originally posted 5 August 20

True discovery takes time, has many stops and starts, and is rarely neat and tidy. For example, news that the Higgs boson was finally observed in 2012 came 48 years after its original proposal by Peter Higgs. The slow pace of science helps ensure that research is done correctly, but it can come into conflict with the incentive structure of academic progress, as publications—the key marker of productivity in many disciplines—depend on research findings. Even Higgs recognized this problem with the modern academic system: “Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.”

It’s easy to forget about the “long view” when there is constant pressure to produce. So, in this column, we’re going to focus on the type of long-term thinking that advances science. For example, are you going to cut corners to get ahead, or take a slow, methodical approach? What will you do if your experiment doesn’t turn out as expected? Without reflecting on these deeper issues, we can get sucked into the daily goals necessary for success while failing to see the long-term implications of our actions.

Thinking carefully about these issues will not only impact your own career outcomes, but it can also impact others. Your own decisions and actions affect those around you, including your labmates, your collaborators, and your academic advisers. Our goal is to help you avoid pitfalls and find an approach that will allow you to succeed without impairing the broader goals of science.

Be open to being wrong

Science often advances through accidental (but replicable) findings. The logic is simple: If studies always came out exactly as you anticipated, then nothing new would ever be learned. Our previous theories of the world would be just as good as they ever were. This is why scientific discovery is often most profound when you stumble on something entirely new. Isaac Asimov put it best when he said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny ... .’”


Monday, September 21, 2020

The ethics of pausing a vaccine trial in the midst of a pandemic

Patrick Skerrett
statnews.com
Originally posted 11 Sept 20

Here is an excerpt:

Is the process for clinical trials of vaccines different from the process for drug or device trials?

Mostly no. The principles, design, and basic structure of a vaccine trial are more or less the same as for a trial for a new medication. The research ethics considerations are also similar.

The big difference between the two is that the participants in a preventive vaccine trial are, by and large, healthy people — or at least they are people who don’t have the illness for which the agent being tested might be effective. That significantly changes the risk-benefit calculus for the participants.

Of course, some people in a Covid-19 vaccine trial could personally benefit if they live in communities with a lot of Covid-19. But even then, they might never get it. That’s very different from a trial in which individuals have a condition, say melanoma or malignant hypertension, and they are taking part in a trial of a therapy that could improve or even cure their condition.

Does that affect when a company might stop a trial?

In every clinical trial, the data and safety monitoring board takes routine and prescheduled looks at the accumulated data. They are checking mainly for two things: signals of harm and evidence of effectiveness.

These boards will recommend stopping a trial if they see a signal of concern or harm. They may do the same thing if they see solid evidence that people in the active arm of the trial are doing far better than those in the control arm.

In both cases, the action is taken on behalf of those participating in the trial. But it is also taken to advance the interests of people who would get this intervention if it was to be made publicly available.

The current situation with AstraZeneca involves a signal of concern. The company’s first obligation is to the participants in the trial. It cannot ethically proceed with the trial if there is reason for concern, even based on the experience of one participant.

Monday, September 14, 2020

Trump lied about science

H. Holden Thorp
Science
Originally published 11 Sept 20

When President Donald Trump began talking to the public about coronavirus disease 2019 (COVID-19) in February and March, scientists were stunned at his seeming lack of understanding of the threat. We assumed that he either refused to listen to the White House briefings that must have been occurring or that he was being deliberately sheltered from information to create plausible deniability for federal inaction. Now, because famed Washington Post journalist Bob Woodward recorded him, we can hear Trump’s own voice saying that he understood precisely that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was deadly and spread through the air. As he was playing down the virus to the public, Trump was not confused or inadequately briefed: He flat-out lied, repeatedly, about science to the American people. These lies demoralized the scientific community and cost countless lives in the United States.

Over the years, this page has commented on the scientific foibles of U.S. presidents. Inadequate action on climate change and environmental degradation during both Republican and Democratic administrations has been criticized frequently. Editorials have bemoaned endorsements by presidents on teaching intelligent design, creationism, and other antiscience in public schools. These matters are still important. But now, a U.S. president has deliberately lied about science in a way that was imminently dangerous to human health and directly led to widespread deaths of Americans.

This may be the most shameful moment in the history of U.S. science policy.

In an interview with Woodward on 7 February 2020, Trump said he knew that COVID-19 was more lethal than the flu and that it spread through the air. “This is deadly stuff,” he said. But on 9 March, he tweeted that the “common flu” was worse than COVID-19, while economic advisor Larry Kudlow and presidential counselor Kellyanne Conway assured the public that the virus was contained. On 19 March, Trump told Woodward that he did not want to level with the American people about the danger of the virus. “I wanted to always play it down,” he said, “I still like playing it down.” Playing it down meant lying about the fact that he knew the country was in grave danger.


Tuesday, September 8, 2020

Pharma drew a line in the sand over Covid-19 vaccine readiness, because someone had to

Ed Silverman
statnews.com
Originally posted 7 Sept 20

Here is an excerpt:

The vaccine makers that are signing this pledge — Pfizer, Merck, AstraZeneca, Sanofi, GlaxoSmithKline, BioNTech, Johnson & Johnson, Moderna, and Novavax — are rushing to complete clinical trials. But only Pfizer has indicated it may have late-stage results in October, and that’s not a given.

Yet any move by the FDA to green-light a Covid-19 vaccine without late-stage results will be interpreted as an effort to boost Trump — and rightly so.

Consider Trump’s erratic and selfish remarks. He recently accused the FDA of slowing the vaccine approval process and being part of a “deep state.” No wonder there is concern he may lean on Hahn to authorize emergency use prematurely. For his part, Hahn has insisted he won’t buckle to political pressure, but he also said emergency use may be authorized based on preliminary data.

“It’s unprecedented in my experience that industry would do something like this,” said Ira Loss of Washington Analysis, who tracks pharmaceutical regulatory and legislative matters for investors. “But we’ve experienced unprecedented events since the beginning of Covid-19, starting with the FDA, where the commissioner has proven to be malleable, to be kind, at the foot of the president.”

Remember, we’ve seen this movie before.

Amid criticism of his handling of the pandemic, Trump touted hydroxychloroquine, a decades-old malaria tablet, as a salve and the FDA authorized emergency use. Two weeks ago, he touted convalescent blood plasma as a medical breakthrough, but evidence of its effectiveness against the coronavirus is inconclusive. And Hahn initially overstated study results.

Most Americans seem to be catching on. A STAT-Harris poll released last week found that 78% of the public believes the vaccine approval process is driven by politics, not science. This goes for a majority of Democrats and Republicans.


Wednesday, September 2, 2020

Poll: Most Americans believe the Covid-19 vaccine approval process is driven by politics, not science

Ed Silverman
statnews.com
Originally published 31 August 20

Seventy-eight percent of Americans worry the Covid-19 vaccine approval process is being driven more by politics than science, according to a new survey from STAT and the Harris Poll, a reflection of concern that the Trump administration may give the green light to a vaccine prematurely.

The response was largely bipartisan, with 72% of Republicans and 82% of Democrats expressing such worries, according to the poll, which was conducted last week and surveyed 2,067 American adults.

The sentiment underscores rising speculation that President Trump may pressure the Food and Drug Administration to approve or authorize emergency use of at least one Covid-19 vaccine prior to the Nov. 3 election, before testing has been fully completed.

Concerns intensified in recent days after Trump suggested in a tweet that the FDA is part of a “deep state” conspiracy to sabotage his reelection bid. In a speech Thursday night at the Republican National Convention, he pledged that the administration “will produce a vaccine before the end of the year, or maybe even sooner.”


Note the top-line finding: 80% of Americans surveyed worry that approving a vaccine too quickly would compromise its safety. The implication is that fewer people would choose to get the vaccine if safety is in doubt.