Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Epistemology.

Wednesday, October 27, 2021

Reflective Reasoning & Philosophy

Nick Byrd
Philosophy Compass
First published: 29 September 2021

Abstract

Philosophy is a reflective activity. So perhaps it is unsurprising that many philosophers have claimed that reflection plays an important role in shaping and even improving our philosophical thinking. This hypothesis seems plausible given that training in philosophy has correlated with better performance on tests of reflection and reflective test performance has correlated with demonstrably better judgments in a variety of domains. This article reviews the hypothesized roles of reflection in philosophical thinking as well as the empirical evidence for these roles. This reveals that although there are reliable links between reflection and philosophical judgment among both laypeople and philosophers, the role of reflection in philosophical thinking may nonetheless depend in part on other factors, some of which have yet to be determined. So progress in research on reflection in philosophy may require further innovation in experimental methods and psychometric validation of philosophical measures.

From the Conclusion

Reflective reasoning is central to both philosophy and the cognitive science thereof. The theoretical and empirical research about reflection and its relation to philosophical thinking is voluminous. The existing findings provide preliminary evidence that reflective reasoning may be related to tendencies for certain philosophical judgments and beliefs over others. However, there are some signs that there is more to the story about reflection’s role in philosophical thinking than our current evidence can reveal. Scholars will need to continue developing new hypotheses, methods, and interpretations to reveal these hitherto latent details.

The recommendations in this article are by no means exhaustive. For instance, in addition to better experimental manipulations and measures of reflection (Byrd, 2021b), philosophers and cognitive scientists will also need to validate their measures of philosophical thinking to ensure that subtle differences in wording of thought experiments do not influence people’s judgments in unexpected ways (Cullen, 2010). After all, philosophical judgments can vary significantly depending on slight differences in wording even when reflection is not manipulated (e.g., Nahmias, Coates, & Kvaran, 2007). Scholars may also need to develop ways to empirically dissociate previously conflated philosophical judgments (Conway & Gawronski, 2013) in order to prevent and clarify misleading results (Byrd & Conway, 2019; Conway, Goldstein-Greenwood, Polacek, & Greene, 2018).

Monday, March 1, 2021

Morality justifies motivated reasoning in the folk ethics of belief

Corey Cusimano & Tania Lombrozo
Cognition
19 January 2021

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one's friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral considerations affect how people evaluate others' beliefs. First, the moral value of a belief affects the evidential threshold required to believe, such that morally beneficial beliefs demand less evidence than morally risky beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and on that basis, sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the General Discussion

5.2. Implications for motivated reasoning

Psychologists have long speculated that commonplace deviations from rational judgments and decisions could reflect commitments to different normative standards for decision making rather than merely cognitive limitations or unintentional errors (Cohen, 1981; Koehler, 1996; Tribe, 1971). This speculation has been largely confirmed in the domain of decision making, where work has documented that people will refuse to make certain decisions because of a normative commitment to not rely on certain kinds of evidence (Nesson, 1985; Wells, 1992), or because of a normative commitment to prioritize deontological concerns over utility-maximizing concerns (Baron & Spranca, 1997; Tetlock et al., 2000). And yet, there has been comparatively little investigation in the domain of belief formation. While some work has suggested that people evaluate beliefs in ways that favor non-objective, or non-evidential criteria (e.g., Armor et al., 2008; Cao et al., 2019; Metz, Weisberg, & Weisberg, 2018; Tenney et al., 2015), this work has failed to demonstrate that people prescribe beliefs that violate what objective, evidence-based reasoning would warrant. To our knowledge, our results are the first to demonstrate that people will knowingly endorse non-evidential norms for belief, and specifically, prescribe motivated reasoning to others.

(cut)

Our findings suggest more proximate explanations for these biases: That lay people see these beliefs as morally beneficial and treat these moral benefits as legitimate grounds for motivated reasoning. Thus, overconfidence or over-optimism may persist in communities because people hold others to lower standards of evidence for adopting morally-beneficial optimistic beliefs than they do for pessimistic beliefs, or otherwise treat these benefits as legitimate reasons to ignore the evidence that one has.

Sunday, February 7, 2021

How people decide what they want to know

Sharot, T., Sunstein, C.R. 
Nat Hum Behav 4, 14–19 (2020). 

Abstract

Immense amounts of information are now accessible to people, including information that bears on their past, present and future. An important research challenge is to determine how people decide to seek or avoid information. Here we propose a framework of information-seeking that aims to integrate the diverse motives that drive information-seeking and its avoidance. Our framework rests on the idea that information can alter people’s action, affect and cognition in both positive and negative ways. The suggestion is that people assess these influences and integrate them into a calculation of the value of information that leads to information-seeking or avoidance. The theory offers a framework for characterizing and quantifying individual differences in information-seeking, which we hypothesize may also be diagnostic of mental health. We consider biases that can lead to both insufficient and excessive information-seeking. We also discuss how the framework can help government agencies to assess the welfare effects of mandatory information disclosure.

Conclusion

It is increasingly possible for people to obtain information that bears on their future prospects, in terms of health, finance and even romance. It is also increasingly possible for them to obtain information about the past, the present and the future, whether or not that information bears on their personal lives. In principle, people’s decisions about whether to seek or avoid information should depend on some integration of instrumental value, hedonic value and cognitive value. But various biases can lead to both insufficient and excessive information-seeking. Individual differences in information-seeking may reflect different levels of susceptibility to those biases, as well as varying emphasis on instrumental, hedonic and cognitive utility.  Such differences may also be diagnostic of mental health.

Whether positive or negative, the value of information bears directly on significant decisions of government agencies, which are often charged with calculating the welfare effects of mandatory disclosure and which have long struggled with that task. Our hope is that the integrative framework of information-seeking motives offered here will facilitate these goals and promote future research in this important domain.
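Editor Note: The "calculation of the value of information" described above can be illustrated with a small toy sketch. The function, weights, and numbers below are hypothetical and are not taken from Sharot and Sunstein's framework; they only show how instrumental, hedonic, and cognitive value might be integrated into a single seek-or-avoid decision.

```python
# Toy illustration of an information-seeking decision as a weighted
# integration of three motives. Weights and numbers are hypothetical,
# not taken from the authors' framework.

def value_of_information(instrumental, hedonic, cognitive,
                         w_action=1.0, w_affect=1.0, w_cognition=1.0):
    """Combine the three hypothesized influences into one value.
    A positive value predicts seeking; a negative value predicts avoidance."""
    return (w_action * instrumental
            + w_affect * hedonic
            + w_cognition * cognitive)

# Example: a test result that is useful for planning (+0.6), emotionally
# aversive (-0.8), but satisfies curiosity (+0.3).
v = value_of_information(instrumental=0.6, hedonic=-0.8, cognitive=0.3)
print(f"value = {v:+.2f} -> {'seek' if v > 0 else 'avoid'}")  # value = +0.10 -> seek
```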

Saturday, August 8, 2020

How behavioural sciences can promote truth, autonomy and democratic discourse online

Lorenz-Spreen, P., Lewandowsky,
S., Sunstein, C.R. et al.
Nat Hum Behav (2020).
https://doi.org/10.1038/s41562-020-0889-7

Abstract

Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions—nudging and boosting—that enlist these cues to redesign online environments for informed and autonomous choice.

Here is an excerpt:

Another competence that could be boosted to help users deal more expertly with information they encounter online is the ability to make inferences about the reliability of information based on the social context from which it originates. The structure and details of the entire cascade of individuals who have previously shared an article on social media have been shown to serve as proxies for epistemic quality. More specifically, the sharing cascade contains metrics such as the depth and breadth of dissemination by others, with deep and narrow cascades indicating extreme or niche topics and breadth indicating widely discussed issues. A boosting intervention could provide this information (Fig. 3a) to display the full history of a post, including the original source, the friends and public users who disseminated it, and the timing of the process (showing, for example, if the information is old news that has been repeatedly and artificially amplified). Cascade statistics may take some practice to read and interpret, and one may need to experience a number of cascades to learn to recognize informative patterns.
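Editor Note: The depth and breadth metrics mentioned in the excerpt are easy to compute once a post's resharing history is represented as a tree. Below is a minimal illustrative sketch; the cascade data and function name are invented for this example and do not come from the paper.

```python
# Minimal sketch: depth and breadth of a sharing cascade represented as a
# tree of reshares. The example cascades below are invented for illustration.

from collections import defaultdict, deque

def cascade_metrics(edges):
    """edges: list of (parent, child) reshare pairs rooted at 'origin'.
    Returns (depth, max_breadth): longest reshare chain and widest level."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth, max_breadth = 0, 1
    level = deque(["origin"])
    while level:
        max_breadth = max(max_breadth, len(level))
        next_level = deque(c for node in level for c in children[node])
        if next_level:
            depth += 1
        level = next_level
    return depth, max_breadth

# A long, narrow cascade (suggestive of an extreme or niche topic)...
narrow = [("origin", "a"), ("a", "b"), ("b", "c"), ("c", "d")]
# ...versus a shallow, wide one (suggestive of a broadly discussed issue).
wide = [("origin", x) for x in "abcdefgh"]

print(cascade_metrics(narrow))  # (4, 1)
print(cascade_metrics(wide))    # (1, 8)
```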

Friday, August 7, 2020

Your Ancestors Knew Death in Ways You Never Will

Donald McNeil, Jr.
The New York Times
Originally posted 15 July 20

Here is the end:

As a result, New Yorkers took certain steps — sometimes very expensive and contentious, but all based on science: They dug sewers to pipe filth into the Hudson and East Rivers instead of letting it pool in the streets. In 1842, they built the Croton Aqueduct to carry fresh water to Manhattan. In 1910, they chlorinated its water to kill more germs. In 1912, they began requiring dairies to heat their milk because a Frenchman named Louis Pasteur had shown that doing so spared children from tuberculosis. Over time, they made smallpox vaccination mandatory.

Libertarians battled almost every step. Some fought sewers and water mains being dug through their properties, arguing that they owned perfectly good wells and cesspools. Some refused smallpox vaccines until the Supreme Court put an end to that in 1905, in Jacobson v. Massachusetts.

In the Spanish flu epidemic of 1918, many New Yorkers donned masks but 4,000 San Franciscans formed an Anti-Mask League. (The city’s mayor, James Rolph, was fined $50 for flouting his own health department’s mask order.) Slowly, science prevailed, and death rates went down.

Today, Americans are facing the same choice our ancestors did: We can listen to scientists and spend money to save lives, or we can watch our neighbors die.

“The people who say ‘Let her rip, let’s go for herd immunity’ — that’s just public-health nihilism,” said Dr. Joia S. Mukherjee, the chief medical officer of Partners in Health, a medical charity fighting the virus. “How many deaths do we have to accept to get there?”

A vaccine may be close at hand, and so may treatments like monoclonal antibodies that will cut our losses.

Till then, we need not accept death as our overlord — we can simply hang on and outlast him.

The info is here.

Friday, June 26, 2020

Debunking the Secular Case for Religion

Gurwinder Bhogal
rabbitholemag.com
Originally published 28 April 20

Here is an excerpt:

Could we, perhaps, identify the religious traditions that protect civilizations by looking at our history and finding the practices common to all long-lived civilizations? After all, Taleb has claimed that religion is “Lindy;” that is to say it has endured for a long time and therefore must be robust. But the main reason religious teachings have been robust is not that they’ve stood the test of time, but that those who tried to change them tended to be killed. Taleb also doesn’t explain what happens when religious practices differ or clash. Should people follow the precepts of the hardline Wahhabi brand of Islam, or those of a more moderate one? If the Abrahamic religions agree that usury leads to recessions, which of them do we consult on eating pork? Do we follow the Old Testament’s no or the New Testament’s yes, the green light of Christianity or the red light of Islam and Judaism?

Neither Taleb nor Peterson appear to answer these questions. But many evolutionary psychologists have: they say we should not blindly accept any religious edict, because none contain any inherent wisdom. The dominant view among evolutionary psychologists is that religion is not an evolutionary adaptation but a “spandrel,” a by-product of other adaptations. Richard Dawkins has compared religion to the tendency of moths to fly into flames: the moth did not evolve to fly into flames; it evolved to navigate by the light of the moon. Since it’s unable to distinguish between moonlight and candlelight, its attempt to keep a candle-flame in a fixed ommatidium (unit of a compound eye) causes it to keep veering around the flame, until it spirals into it. Dawkins argues that religion didn’t evolve for a purpose; it merely exploits the actual systems we evolved to navigate the world. An example of such a system might be what psychologist Justin Barrett calls the Hyperactive Agent Detection Device, the propensity to see natural phenomena as products of design. Basically, in our evolutionary history, mistaking a natural phenomenon for an artifact was far less risky than mistaking an artifact for a natural phenomenon, so our brains erred toward the former.

The info is here.

Tuesday, June 23, 2020

The Neuroscience of Moral Judgment: Empirical and Philosophical Developments

J. May, C. I. Workman, J. Haas, & H. Han
Forthcoming in Neuroscience and Philosophy,
eds. Felipe de Brigard & Walter Sinnott-Armstrong (MIT Press).

Abstract

We chart how neuroscience and philosophy have together advanced our understanding of moral judgment with implications for when it goes well or poorly. The field initially focused on brain areas associated with reason versus emotion in the moral evaluations of sacrificial dilemmas. But new threads of research have studied a wider range of moral evaluations and how they relate to models of brain development and learning. By weaving these threads together, we are developing a better understanding of the neurobiology of moral judgment in adulthood and to some extent in childhood and adolescence. Combined with rigorous evidence from psychology and careful philosophical analysis, neuroscientific evidence can even help shed light on the extent of moral knowledge and on ways to promote healthy moral development.

From the Conclusion

6.1 Reason vs. Emotion in Ethics

The dichotomy between reason and emotion stretches back to antiquity. But an improved understanding of the brain has, arguably more than psychological science, questioned the dichotomy (Huebner 2015; Woodward 2016). Brain areas associated with prototypical emotions, such as vmPFC and amygdala, are also necessary for complex learning and inference, even if largely automatic and unconscious. Even psychopaths, often painted as the archetype of emotionless moral monsters, have serious deficits in learning and inference. Moreover, even if our various moral judgments about trolley problems, harmless taboo violations, and the like are often automatic, they are nonetheless acquired through sophisticated learning mechanisms that are responsive to morally-relevant reasons (Railton 2017; Stanley et al. 2019). Indeed, normal moral judgment often involves gut feelings being attuned to relevant experience and made consistent with our web of moral beliefs (May & Kumar 2018).

The paper can be downloaded here.

Thursday, December 12, 2019

Donald Hoffman: The Case Against Reality

The Institute of Arts and Ideas
Originally published September 8, 2019


Many scientists believe that natural selection brought our perception of reality into clearer and deeper focus, reasoning that growing more attuned to the outside world gave our ancestors an evolutionary edge. Donald Hoffman, a cognitive scientist at the University of California, Irvine, thinks that just the opposite is true. Because evolution selects for survival, not accuracy, he proposes that our conscious experience masks reality behind millennia of adaptations for ‘fitness payoffs’ – an argument supported by his work running evolutionary game-theory simulations. In this interview recorded at the HowTheLightGetsIn Festival from the Institute of Arts and Ideas in 2019, Hoffman explains why he believes that perception must necessarily hide reality for conscious agents to survive and reproduce. With that view serving as a springboard, the wide-ranging discussion also touches on Hoffman’s consciousness-centric framework for reality, and its potential implications for our everyday lives.

Editor Note: If you work as a mental health professional, this video may be helpful in understanding perceptions, understanding self, and consciousness.

Thursday, December 5, 2019

Galileo’s Big Mistake

Philip Goff
Scientific American Blog
Originally posted November 7, 2019

Here is an excerpt:

Galileo, as it were, stripped the physical world of its qualities; and after he’d done that, all that remained were the purely quantitative properties of matter—size, shape, location, motion—properties that can be captured in mathematical geometry. In Galileo’s worldview, there is a radical division between the following two things:
  • The physical world with its purely quantitative properties, which is the domain of science,
  • Consciousness, with its qualities, which is outside of the domain of science.
It was this fundamental division that allowed for the possibility of mathematical physics: once the qualities had been removed, all that remained of the physical world could be captured in mathematics. And hence, natural science, for Galileo, was never intended to give us a complete description of reality. The whole project was premised on setting qualitative consciousness outside of the domain of science.

What do these 17th century discussions have to do with the contemporary science of consciousness? It is now broadly agreed that consciousness poses a very serious challenge for contemporary science. Despite rapid progress in our understanding of the brain, we still have no explanation of how complex electrochemical signaling could give rise to a subjective inner world of colors, sounds, smells and tastes.

Although this problem is taken very seriously, many assume that the way to deal with this challenge is simply to continue with our standard methods for investigating the brain. The great success of physical science in explaining more and more of our universe ought to give us confidence, it is thought, that physical science will one day crack the puzzle of consciousness.

The blog post is here.

Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Sunday, March 3, 2019

When and why people think beliefs are “debunked” by scientific explanations for their origins

Dillon Plunkett, Lara Buchak, and Tania Lombrozo

Abstract

How do scientific explanations for beliefs affect people’s confidence in those beliefs? For example, do people think neuroscientific explanations for religious belief support or challenge belief in God? In five experiments, we find that the effects of scientific explanations for belief depend on whether the explanations imply normal or abnormal functioning (e.g., if a neural mechanism is doing what it evolved to do). Experiments 1 and 2 find that people think brain based explanations for religious, moral, and scientific beliefs corroborate those beliefs when the explanations invoke a normally functioning mechanism, but not an abnormally functioning mechanism. Experiment 3 demonstrates comparable effects for other kinds of scientific explanations (e.g., genetic explanations). Experiment 4 confirms that these effects derive from (im)proper functioning, not statistical (in)frequency. Experiment 5 suggests that these effects interact with people’s prior beliefs to produce motivated judgments: People are more skeptical of scientific explanations for their own beliefs if the explanations appeal to abnormal functioning, but they are less skeptical of scientific explanations of opposing beliefs if the explanations appeal to abnormal functioning. These findings suggest that people treat “normality” as a proxy for epistemic reliability and reveal that folk epistemic commitments shape attitudes towards scientific explanations.

The research is here.

Monday, February 11, 2019

Escape the echo chamber

By C Thi Nguyen
aeon.co
Originally posted April 9, 2018

Here is an excerpt:

Epistemic bubbles also threaten us with a second danger: excessive self-confidence. In a bubble, we will encounter exaggerated amounts of agreement and suppressed levels of disagreement. We’re vulnerable because, in general, we actually have very good reason to pay attention to whether other people agree or disagree with us. Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly. This is why we might do our homework in study groups, and have different laboratories repeat experiments. But not all forms of corroboration are meaningful. Ludwig Wittgenstein says: imagine looking through a stack of identical newspapers and treating each next newspaper headline as yet another reason to increase your confidence. This is obviously a mistake. The fact that The New York Times reports something is a reason to believe it, but any extra copies of The New York Times that you encounter shouldn’t add any extra evidence.

But outright copies aren’t the only problem here. Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection. The group’s unanimity is simply an echo of my selection criterion. It’s easy to forget how carefully pre-screened the members are, how epistemically groomed social media circles might be.
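Editor Note: Nguyen's point about corroboration can be made concrete with a toy Bayesian sketch: independent sources each add evidence, while duplicated (or pre-screened) sources should not. The likelihoods below are made up for illustration and are not from the essay.

```python
# Toy Bayesian illustration: corroboration only counts when sources are
# independent. Priors and likelihoods are made up for the example.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update on a source reporting that the claim is true."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

prior = 0.5

# Five genuinely independent sources, each modestly reliable.
credence = prior
for _ in range(5):
    credence = update(credence, 0.7, 0.3)
print(f"independent sources: {credence:.3f}")   # ~0.986

# Five copies of the same newspaper: only the first carries evidence;
# the rest agree with it whether the claim is true or false.
credence = update(prior, 0.7, 0.3)
for _ in range(4):
    credence = update(credence, 1.0, 1.0)       # no change
print(f"duplicate sources:   {credence:.3f}")   # 0.700
```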

The information is here.

Tuesday, January 8, 2019

The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies.

Sandra Monteiro, Geoff Norman, & Jonathan Sherbino
J Eval Clin Pract. 2018 Jun;24(3):666-673.

Abstract

There is general consensus that clinical reasoning involves 2 stages: a rapid stage where 1 or more diagnostic hypotheses are advanced and a slower stage where these hypotheses are tested or confirmed. The rapid hypothesis generation stage is considered inaccessible for analysis or observation. Consequently, recent research on clinical reasoning has focused specifically on improving the accuracy of the slower, hypothesis confirmation stage. Three perspectives have developed in this line of research, and each proposes different error reduction strategies for clinical reasoning. This paper considers these 3 perspectives and examines the underlying assumptions. Additionally, this paper reviews the evidence, or lack of, behind each class of error reduction strategies. The first perspective takes an epidemiological stance, appealing to the benefits of incorporating population data and evidence-based medicine in every day clinical reasoning. The second builds on the heuristic and bias research programme, appealing to a special class of dual process reasoning models that theorizes a rapid error prone cognitive process for problem solving with a slower more logical cognitive process capable of correcting those errors. Finally, the third perspective borrows from an exemplar model of categorization that explicitly relates clinical knowledge and experience to diagnostic accuracy.

A pdf can be downloaded here.

Wednesday, May 16, 2018

Escape the Echo Chamber

C Thi Nguyen
www.medium.com
Originally posted April 12, 2018

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making — wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission. That omission might be purposeful: we might be selectively avoiding contact with contrary views because, say, they make us uncomfortable. As social scientists tell us, we like to engage in selective exposure, seeking out information that confirms our own worldview. But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests. When we take networks built for social reasons and start using them as our information feeds, we tend to miss out on contrary views and run into exaggerated degrees of agreement.

The information is here.

Thursday, May 3, 2018

Why Pure Reason Won’t End American Tribalism

Robert Wright
www.wired.com
Originally published April 9, 2018

Here is an excerpt:

Pinker also understands that cognitive biases can be activated by tribalism. “We all identify with particular tribes or subcultures,” he notes—and we’re all drawn to opinions that are favored by the tribe.

So far so good: These insights would seem to prepare the ground for a trenchant analysis of what ails the world—certainly including what ails an America now famously beset by political polarization, by ideological warfare that seems less and less metaphorical.

But Pinker’s treatment of the psychology of tribalism falls short, and it does so in a surprising way. He pays almost no attention to one of the first things that springs to mind when you hear the word “tribalism.” Namely: People in opposing tribes don’t like each other. More than Pinker seems to realize, the fact of tribal antagonism challenges his sunny view of the future and calls into question his prescriptions for dispelling some of the clouds he does see on the horizon.

I’m not talking about the obvious downside of tribal antagonism—the way it leads nations to go to war or dissolve in civil strife, the way it fosters conflict along ethnic or religious lines. I do think this form of antagonism is a bigger problem for Pinker’s thesis than he realizes, but that’s a story for another day. For now the point is that tribal antagonism also poses a subtler challenge to his thesis. Namely, it shapes and drives some of the cognitive distortions that muddy our thinking about critical issues; it warps reason.

The article is here.

Monday, March 5, 2018

Donald Trump and the rise of tribal epistemology

David Roberts
Vox.com
Originally posted May 19, 2017 and still extremely important

Here is an excerpt:

Over time, this leads to what you might call tribal epistemology: Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.

Now tribal epistemology has found its way to the White House.

Donald Trump and his team represent an assault on almost every American institution — they make no secret of their desire to “deconstruct the administrative state” — but their hostility toward the media is unique in its intensity.

It is Trump’s obsession and favorite target. He sees himself as waging a “running war” on the mainstream press, which his consigliere Steve Bannon calls “the opposition party.”

The article is here.

Saturday, November 25, 2017

Rather than being free of values, good science is transparent about them

Kevin Elliott
The Conversation
Originally published November 8, 2017

Scientists these days face a conundrum. As Americans are buffeted by accounts of fake news, alternative facts and deceptive social media campaigns, how can researchers and their scientific expertise contribute meaningfully to the conversation?

There is a common perception that science is a matter of hard facts and that it can and should remain insulated from the social and political interests that permeate the rest of society. Nevertheless, many historians, philosophers and sociologists who study the practice of science have come to the conclusion that trying to kick values out of science risks throwing the baby out with the bathwater.

Ethical and social values – like the desire to promote economic development, public health or environmental protection – often play integral roles in scientific research. By acknowledging this, scientists might seem to give away their authority as a defense against the flood of misleading, inaccurate information that surrounds us. But I argue in my book “A Tapestry of Values: An Introduction to Values in Science” that if scientists take appropriate steps to manage and communicate about their values, they can promote a more realistic view of science as both value-laden and reliable.

The article is here.

Wednesday, November 22, 2017

Many Academics Are Eager to Publish in Worthless Journals

Gina Kolata
The New York Times
Originally published October 30, 2017

Here is an excerpt:

Yet “every university requires some level of publication,” said Lawrence DiPaolo, vice president of academic affairs at Neumann University in Aston, Pa.

Recently a group of researchers invented a fake academic: Anna O. Szust. The name in Polish means fraudster. Dr. Szust applied to legitimate and predatory journals asking to be an editor. She supplied a résumé in which her publications and degrees were total fabrications, as were the names of the publishers of the books she said she had contributed to.

The legitimate journals rejected her application immediately. But 48 out of 360 questionable journals made her an editor. Four made her editor in chief. One journal sent her an email saying, “It’s our pleasure to add your name as our editor in chief for the journal with no responsibilities.”

The lead author of the Dr. Szust sting operation, Katarzyna Pisanski, a psychologist at the University of Sussex in England, said the question of what motivates people to publish in such journals “is a touchy subject.”

“If you were tricked by spam email you might not want to admit it, and if you did it wittingly to increase your publication counts you might also not want to admit it,” she said in an email.

The consequences of participating can be more than just a résumé freckled with poor-quality papers and meeting abstracts.

Publications become part of the body of scientific literature.

There are indications that some academic institutions are beginning to wise up to the dangers.

Dewayne Fox, an associate professor of fisheries at Delaware State University, sits on a committee at his school that reviews job applicants. One recent applicant, he recalled, listed 50 publications in such journals and is on the editorial boards of some of them.

A few years ago, he said, no one would have noticed. But now he and others on search committees at his university have begun scrutinizing the publications closely to see if the journals are legitimate.

The article is here.

Monday, November 20, 2017

Why we pretend to know things, explained by a cognitive scientist

Sean Illing
Vox.com
Originally posted November 3, 2017

Why do people pretend to know things? Why does confidence so often scale with ignorance? Steven Sloman, a professor of cognitive science at Brown University, has some compelling answers to these questions.

“We're biased to preserve our sense of rightness,” he told me, “and we have to be.”

The author of The Knowledge Illusion: Why We Never Think Alone, Sloman’s research focuses on judgment, decision-making, and reasoning. He’s especially interested in what’s called “the illusion of explanatory depth.” This is how cognitive scientists refer to our tendency to overestimate our understanding of how the world works.

We do this, Sloman says, because of our reliance on other minds.

“The decisions we make, the attitudes we form, the judgments we make, depend very much on what other people are thinking,” he said.

If the people around us are wrong about something, there’s a good chance we will be too. Proximity to truth compounds in the same way.

In this interview, Sloman and I talk about the problem of unjustified belief. I ask him about the political implications of his research, and if he thinks the rise of “fake news” and “alternative facts” has amplified our cognitive biases.

The interview/article is here.

Tuesday, August 29, 2017

Must science be testable?

Massimo Pigliucci
Aeon
Originally published August 10, 2016

Here is an excerpt:

That said, the publicly visible portion of the physics community nowadays seems split between people who are openly dismissive of philosophy and those who think they got the pertinent philosophy right but their ideological opponents haven’t. At stake isn’t just the usually tiny academic pie, but public appreciation of and respect for both the humanities and the sciences, not to mention millions of dollars in research grants (for the physicists, not the philosophers). Time, therefore, to take a more serious look at the meaning of Popper’s philosophy and why it is still very much relevant to science, when properly understood.

As we have seen, Popper’s message is deceptively simple, and – when repackaged in a tweet – has in fact deceived many a smart commentator in underestimating the sophistication of the underlying philosophy. If one were to turn that philosophy into a bumper sticker slogan it would read something like: ‘If it ain’t falsifiable, it ain’t science, stop wasting your time and money.’

But good philosophy doesn’t lend itself to bumper sticker summaries, so one cannot stop there and pretend that there is nothing more to say. Popper himself changed his mind throughout his career about a number of issues related to falsification and demarcation, as any thoughtful thinker would do when exposed to criticisms and counterexamples from his colleagues. For instance, he initially rejected any role for verification in establishing scientific theories, thinking that it was far too easy to ‘verify’ a notion if one were actively looking for confirmatory evidence. Sure enough, modern psychologists have a name for this tendency, common to laypeople as well as scientists: confirmation bias.

Nonetheless, later on Popper conceded that verification – especially of very daring and novel predictions – is part of a sound scientific approach. After all, the reason Einstein became a scientific celebrity overnight after the 1919 total eclipse is precisely because astronomers had verified the predictions of his theory all over the planet and found them in satisfactory agreement with the empirical data.

The article is here.