Gregory E. Kaebnick
Boston Review
Originally published 30 April 2021
Here is an excerpt:
The way to square this circle is to acknowledge that what objectivity science is able to deliver derives not from individual scientists but from the social institutions and practices that structure their work. The philosopher of science Karl Popper expressed this idea clearly in his 1945 book The Open Society and Its Enemies. “There is no doubt that we are all suffering under our own system of prejudices,” he acknowledged—“and scientists are no exception to this rule.” But this is no threat to objectivity, he argued—not because scientists manage to liberate themselves from their prejudices, but rather because objectivity is “closely bound up with the social aspect of scientific method.” In particular, “science and scientific objectivity do not (and cannot) result from the attempts of an individual scientist to be ‘objective,’ but from the friendly-hostile co-operation of many scientists.” Thus Robinson Crusoe cannot be a scientist, “For there is nobody but himself to check his results.”
More recently, philosophers and historians of science such as Helen Longino, Miriam Solomon, and Naomi Oreskes have developed detailed arguments along similar lines, showing how the integrity and objectivity of scientific knowledge depend crucially on social practices. Science even sometimes advances not in spite of but because of scientists’ filters and biases—whether a tendency to focus single-mindedly on a particular set of data, a desire to beat somebody else to an announcement, a contrarian streak, or an overweening self-confidence. Any vision of science that makes it depend on complete disinterestedness is doomed to make science impossible. Instead, we must develop a more widespread appreciation of the way science depends on protocols and norms that scientists have collectively developed for testing, refining, and disseminating scientific knowledge. A scientist must be able to show that research has met investigative standards, that it has been exposed to criticism, and that criticisms can be met with arguments.
The implication is that science works not so much because scientists have a special ability to filter out their biases or to access the world as it really is, but instead because they are adhering to a social process that structures their work—constraining and channeling their predispositions and predilections, their moments of eureka, their large yet inevitably limited understanding, their egos and their jealousies. These practices and protocols, these norms and standards, do not guarantee that mistakes are never made. But nothing can make that guarantee. The rules of the game are themselves open to scrutiny and revision in light of argument, and that is the best we can hope for.
This way of understanding science fares better than the exalted view, which makes scientific knowledge impossible. Like all human endeavors, science is fallible, but still it warrants belief—according to how well it adheres to rules we have developed for it. What makes for objectivity and expertise is not, or not merely, the simple alignment between what one claims and how the world is, but a commitment to a process that is accepted as producing empirical adequacy.