Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Epistemology.

Wednesday, August 16, 2017

What Does Patient Autonomy Mean for Doctors and Drug Makers?

Christina Sandefur
The Conversation
Originally published July 26, 2017

Here is an excerpt:

Although Bateman-House fears that deferring to patients comes at the expense of physician autonomy, she also laments that physicians currently abuse the freedom they have, failing to spend enough time with their patients, which she says undermines a patient’s ability to make informed medical decisions.

Even if it’s true that physician consultations aren’t as thorough as they once were, patients today have better access to health care information than ever before. According to the Pew Research Center, two-thirds of U.S. adults have broadband internet in their homes, and 13 percent who lack it can access the internet through a smartphone. Pew reports that more than half of adult internet users go online to get information on medical conditions, 43 percent on treatments, and 16 percent on drug safety. Yet despite their desire to research these issues online, 70 percent still sought out additional information from a doctor or other professional.

In other words, people are making greater efforts to learn about health care on their own. True, not all such information on the internet is accurate. But encouraging patients to seek out information from multiple sources is a good thing. In fact, requiring government approval of treatments may lull patients into a false sense of security. As Connor Boyack, president of the Libertas Institute, points out, “Instead of doing their own due diligence and research, the overwhelming majority of people simply concern themselves with whether or not the FDA says a certain product is okay to use.” But blind reliance on a government bureaucracy is rarely a good idea.

The article can be found here.

Wednesday, June 14, 2017

Should We Outsource Our Moral Beliefs to Others?

Grace Boey
3 Quarks Daily
Originally posted May 29, 2017

Here is an excerpt:

Setting aside the worries above, there is one last matter that many philosophers take to be the most compelling candidate for the oddity of outsourcing our moral beliefs to others. As moral agents, we’re interested in more than just accumulating as many true moral beliefs as possible, such as ‘abortion is permissible’, or ‘killing animals for sport is wrong’. We also value things such as developing moral understanding, cultivating virtuous characters, having appropriate emotional reactions, and the like. Although moral deference might allow us to acquire bare moral knowledge from others, it doesn’t allow us to reflect or cultivate these other moral goods which are central to our moral identity.

Consider the value we place on understanding why we think our moral beliefs are true. Alison Hills notes that pure moral deference can’t get us to such moral understanding. When Bob defers unquestioningly to Sally’s judgment that abortion is morally permissible, he lacks an understanding of why this might be true. Amongst other things, this prevents Bob from being able to articulate, in his own words, the reasons behind this claim. This seems strange enough in itself, and Hills argues for at least two reasons why Bob’s situation is a bad one. For one, Bob’s lack of moral understanding prevents him from acting in a morally worthy way. Bob wouldn’t deserve any moral praise for, say, shutting down someone who harasses women who undergo the procedure.

Moreover, Bob’s lack of moral understanding seems to reflect a lack of good moral character, or virtue. Bob’s belief that ‘late-term abortion is permissible’ isn’t integrated with the rest of his thoughts, motivations, emotions, and decisions. Moral understanding, of course, isn’t all that matters for virtue and character. But philosophers who disagree with Hills on this point, like Robert Howell and Errol Lord, also note that moral deference reflects a lack of virtue and character in other ways, and can prevent the cultivation of these traits.

The article is here.

Sunday, April 30, 2017

Why Expertise Matters

Adam Frank
npr.org
Originally posted on April 7, 2017

Here is an excerpt:

The attack on expertise was given its most visceral form by British politician Michael Gove during the Brexit campaign last year when he famously claimed, "people in this country have had enough of experts." The same kinds of issues, however, are also at stake here in the U.S. in our discussions about "alternative facts," "fake news" and "denial" of various kinds. That issue can be put as a simple question: When does one opinion count more than another?

By definition, an expert is someone whose learning and experience lets them understand a subject deeper than you or I do (assuming we're not an expert in that subject, too). The weird thing about having to write this essay at all is this: Who would have a problem with that? Doesn't everyone want their brain surgery done by an expert surgeon rather than the guy who fixes their brakes? On the other hand, doesn't everyone want their brakes fixed by an expert auto mechanic rather than a brain surgeon who has never fixed a flat?

Every day, all of us entrust our lives to experts from airline pilots to pharmacists. Yet, somehow, we've come to a point where people can put their ignorance on a subject of national importance on display for all to see — and then call it a virtue.

Here at 13.7, we've seen this phenomenon many times. When we had a section for comments, it would quickly fill up with statements like "the climate is always changing" or "CO2 is a trace gas so it doesn't matter" whenever we posted pieces on the science of climate change.

The article is here.

Friday, April 28, 2017

How rational is our rationality?

Interview by Richard Marshall
3 AM Magazine
Originally posted March 18, 2017

Here is an excerpt:

As I mentioned earlier, I think that the point of the study of rationality, and of normative epistemology more generally, is to help us figure out how to inquire, and the aim of inquiry, I believe, is to get at the truth. This means that there had better be a close connection between what we conclude about what’s rational to believe, and what we expect to be true. But it turns out to be very tricky to say what the nature of this connection is! For example, we know that sometimes evidence can mislead us, and so rational beliefs can be false. This means that there’s no guarantee that rational beliefs will be true. The goal of the paper is to get clear about why, and to what extent, it nonetheless makes sense to expect that rational beliefs will be more accurate than irrational ones. One reason this should be of interest to non-philosophers is that if it turns out that there isn’t some close connection between rationality and truth, then we should be much less critical of people with irrational beliefs. They may reasonably say: “Sure, my belief is irrational – but I care about the truth, and since my irrational belief is true, I won’t abandon it!” It seems like there’s something wrong with this stance, but to justify why it’s wrong, we need to get clear on the connection between a judgment about a belief’s rationality and a judgment about its truth. The account I give is difficult to summarize in just a few sentences, but I can say this much: what we say about the connection between what’s rational and what’s true will depend on whether we think it’s rational to doubt our own rationality. If it can be rational to doubt our own rationality (which I think is plausible), then the connection between rationality and truth is, in a sense, surprisingly tenuous.
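
The tenuous-but-real connection between rationality and accuracy can be made vivid in miniature. Below is a small simulation, purely illustrative and not the interviewee's own account: a binary fact, a handful of noisy evidence signals (70 percent reliable, an assumed figure), one believer who follows the majority of the evidence and one who ignores it. The evidence-follower is sometimes misled into a false belief, yet is far more accurate on average.

```python
import random

def simulate(trials=100_000, n_signals=5, reliability=0.7):
    """Toy model: a binary fact plus noisy evidence about it.

    Each of n_signals pieces of evidence reports the fact correctly
    with probability `reliability` (an assumed, illustrative figure).
    """
    rational_correct = random_correct = 0
    for _ in range(trials):
        fact = random.random() < 0.5
        signals = [fact if random.random() < reliability else not fact
                   for _ in range(n_signals)]
        # The "rational" believer follows the majority of the evidence;
        # when the evidence misleads, this belief is rational but false.
        rational_belief = sum(signals) > n_signals / 2
        # The "irrational" believer ignores the evidence entirely.
        random_belief = random.random() < 0.5
        rational_correct += rational_belief == fact
        random_correct += random_belief == fact
    return rational_correct / trials, random_correct / trials

rational_acc, random_acc = simulate()
print(f"evidence-following accuracy: {rational_acc:.3f}")  # ~0.84
print(f"evidence-ignoring accuracy:  {random_acc:.3f}")   # ~0.50
```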

The interview is here.

Tuesday, March 28, 2017

Why We Believe Obvious Untruths

Philip Fernbach & Steven Sloman
The New York Times
Originally published March 3, 2017

How can so many people believe things that are demonstrably false? The question has taken on new urgency as the Trump administration propagates falsehoods about voter fraud, climate change and crime statistics that large swaths of the population have bought into. But collective delusion is not new, nor is it the sole province of the political right. Plenty of liberals believe, counter to scientific consensus, that G.M.O.s are poisonous, and that vaccines cause autism.

The situation is vexing because it seems so easy to solve. The truth is obvious if you bother to look for it, right? This line of thinking leads to explanations of the hoodwinked masses that amount to little more than name calling: “Those people are foolish” or “Those people are monsters.”

Such accounts may make us feel good about ourselves, but they are misguided and simplistic: They reflect a misunderstanding of knowledge that focuses too narrowly on what goes on between our ears. Here is the humbler truth: On their own, individuals are not well equipped to separate fact from fiction, and they never will be. Ignorance is our natural state; it is a product of the way the mind works.

What really sets human beings apart is not our individual mental capacity. The secret to our success is our ability to jointly pursue complex goals by dividing cognitive labor. Hunting, trade, agriculture, manufacturing — all of our world-altering innovations — were made possible by this ability. Chimpanzees can surpass young children on numerical and spatial reasoning tasks, but they cannot come close on tasks that require collaborating with another individual to achieve a goal. Each of us knows only a little bit, but together we can achieve remarkable feats.

Monday, October 31, 2016

A Plan To Defend Against the War on Science

By Shawn Otto
Scientific American
Originally published October 9, 2016

Here is an excerpt:

In the years since, the situation has gotten worse. We’ve seen the emergence of a “post-fact” politics, which has normalized the denial of scientific evidence that conflicts with the political, religious or economic agendas of authority. Much of this denial centers, now somewhat predictably, around climate change—but not all. If there is a single factor to consider as a barometer that evokes all others in this election, it is the candidates’ attitudes toward science.

Consider, for example, what has been occurring in Congress. Rep. Lamar Smith, the Texas Republican who chairs the House Committee on Science, Space and Technology, is a climate change denier. Smith has used his post to initiate a series of McCarthy-style witch-hunts, issuing subpoenas and demanding private correspondence and testimony from scientists, civil servants, government science agencies, attorneys general and nonprofit organizations whose work shows that global warming is happening, humans are causing it and that—surprise—energy companies sought to sow doubt about this fact.

The article is here.

Sunday, August 14, 2016

The Ethics of Artificial Intelligence in Intelligence Agencies

Cortney Weinbaum
The National Interest
Originally published July 18, 2016

Here is an excerpt:

Consider what could happen if the intelligence community creates a policy similar to the Pentagon directive and requires a human operator be allowed to intervene at any moment. One day the computer warns of an imminent attack, but the human analyst disagrees with the AI intelligence assessment. Does the CIA warn the president that an attack is about to occur? How is the human analyst’s assessment valued against the AI-generated intelligence?

 Or imagine that a highly sophisticated foreign country infiltrates the most sensitive U.S. intelligence systems, gains access to the algorithms and replaces the programming code with its own. The hacked AI system is no longer capable of providing accurate intelligence on that country.

The article is here.

Thursday, July 7, 2016

The Mistrust of Science

By Atul Gawande
The New Yorker
Originally posted June 10, 2016

Here are two excerpts:

The scientific orientation has proved immensely powerful. It has allowed us to nearly double our lifespan during the past century, to increase our global abundance, and to deepen our understanding of the nature of the universe. Yet scientific knowledge is not necessarily trusted. Partly, that’s because it is incomplete. But even where the knowledge provided by science is overwhelming, people often resist it—sometimes outright deny it. Many people continue to believe, for instance, despite massive evidence to the contrary, that childhood vaccines cause autism (they do not); that people are safer owning a gun (they are not); that genetically modified crops are harmful (on balance, they have been beneficial); that climate change is not happening (it is).

(cut)

People are prone to resist scientific claims when they clash with intuitive beliefs. They don’t see measles or mumps around anymore. They do see children with autism. And they see a mom who says, “My child was perfectly fine until he got a vaccine and became autistic.”

Now, you can tell them that correlation is not causation. You can say that children get a vaccine every two to three months for the first couple years of their life, so the onset of any illness is bound to follow vaccination for many kids. You can say that the science shows no connection. But once an idea has got embedded and become widespread, it becomes very difficult to dig it out of people’s brains—especially when they do not trust scientific authorities. And we are experiencing a significant decline in trust in scientific authorities.
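
Gawande's point that onset is "bound to follow vaccination" is a base-rate observation, and a toy calculation makes it concrete. The sketch below assumes a simplified schedule of one shot every two months through age two (an illustrative assumption, not the actual immunization schedule) and asks how often a randomly timed symptom onset would, by coincidence alone, fall in the month right after a shot.

```python
import random

def coincidental_followups(window_months=1.0, trials=100_000):
    """If symptom onset were unrelated to vaccination, how often would
    it still fall within `window_months` after a shot?

    Assumed schedule: one shot every 2 months through age 24 months
    (an illustrative simplification, not a real immunization schedule).
    """
    shots = range(2, 25, 2)
    hits = 0
    for _ in range(trials):
        onset = random.uniform(0, 24)  # onset age in months, pure chance
        if any(0 <= onset - s <= window_months for s in shots):
            hits += 1
    return hits / trials

print(f"{coincidental_followups():.0%} of random onsets follow a shot")
# Roughly half of purely coincidental onsets land in the month right
# after a vaccination, which is the "bound to follow" point.
```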

The article is here.

Wednesday, June 15, 2016

There’s Argument, and there’s Disputation

by Iain Brassington
British Medical Journal Blogs
Originally posted June 6, 2016

Here is an excerpt:

Basically, the problem is this: that the model for debating contests is, presumably, based around the idea that debate is an effective way to whittle bad ideas away from good; if each participant is a doughty falsificationist, and equally able in debate as his opponent, then at the end of a process of debate, we’ll be closer to the truth of the matter than we were at the start.  So far, so good.  But there’s a handful of fairly obvious problems with that model.  First, that doesn’t lend itself to the idea that there is a winner and a loser in any particular debate.  Second, a shoddy argument presented by a good speaker might win a competitive debate over a good argument presented by a diffident speaker.  We might hope that a competent judge would account for that, but it’d be better if there wasn’t any need to solve what looks to be a structural problem to begin with.  Third – which is related, but probably more importantly when it comes to ethics – someone with a good understanding of the moral arguments and who is a decent orator might stand a fair chance of winning an argument; but it doesn’t follow that a good orator who’s won an argument has any particular understanding of the moral arguments.  Debating contests reward people for being good at debate; but that’s presumably not the true end of ethics education.  Fourth, this kind of strategy is possibly OK in politics, in which the point of oratory is to persuade people to adopt a certain cause; and so debating competitions might provide training for that.  (I suspect that that’s something like the rationale behind things like the IofI’s competition in schools: it’s directed at developing a certain set of skills, with one eye on a vivacious public debate.  Whatever my private suspicions of the IofI generally, that doesn’t seem like a bad idea.)  But ethical debate is qualitatively different.  It isn’t really about winning converts.  Or, at least: one might hope that a convincing argument would have moral gravity and attract agreement, but the mood of the thing is different.

The article is here.

Wednesday, May 18, 2016

Biological determinism and its enemies

Radosław Zyzik
Philosophy in Neuroscience, eds. Jerzy Stelmach, Bartosz Brożek, Łukasz Kurek, Copernicus Center Press 2012.

Here is an excerpt:

Little research (if any) has addressed the problem of determinism from more than one perspective at the same time. On the one hand, one can read about the neuroscience of free will and the renaissance of determinism due to the work of neuroscientists. On the other, a new face of genetic determinism is discussed as a result of the progress made in genetics. Moreover, today we can also learn about the impact of biological factors on the development of model organisms in neurogenetics. With this in mind, we have tried to investigate how determinism is understood in neuroscience, behavioural genetics and in a new discipline which combines knowledge from many disciplines – neurogenetics.

We believe that only such a broad perspective will eventually allow an understanding of determinism in biology with all of its shortcomings. Therefore, the aim of our study is to evaluate the philosophical interpretations of neuroscientific, genetic and neurogenetic experiments that can be seen to be in line with the thesis of biological determinism. The paper re-examines the tacit philosophical assumptions, applied methodology and interpretation of the results of the experiments.

The book chapter is here.

Tuesday, May 10, 2016

Cadaver study casts doubts on how zapping brain may boost mood, relieve pain

By Emily Underwood
Science
Originally posted April 20, 2016

Here is an excerpt:

Buzsáki expects a living person’s skin would shunt even more current away from the brain because it is better hydrated than a cadaver’s scalp. He agrees, however, that low levels of stimulation may have subtle effects on the brain that fall short of triggering neurons to fire. Electrical stimulation might also affect glia, brain cells that provide neurons with nutrients, oxygen, and protection from pathogens, and also can influence the brain’s electrical activity. “Further questions should be asked” about whether 1- to 2-milliamp currents affect those cells, he says.

Buzsáki, who still hopes to use such techniques to enhance memory, is more restrained than some critics. The tDCS field is “a sea of bullshit and bad science—and I say that as someone who has contributed some of the papers that have put gas in the tDCS tank,” says neuroscientist Vincent Walsh of University College London. “It really needs to be put under scrutiny like this.”
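
Buzsáki's shunting worry is, at bottom, a current-divider argument: current entering parallel paths splits in inverse proportion to their resistance, so a low-resistance scalp starves the high-resistance path through the skull. A rough sketch with made-up resistance values (illustrative assumptions, not measured tissue properties):

```python
def brain_fraction(r_scalp_ohm, r_brain_path_ohm):
    """Toy current divider: two parallel paths between the electrodes,
    one through scalp/skin and one through skull and brain.

    Parallel paths carry current in proportion to their conductance
    (1/R), so a low-resistance scalp takes most of the current.
    """
    g_scalp = 1.0 / r_scalp_ohm
    g_brain = 1.0 / r_brain_path_ohm
    return g_brain / (g_scalp + g_brain)

# Hypothetical numbers: a well-hydrated scalp at 1 kilo-ohm versus a
# 9 kilo-ohm skull-and-brain path leaves only a tenth of a 2 mA
# current for the brain. These values are assumptions, not data.
injected_ma = 2.0
frac = brain_fraction(r_scalp_ohm=1_000, r_brain_path_ohm=9_000)
print(f"current reaching brain: {injected_ma * frac:.2f} mA ({frac:.0%})")
```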

The article is here.

Editor's note:

This article demonstrates the importance of science in the treatment of human suffering. No one wants sham interventions.

However, the stimulation interventions may work, and work effectively, under other models of how the brain functions. The living brain creates an electromagnetic field that extends beyond the skull. Because a cadaver’s brain is no longer active, the shunting finding may be beside the point: the stimulation may act on that field rather than on the tissue directly. In other words, how these stimulation procedures influence the brain’s electromagnetic field may be a better model for explaining improvement.

Therefore, using cadavers to nullify what happens in living people may not be the best standard for evaluating a procedure aimed at brain activity. Still, the study is a step to consider and may help develop a better working model of what actually happens with tDCS.

By the way, scientists are not exactly certain how lithium or antidepressants work, either.

Wednesday, April 13, 2016

Stereotype Threat, Epistemic Injustice, and Rationality

Stacey Goguen
Draft, forthcoming (2016) in Brownstein and Saul (eds.), Implicit Bias and Philosophy, Vol. I, Oxford University Press.

Stereotype threat is most well-known for its ability to hinder performance. However, it actually has a wide range of effects. For instance, it can also cause stress, anxiety, and self-doubt. These additional effects are as important and as central to the phenomenon as its effects on performance are. As a result, stereotype threat has more far-reaching implications than many philosophers have realized. In particular, the phenomenon has a number of unexplored “epistemic effects.”

These are effects on our epistemic lives — i.e., the ways we engage with the world as actual and potential knowers. In this paper I flesh out the implications of a specific epistemic effect: self-doubt. Certain kinds of self-doubt can deeply affect our epistemic lives by exacerbating moments of epistemic injustice and by perniciously interacting with ideals of rationality. In both cases, self-doubt can lead to one questioning one’s own humanity or full personhood. Because stereotype threat can trigger this kind of self-doubt, it can affect various aspects of ourselves besides our ability to perform to our potential. It can also affect our very sense of self. In this paper, I argue that we should adopt a more comprehensive account of stereotype threat that explicitly acknowledges all of the known effects of the phenomenon. Doing so will allow us to better investigate the epistemological implications of stereotype threat, as well as the full extent of its reach into our lives. I focus on fleshing out stereotype threat’s effect of self-doubt, and how this effect can influence the very foundations of our epistemic lives. I do this by arguing that self-doubt from stereotype threat can constitute an epistemic injustice, and that this sort of self-doubt can be exacerbated by stereotypes of irrationality. As a result, self-doubt from stereotype threat can erode our faith in ourselves as full human persons and as rational, reliable knowers.

The full text is here.

Tuesday, April 12, 2016

Rationalization in Moral and Philosophical Thought

Eric Schwitzgebel and Jonathan Ellis

Abstract

Rationalization, in our intended sense of the term, occurs when a person favors a particular conclusion as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, if that factor then biases the person’s subsequent search for, and assessment of, potential justifications for the conclusion.  Empirical evidence suggests that rationalization is common in ordinary people’s moral and philosophical thought.  We argue that it is likely that the moral and philosophical thought of philosophers and moral psychologists is also pervaded by rationalization.  Moreover, although rationalization has some benefits, overall it would be epistemically better if the moral and philosophical reasoning of both ordinary people and professional academics were not as heavily influenced by rationalization as it likely is.  We discuss the significance of our arguments for cognitive management and epistemic responsibility.

The paper is here.

Monday, January 11, 2016

A Fight for the Soul of Science

By Natalie Wolchover
Quanta Magazine
Originally published December 16, 2015

Here are two excerpts:

Critics accuse string theory and the multiverse hypothesis, as well as cosmic inflation — the leading theory of how the universe began — of falling on the wrong side of Popper’s line of demarcation. To borrow the title of the Columbia University physicist Peter Woit’s 2006 book on string theory, these ideas are “not even wrong,” say critics. In their editorial, Ellis and Silk invoked the spirit of Popper: “A theory must be falsifiable to be scientific.”

(cut)

Nowadays, as several philosophers at the workshop said, Popperian falsificationism has been supplanted by Bayesian confirmation theory, or Bayesianism, a modern framework based on the 18th-century probability theory of the English statistician and minister Thomas Bayes. Bayesianism allows for the fact that modern scientific theories typically make claims far beyond what can be directly observed — no one has ever seen an atom — and so today’s theories often resist a falsified-unfalsified dichotomy. Instead, trust in a theory often falls somewhere along a continuum, sliding up or down between 0 and 100 percent as new information becomes available. “The Bayesian framework is much more flexible” than Popper’s theory, said Stephan Hartmann, a Bayesian philosopher at LMU. “It also connects nicely to the psychology of reasoning.”
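
That sliding continuum is just iterated application of Bayes’ rule: multiply the prior credence by how strongly the theory predicted the new observation, renormalize, repeat. A minimal sketch, with made-up prior and likelihoods for illustration:

```python
def update(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: posterior credence in a theory after one observation."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1.0 - prior))

# Hypothetical numbers: start at 50% credence in a theory that predicts
# each observation with probability 0.8, against a rival that gives 0.3.
credence = 0.5
for i in range(1, 6):
    credence = update(credence, p_obs_if_true=0.8, p_obs_if_false=0.3)
    print(f"after observation {i}: credence = {credence:.1%}")
# Credence slides up the 0-100% continuum rather than flipping between
# "falsified" and "unfalsified"; disconfirming data would slide it down.
```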

The entire article is here.

Saturday, December 19, 2015

Three Types of Moral Supervenience

By John Danaher
Philosophical Disquisitions
Originally published November 7, 2014

Here are two excerpts:

As you know, metaethics is about the ontology and epistemology of morality. Take a moral claim like “torturing innocent children for fun is wrong”. A metaethicist wants to know what, if anything, entitles us to make such a claim. On the ontological side, they want to know what it is that makes the torturing of innocent children wrong (what grounds or explains the ascription of that moral property to that event?). On the epistemological side, they wonder how it is that we come to know that the torturing of innocent children is wrong (how do we acquire moral knowledge?). Both questions are interesting — and vital to ask if you wish to develop a sensible worldview — but in discussing moral supervenience we are focused primarily on the ontological one.

(cut)

The supervenience of the moral on the non-moral is generally thought to give rise to a philosophical puzzle. J.L. Mackie famously argued that if the moral truly did supervene on the non-moral, then this was metaphysically “queer”. We were owed some plausible account of why this happens. He didn’t think we had such an account, which is one reason why he was a moral error theorist. Others are less pessimistic. They think there are ways to account for moral supervenience.

The blog post is here.

Saturday, November 7, 2015

The End of Expertise

By Bill Fischer
Harvard Business Review
Originally published October 19, 2015

Here is an excerpt:

Increasingly, expertise is losing the respect that for years had earned it premiums in any market where uncertainty was present and complex knowledge valued. Along with it, we are shedding our reverence for “expert evaluation,” losing our regard for our Michelin guides and casting our lot in with the peer-generated Yelps of the world.

Not only is the character of expertise changing, but at the same time, new client needs are emerging. Firms are fearful of being vulnerable to an unknown (not uncertain) future; and at the same time, conditioned by living in an internet world, they expect instant knowledge responses at reasonable prices. Expertise providers are finding that the models that they have long relied upon (e.g., the familiar five forces model) are losing some of their potency, as they are based upon assumed knowledge that is increasingly difficult to determine (What industry are we in? Who are our competitors? What are our core-competencies?), and are more like time-lapse photography in presentation than the customer’s contemporary expectations of real-time, virtual streaming engagement.

The entire article is here.

Tuesday, September 15, 2015

Explanatory Judgment, Moral Offense and Value-Free Science

By Matteo Colombo, Leandra Bucher, & Yoel Inbar
Review of Philosophy and Psychology
August 2015

Abstract

A popular view in philosophy of science contends that scientific reasoning is objective to the extent that the appraisal of scientific hypotheses is not influenced by moral, political, economic, or social values, but only by the available evidence. A large body of results in the psychology of motivated-reasoning has put pressure on the empirical adequacy of this view. The present study extends this body of results by providing direct evidence that the moral offensiveness of a scientific hypothesis biases explanatory judgment along several dimensions, even when prior credence in the hypothesis is controlled for. Furthermore, it is shown that this bias is insensitive to an economic incentive to be accurate in the evaluation of the evidence. These results contribute to call into question the attainability of the ideal of a value-free science.
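
“Controlled for” here means, roughly, that prior credence enters the statistical model as a covariate, so the offensiveness effect is whatever remains after credence has absorbed its share of the variance. A schematic illustration on simulated data (the coefficients, noise level, and variables are invented; this is not the authors’ analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Invented variables: how morally offensive each participant finds the
# hypothesis, and how credible they found it before seeing the evidence.
offensiveness = rng.uniform(0, 1, n)
prior_credence = rng.uniform(0, 1, n)

# Simulated explanatory-judgment scores with a built-in offensiveness
# effect of -0.5, which the regression should recover.
judgment = (1.0 - 0.5 * offensiveness + 0.8 * prior_credence
            + rng.normal(0, 0.2, n))

# Regress judgment on offensiveness WITH prior credence as a covariate.
# If the offensiveness coefficient stays negative, the bias is not
# explained away by credence: the sense of "controlled for" above.
X = np.column_stack([np.ones(n), offensiveness, prior_credence])
coef, *_ = np.linalg.lstsq(X, judgment, rcond=None)
print(f"offensiveness effect:  {coef[1]:+.2f}  (true value -0.50)")
print(f"prior-credence effect: {coef[2]:+.2f}  (true value +0.80)")
```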

The entire article is here.

Thursday, June 25, 2015

Why do humans reason? Arguments for an argumentative theory

By Hugo Mercier and Dan Sperber
Behavioral and Brain Sciences (2011) 34, 57–111
doi:10.1017/S0140525X10000968

Abstract:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.
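
That closing description of reasoning as an argumentative device can be caricatured in a few lines of code. In the toy model below (the evidence pool and search rule are invented for illustration), a one-sided searcher who keeps only supporting arguments ends up highly confident no matter what the pool actually contains, while a balanced searcher’s confidence tracks it:

```python
import random

def confidence_after_search(pool_support_rate, samples=10, one_sided=True):
    """Toy model of searching a pool of arguments.

    pool_support_rate: fraction of available arguments that support the
    reasoner's conclusion. A one-sided searcher keeps only supporting
    arguments; a balanced searcher keeps everything it finds.
    Confidence is the kept share of supporting arguments.
    """
    kept_for = kept_against = 0
    for _ in range(samples):
        if random.random() < pool_support_rate:
            kept_for += 1
        elif not one_sided:  # the one-sided searcher discards these
            kept_against += 1
    total = kept_for + kept_against
    return kept_for / total if total else 0.5

random.seed(1)
for rate in (0.3, 0.5, 0.7):
    biased = confidence_after_search(rate, one_sided=True)
    balanced = confidence_after_search(rate, one_sided=False)
    print(f"support in pool {rate:.0%}: "
          f"one-sided searcher {biased:.0%}, balanced {balanced:.0%}")
```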

The entire article is here.

Monday, June 22, 2015

The Attack on Truth

We have entered an age of willful ignorance

By Lee McIntyre
The Chronicle of Higher Education
Originally published June 8, 2015

To see how we treat the concept of truth these days, one might think we just don’t care anymore. Politicians pronounce that global warming is a hoax. An alarming number of middle-class parents have stopped giving their children routine vaccinations, on the basis of discredited research. Meanwhile many commentators in the media — and even some in our universities — have all but abandoned their responsibility to set the record straight. (It doesn’t help when scientists occasionally have to retract their own work.)

Humans have always held some wrongheaded beliefs that were later subject to correction by reason and evidence. But we have reached a watershed moment, when the enterprise of basing our beliefs on fact rather than intuition is truly in peril.

It’s not just garden-variety ignorance that periodically appears in public-opinion polls that makes us cringe or laugh. A 2009 survey by the California Academy of Sciences found that only 53 percent of American adults knew how long it takes for Earth to revolve around the sun. Only 59 percent knew that the earliest humans did not live at the same time as the dinosaurs.

The entire article is here.

Friday, June 12, 2015

Confirmation Bias and the Limits of Human Knowledge

By Peter Wehner
Commentary Magazine
Originally published May 27, 2015

Here is an excerpt:

Confirmation bias is something we can easily identify in others but find very difficult to detect in ourselves. (If you finish this piece thinking only of the blindness of those who disagree with you, you are proving my point.) And while some people are far more prone to it than others, it’s something none of us is fully free of. We all hold certain philosophical assumptions, whether we’re fully aware of them or not, and they create a prism through which we interpret events. Often those assumptions are not arrived at through empiricism; they are grounded in moral intuitions. And moral intuitions, while not sub-rational, are shaped by things other than facts and figures. “The heart has its reasons which reason itself does not know,” Pascal wrote. And often the heart is right.

Without such core intuitions, we could not hope to make sense of the world. But these intuitions do not stay broad and implicit: we use them to make concrete judgments in life. The consequences of those judgments offer real-world tests of our assumptions, and if we refuse to learn from the results then we have no hope of improving our judgments in the future.

The entire article is here.