Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, September 15, 2021

Why Is It So Hard to Be Rational?

Joshua Rothman
The New Yorker
Originally published 16 Aug 21

Here is an excerpt:

Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge. Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps. The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes, an eighteenth-century mathematician and minister. So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.

There are many ways to explain Bayesian reasoning—doctors learn it one way and statisticians another—but the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preexisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.

Consider the example of a patient who has tested positive for breast cancer—a textbook case used by Pinker and many other rationalists. The stipulated facts are simple. The prevalence of breast cancer in the population of women—the “base rate”—is one per cent. When breast cancer is present, the test detects it ninety per cent of the time. The test also has a false-positive rate of nine per cent: that is, nine per cent of the time it delivers a positive result when it shouldn’t. Now, suppose that a woman tests positive. What are the chances that she has cancer?

When actual doctors answer this question, Pinker reports, many say that the woman has a ninety-per-cent chance of having it. In fact, she has about a nine-per-cent chance. The doctors have the answer wrong because they are putting too much weight on the new information (the test results) and not enough on what they knew before the results came in—the fact that breast cancer is a fairly infrequent occurrence. To see this intuitively, it helps to shuffle the order of your facts, so that the new information doesn’t have pride of place. Start by imagining that we’ve tested a group of a thousand women: ten will have breast cancer, and nine will receive positive test results. Of the nine hundred and ninety women who are cancer-free, eighty-nine will receive false positives. Now you can allow yourself to focus on the one woman who has tested positive. To calculate her chances of getting a true positive, we divide the number of positive tests that actually indicate cancer (nine) by the total number of positive tests (ninety-eight). That gives us about nine per cent.
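The arithmetic in the excerpt can be checked directly. Below is a minimal sketch (not from the article) that computes the posterior probability with Bayes' theorem, using the stipulated numbers: a one-per-cent base rate, ninety-per-cent sensitivity, and a nine-per-cent false-positive rate. The exact result differs slightly from the article's 9/98 because the article rounds 89.1 false positives down to 89.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity                   # P(positive and cancer)
    false_pos = (1 - prior) * false_positive_rate    # P(positive and no cancer)
    return true_pos / (true_pos + false_pos)

# Stipulated facts from the example: base rate 1%, sensitivity 90%,
# false-positive rate 9%.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(f"{p:.3f}")  # prints 0.092, i.e. about a nine-per-cent chance
```

The same reshuffling trick the excerpt uses (counting women rather than starting from the test result) corresponds to the two products in the denominator: 9 true positives out of roughly 98 total positives per thousand women.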

Thursday, June 18, 2020

Measuring Information Preferences

E. H. Ho, D. Hagmann, & G. Loewenstein
Management Science
Published online: 13 Mar 2020

Abstract

Advances in medical testing and widespread access to the internet have made it easier than ever to obtain information. Yet, when it comes to some of the most important decisions in life, people often choose to remain ignorant for a variety of psychological and economic reasons. We design and validate an information preferences scale to measure an individual’s desire to obtain or avoid information that may be unpleasant but could improve future decisions. The scale measures information preferences in three domains that are psychologically and materially consequential: consumer finance, personal characteristics, and health. In three studies incorporating responses from over 2,300 individuals, we present tests of the scale’s reliability and validity. We show that the scale predicts a real decision to obtain (or avoid) information in each of the domains as well as decisions from out-of-sample, unrelated domains. Across settings, many respondents prefer to remain in a state of active ignorance even when information is freely available. Moreover, we find that information preferences are a stable trait but that an individual’s preference for information can differ across domains.

General Discussion

Making good decisions is often contingent on obtaining information, even when that information is uncertain and has the potential to produce unhappiness. Substantial empirical evidence suggests that people are often ready to make worse decisions in the service of avoiding potentially painful information. We propose that this tendency to avoid information is a trait that is separate from those measured previously, and developed a scale to measure it. The scale asks respondents to imagine how they would respond to a variety of hypothetical decisions involving information acquisition/avoidance. The predictive validity of the IPS appears to be largely driven by its domain items, and although it incorporates domain-specific subscales, it appears to be sufficiently universal to capture preferences for information in a broad range of domains.

The research is here.

We already knew, to some extent, that people sometimes avoid information. This matters in psychotherapy, where avoidance promotes confirmatory hypothesis testing, which in turn enhances overconfidence. We need to help people embrace information even when it is inconsistent with their worldview.

Wednesday, January 15, 2020

How should we balance morality and the law?

Peter Koch
BCM Blogs
Originally posted 20 Dec 19

I was recently discussing a clinical case with medical students and physicians that involved balancing murky ethical issues and relevant laws. One participant leaned back and said: “Well, if we know the laws, then that’s the end of the story!”

The laws were clear about what ought to (legally) be done, but following the laws in this case would likely produce a bad outcome. We ended up divided about how to proceed with the case, but this discussion raised a bigger question: Exactly how much should we weigh the law in moral deliberations?

The basic distinction between the legal and moral is easy enough to identify. Most people agree that what is legal is not necessarily moral and what is immoral should not necessarily be illegal.

Slavery in the U.S. is commonly used as an example. “Of course,” a good modern citizen will say, “slavery was wrong even when it was legal.” The passage of the 13th Amendment did not make slavery morally wrong; it was wrong already, and the legal structures finally caught up to the moral structures.

There are plenty of acts that are immoral but that should not be illegal. For example, perhaps it is immoral to gossip about your friend’s personal life, but most would agree that this sort of gossip should not be outlawed. The basic distinction between the legal and the moral appears to be simple enough.

Things get trickier, though, when we press more deeply into the matter.

The blog post is here.