Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Statistics.

Wednesday, September 15, 2021

Why Is It So Hard to Be Rational?

Joshua Rothman
The New Yorker
Originally published 16 Aug 21

Here is an excerpt:

Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge. Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps. The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes, an eighteenth-century mathematician and minister. So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.

There are many ways to explain Bayesian reasoning—doctors learn it one way and statisticians another—but the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preexisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.
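The update rule described here can be sketched in a few lines of code; the numbers below are hypothetical, chosen only to show a prior being nudged rather than replaced.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior probability of a hypothesis after seeing evidence.

    The new information modifies the prior to a degree that depends on how
    much more likely the evidence is under the hypothesis than under its
    negation -- it does not replace the prior wholesale.
    """
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Hypothetical illustration: a prior of 0.30, and evidence that is twice as
# likely if the hypothesis is true (0.8) as if it is false (0.4).
posterior = bayes_update(0.30, 0.8, 0.4)
print(round(posterior, 3))  # the prior shifts upward, but not to certainty
```
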

Consider the example of a patient who has tested positive for breast cancer—a textbook case used by Pinker and many other rationalists. The stipulated facts are simple. The prevalence of breast cancer in the population of women—the “base rate”—is one per cent. When breast cancer is present, the test detects it ninety per cent of the time. The test also has a false-positive rate of nine per cent: that is, nine per cent of the time it delivers a positive result when it shouldn’t. Now, suppose that a woman tests positive. What are the chances that she has cancer?

When actual doctors answer this question, Pinker reports, many say that the woman has a ninety-per-cent chance of having it. In fact, she has about a nine-per-cent chance. The doctors have the answer wrong because they are putting too much weight on the new information (the test results) and not enough on what they knew before the results came in—the fact that breast cancer is a fairly infrequent occurrence. To see this intuitively, it helps to shuffle the order of your facts, so that the new information doesn’t have pride of place. Start by imagining that we’ve tested a group of a thousand women: ten will have breast cancer, and nine will receive positive test results. Of the nine hundred and ninety women who are cancer-free, eighty-nine will receive false positives. Now you can allow yourself to focus on the one woman who has tested positive. To calculate her chances of getting a true positive, we divide the number of positive tests that actually indicate cancer (nine) by the total number of positive tests (ninety-eight). That gives us about nine per cent.
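The counting argument above translates directly into code, using the same stipulated numbers; the tiny discrepancy with the excerpt's round figures comes from carrying 89.1 rather than 89 false positives.

```python
# Natural-frequency version of the textbook breast-cancer calculation.
n_women = 1000
base_rate = 0.01           # 1% prevalence (the "base rate")
sensitivity = 0.90         # the test detects cancer 90% of the time
false_positive_rate = 0.09 # 9% of cancer-free women test positive anyway

with_cancer = n_women * base_rate                                 # 10 women
true_positives = with_cancer * sensitivity                        # 9
false_positives = (n_women - with_cancer) * false_positive_rate   # 89.1

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(round(p_cancer_given_positive, 3))  # about nine per cent, not ninety
```
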

Wednesday, August 18, 2021

The Shape of Blame: How statistical norms impact judgments of blame and praise

Bostyn, D. H., & Knobe, J. (2020, April 24). 
https://doi.org/10.31234/osf.io/2hca8

Abstract

For many types of behaviors, whether a specific instance of that behavior is blameworthy or praiseworthy depends on how much of the behavior is done or how people go about doing it. For instance, for a behavior such as “replying quickly to emails”, whether a specific reply is blameworthy or praiseworthy will depend on the timeliness of that reply. Such behaviors lie on a continuum in which part of the continuum is praiseworthy (replying quickly) and another part of the continuum is blameworthy (replying late). As praise shifts towards blame along such behavioral continua, the resulting blame-praise curve must have a specific shape. A number of questions therefore arise. What determines the shape of that curve? And what determines “the neutral point”, i.e., the point along a behavioral continuum at which people neither blame nor praise? Seven studies explore these issues, focusing specifically on the impact of statistical information, and provide evidence for a hypothesis we call the “asymmetric frequency hypothesis.”

From the Discussion

Asymmetric frequency and moral cognition

The results obtained here appear to support the asymmetric frequency hypothesis. So far, we have summarized this hypothesis as “People tend to perceive frequent behaviors as not blameworthy.” But how exactly is this hypothesis best understood?

Importantly, the asymmetric frequency effect does not imply that whenever a behavior becomes more frequent, the associated moral judgment will shift towards the neutral. Behaviors that are considered to be praiseworthy do not appear to become more neutral simply because they become more frequent. The effect of frequency only appears to occur when a behavior is blameworthy, which is why we dubbed it an asymmetric effect.

An enlightening historical example in this regard is perhaps the “gay revolution” (Faderman, 2015). As knowledge of the rate of homosexuality has spread across society and people have become more familiar with homosexuality within their own communities, moral norms surrounding homosexuality have shifted from hostility to increasing acceptance (Gallup, 2019). Crucially, however, those who already lauded others for having a loving homosexual relationship did not shift their judgment towards neutral indifference over the same time period. While frequency mitigates blameworthiness, it does not cause a general shift towards neutrality. Even when everyone does the right thing, it does not lose its moral shine.

Wednesday, September 25, 2019

Suicide rates climbing, especially in rural America

Misti Crane
Ohio State News
Originally published September 6, 2019

Suicide is becoming more common in America, an increase most pronounced in rural areas, new research has found.

The study, which appears online today (Sept. 6, 2019) in the journal JAMA Network Open, also highlights a cluster of factors, including lack of insurance and the prevalence of gun shops, that are associated with high suicide rates.

Researchers at The Ohio State University evaluated national suicide data from 1999 to 2016, and provided a county-by-county national picture of the suicide toll among adults. Suicide rates jumped 41 percent, from a median of 15 per 100,000 county residents in the first part of the study to 21.2 per 100,000 in the last three years of the analysis. Suicide rates were highest in less-populous counties and in areas where people have lower incomes and fewer resources. From 2014 through 2016, suicide rates were 17.6 per 100,000 in large metropolitan counties compared with 22 per 100,000 in rural counties.

In urban areas, counties with more gun shops tended to have higher suicide rates. Counties with the highest suicide rates were mostly in Western states, including Colorado, New Mexico, Utah and Wyoming; in Appalachian states including Kentucky, Virginia and West Virginia; and in the Ozarks, including Arkansas and Missouri.

The info is here.

Wednesday, February 27, 2019

How People Judge What Is Reasonable

Kevin P. Tobia
Alabama Law Review, Vol. 70, 293-359 (2018)

Abstract

A classic debate concerns whether reasonableness should be understood statistically (e.g., reasonableness is what is common) or prescriptively (e.g., reasonableness is what is good). This Article elaborates and defends a third possibility. Reasonableness is a partly statistical and partly prescriptive “hybrid,” reflecting both statistical and prescriptive considerations. Experiments reveal that people apply reasonableness as a hybrid concept, and the Article argues that a hybrid account offers the best general theory of reasonableness.

First, the Article investigates how ordinary people judge what is reasonable. Reasonableness sits at the core of countless legal standards, yet little work has investigated how ordinary people (i.e., potential jurors) actually make reasonableness judgments. Experiments reveal that judgments of reasonableness are systematically intermediate between judgments of the relevant average and ideal across numerous legal domains. For example, participants’ mean judgment of the legally reasonable number of weeks’ delay before a criminal trial (ten weeks) falls between the judged average (seventeen weeks) and ideal (seven weeks). So too for the reasonable number of days to accept a contract offer, the reasonable rate of attorneys’ fees, the reasonable loan interest rate, and the reasonable annual number of loud events on a football field in a residential neighborhood. Judgment of reasonableness is better predicted by both statistical and prescriptive factors than by either factor alone.
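The intermediacy finding can be illustrated with a toy model, assuming reasonableness is a simple weighted mix of the judged average (statistical) and the judged ideal (prescriptive). The weight below is hypothetical, chosen to match the trial-delay numbers; it is not an estimate from the Article.

```python
def hybrid_reasonable(average, ideal, w=0.3):
    """Toy hybrid model: a convex combination of the judged average and
    the judged ideal. The weight w = 0.3 is illustrative only."""
    return w * average + (1 - w) * ideal

# The trial-delay example from the abstract: judged average 17 weeks,
# judged ideal 7 weeks. The toy model lands near the reported mean
# reasonableness judgment of 10 weeks.
print(round(hybrid_reasonable(17, 7), 2))
```
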

This Article uses this experimental discovery to develop a normative view of reasonableness. It elaborates an account of reasonableness as a hybrid standard, arguing that this view offers the best general theory of reasonableness, one that applies correctly across multiple legal domains. Moreover, this hybrid feature is the historical essence of legal reasonableness: the original use of the “reasonable person” and the “man on the Clapham omnibus” aimed to reflect both statistical and prescriptive considerations. Empirically, reasonableness is a hybrid judgment. And normatively, reasonableness should be applied as a hybrid standard.

The paper is here.

Friday, November 9, 2018

Why Do Christian Women Continue to Have Abortions?

Marvin G. Thompson
The Christian Post
Originally posted November 3, 2018

Here is an excerpt:

According to Abortion Statistics compiled by the Antiochian Orthodox Christian Archdiocese of North America, “Women identifying themselves as Protestants obtain 37.4% of all abortions in the U.S.; Catholic women account for 31.3%, Jewish women account for 1.3%, and women with no religious affiliation obtain 23.7% of all abortions. 18% of all abortions are performed on women who identify themselves as ‘Born-again/Evangelical.’”

It is significant to note that only 23.7% of women obtaining abortions are not religious. That means 76.3% of all abortions are obtained by "God-fearing" women – with 68.7% identified as Christian women; and 18% of all abortions are obtained by "born-again/evangelical" women.
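The percentages quoted above can be checked directly (figures exactly as given in the excerpt; the 18% born-again/evangelical figure is a separate self-identification and is not added to the others):

```python
# Figures as quoted in the excerpt (per cent of all abortions).
protestant, catholic, jewish, unaffiliated = 37.4, 31.3, 1.3, 23.7

christian = protestant + catholic   # the excerpt's 68.7% identified as Christian
religious = 100 - unaffiliated      # the excerpt's 76.3% with some affiliation
print(round(christian, 1), round(religious, 1))
```
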

The official stated position of the Church does not seem to translate into practice by church-going Christians. That fact was recently borne out in a study commissioned by Care Net showing that 4 in 10 women having an abortion are churchgoers. The study surveyed 1,038 women who had an abortion and found that "70 percent claim a Christian religious preference, and 43 percent report attending church monthly or more at the time of an abortion."

The info is here.

Wednesday, April 25, 2018

The Peter Principle: Promotions and Declining Productivity

Edward P. Lazear
Hoover Institution and Graduate School of Business
Revision 10/12/00

Abstract

Many have observed that individuals perform worse after having received a promotion. The most famous statement of the idea is the Peter Principle, which states that people are promoted to their level of incompetence. There are a number of possible explanations; two are explored. The most traditional is that the prospect of promotion provides incentives which vanish after the promotion has been granted; thus, tenured faculty slack off. Another is that output, as a statistical matter, is expected to fall. Being promoted is evidence that a standard has been met, and regression to the mean implies that future productivity will decline on average. Firms optimally account for the regression bias in making promotion decisions, but the effect is never eliminated. Both explanations are analyzed. The statistical point always holds; the slacking-off story holds only under certain compensation structures.
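The regression-to-the-mean point can be seen in a small simulation. The setup is an editor's sketch, not Lazear's model: each worker's observed output is true ability plus noise, promotion goes to those whose observed output clears a standard, and output is then measured again with fresh noise.

```python
import random

random.seed(0)

def simulate(n=100_000, standard=1.0):
    """Promote workers whose observed output (ability + noise) clears a
    standard, then measure their output again in the next period.
    Returns (mean output that earned promotion, mean output afterward)."""
    before, after = [], []
    for _ in range(n):
        ability = random.gauss(0, 1)
        pre = ability + random.gauss(0, 1)       # output observed at promotion
        if pre > standard:                        # the standard has been met
            post = ability + random.gauss(0, 1)  # same ability, fresh noise
            before.append(pre)
            after.append(post)
    return sum(before) / len(before), sum(after) / len(after)

pre_mean, post_mean = simulate()
# Promoted workers' later output regresses toward their (above-average)
# true ability, so it falls short of the output that earned the promotion.
print(pre_mean > post_mean)
```
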

The paper is here.

Monday, September 11, 2017

Do’s and Don’ts for Media Reporting on Suicide

David Susman
The Mental Health and Wellness Blog
Originally published June 15, 2017

Here is an excerpt:

I was reminded recently of the excellent resources which provide guidelines for the responsible reporting and discussion of suicide in the media. In the guideline document, “Recommendations for Reporting on Suicide,” several useful and concrete guidelines are offered for how to talk about suicide in the media. Most of the material in this article comes from this source. Let’s first review and summarize the list of do’s and don’ts.

1) Don’t use big or sensationalistic headlines with specific details about the method of suicide. Do inform without sensationalizing the suicide and without providing details in the headline.

2) Don’t include photos or videos of the location or method of death, grieving family or friends, funerals. Do use a school or work photo; include suicide hotline numbers or local crisis contacts.

3) Don’t describe suicide as “an epidemic,” “skyrocketing,” or other exaggerated terms. Do use accurate words such as “higher rates” or “rising.”

4) Don’t describe a suicide as “without warning” or “inexplicable.” Do convey that people exhibit warning signs of suicide and include a list of common warning signs and ways to intervene when someone is suicidal (see section below).

5) Don’t say “she left a suicide note saying…” Do say “a note from the deceased was found.”

6) Don’t investigate and report on suicide as though it is a crime. Do report on suicide as a public health issue.

7) Don’t quote police or first responders about the causes of suicide. Do seek advice and information from suicide prevention experts.

8) Don’t refer to suicide as “successful,” “unsuccessful,” or a “failed attempt.” Avoid the use of “committed suicide,” which is an antiquated reference to when suicidal acts or attempts were punished as crimes. Do say “died by suicide,” “completed” or “killed him/herself.”

The article is here.

Wednesday, January 11, 2017

The Empathy Trap

By Peter Singer
The Project Syndicate
Originally published December 12, 2016

Here is an excerpt:

“One death is tragedy; a million is a statistic.” If empathy makes us too favorable to individuals, large numbers numb the feelings we ought to have. The Oregon-based nonprofit Decision Research has recently established a website, ArithmeticofCompassion.org, aimed at enhancing our ability to communicate information about large-scale problems without giving rise to “numerical numbness.” In an age in which vivid personal stories go viral and influence public policy, it’s hard to think of anything more important than helping everyone to see the larger picture.

To be against empathy is not to be against compassion. In one of the most interesting sections of Against Empathy, Bloom describes how he learned about differences between empathy and compassion from Matthieu Ricard, the Buddhist monk sometimes described as “the happiest man on earth.” When the neuroscientist Tania Singer (no relation to me) asked Ricard to engage in “compassion meditation” while his brain was being scanned, she was surprised to see no activity in the areas of his brain normally active when people empathize with the pain of others. Ricard could, on request, empathize with others’ pain, but he found it unpleasant and draining; by contrast, he described compassion meditation as “a warm positive state associated with a strong pro-social motivation.”

The article is here.

Tuesday, November 1, 2016

The problem with p-values

David Colquhoun
aeon.co
Originally published October 11, 2016

Here is an excerpt:

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.
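That definition can be made concrete with a toy example, assuming a fair-coin null hypothesis; the scenario and numbers are hypothetical.

```python
from math import comb

def two_sided_p(heads, n):
    """Exact p-value under a fair-coin null: the probability of a result
    at least as extreme as the one observed, in either direction."""
    observed_dev = abs(heads - n / 2)
    return sum(comb(n, k) for k in range(n + 1)
               if abs(k - n / 2) >= observed_dev) / 2 ** n

# Hypothetical data: 60 heads in 100 tosses of a coin suspected of bias.
p = two_sided_p(60, 100)
print(round(p, 4))  # small, but is it small enough to declare a discovery?
```
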

Monday, September 5, 2016

Are There Still Too Few Suicides to Generate Public Outrage?

Lytle MC, Silenzio VB, Caine ED.
JAMA Psychiatry. Published online August 17, 2016.
doi:10.1001/jamapsychiatry.2016.1736.

Suicide is the 10th leading cause of death in the United States, with the overall rate increasing 28.2% since 1999, driven by a 35.3% increase in suicides among persons 35 to 64 years of age. Suicides surpassed road traffic deaths in 2009, and the 42,773 suicides reported were more than double the 16,324 homicides in 2014. When coupled with deaths from other deliberate behaviors, research suggests that the mortality from self-directed injury exceeds 70,000 lives, making it the eighth leading cause of death, while the death rates of cardiovascular diseases (CVDs), cancers, and human immunodeficiency virus (HIV)/AIDS continue to decrease.

The entire piece is here.

Tuesday, April 12, 2016

Most People Think Watching Porn Is Morally Wrong

By Emma Green
The Atlantic
Originally posted March 6, 2016

Here is an excerpt:

Recent debates about the porn industry haven't seemed to take this ambivalence into account. A Duke University freshman starred in hardcore porn videos and took to the blogs to defend her right to do so. Editorials about Britain's new Internet porn filter have focused on the government's right to regulate the web. Both of these are compelling and understandable points of concern, but they hinge on this issue of rights: The right to voluntarily work in the erotica industry without harassment, the right to enjoy sex work, the right to watch porn without interrogation from your government.

These are all valid issues. But even if 18-year-olds are free to make sex tapes and middle-aged men are free to watch them without Big Brother's scrutiny, there is a lingering moral question: Is watching porn a good thing to do?

The article is here.

Note: Some of the statistics in this article are fascinating.

Thursday, June 18, 2015

Editorial retraction

By Marcia McNutt
Science Magazine
Originally posted on May 28, 2015

Science, with the concurrence of author Donald P. Green, is retracting the 12 December 2014 Report “When contact changes minds: An experiment on transmission of support for gay equality” by LaCour and Green.

The reasons for retracting the paper are as follows: (i) Survey incentives were misrepresented. To encourage participation in the survey, respondents were claimed to have been given cash payments to enroll, to refer family and friends, and to complete multiple surveys. In correspondence received from Michael J. LaCour’s attorney, he confirmed that no such payments were made. (ii) The statement on sponsorship was false. In the Report, LaCour acknowledged funding from the Williams Institute, the Ford Foundation, and the Evelyn and Walter Haas Jr. Fund. Per correspondence from LaCour’s attorney, this statement was not true.

In addition to these known problems, independent researchers have noted certain statistical irregularities in the responses (2). LaCour has not produced the original survey data from which someone else could independently confirm the validity of the reported findings.

Michael J. LaCour does not agree to this Retraction.

Published online 28 May 2015

10.1126/science.aac6638

Here is the article

When contact changes minds: An experiment on transmission of support for gay equality
Michael J. LaCour and Donald P. Green
Science 12 December 2014: 1366-1369.

Wednesday, June 10, 2015

I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How.

By John Bohannon
io9
Originally published May 27, 2015

Here is an excerpt:

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.

Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.

Whenever you hear that phrase, it means that some result has a small p value. The letter p seems to have totemic power, but it’s just a way to gauge the signal-to-noise ratio in the data. The conventional cutoff for being “significant” is 0.05, which means that there is just a 5 percent chance that your result is a random fluctuation. The more lottery tickets, the better your chances of getting a false positive. So how many tickets do you need to buy?
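The lottery-ticket arithmetic is easy to reproduce, assuming (for simplicity) that the 18 measurements are independent:

```python
# If each of 18 independent measurements has a 5% chance of producing a
# false positive, the chance that at least one "ticket pays off" is:
alpha, tickets = 0.05, 18
p_at_least_one = 1 - (1 - alpha) ** tickets
print(round(p_at_least_one, 2))  # roughly 0.60 -- better than a coin flip
```

In practice the measurements are correlated (weight and cholesterol move together, for instance), so the true figure differs, but the direction of the effect is the same: more tickets, more false positives.
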

The whole article on the scam research that fooled millions is here.

Friday, December 26, 2014

Science, Trust And Psychology In Crisis

By Tania Lombrozo
NPR
Originally published June 2, 2014

Here is an excerpt:

Researchers who engage in p-diligence are those who engage in practices — such as additional analyses or even experiments — designed to evaluate the robustness of their results, whether or not these practices make it into print. They might, for example, analyze their data with different exclusion criteria — not to choose the criterion that makes some effect most dramatic but to make sure that any claims in the paper don't depend on this potentially arbitrary decision. They might analyze the data using two statistical methods — not to choose the single one that yields a significant result but to make sure that they both do. They might build in checks for various types of human errors and analyze uninteresting aspects of the data to make sure there's nothing weird going on, like a bug in their code.

If these additional data or analyses reveal anything problematic, p-diligent researchers will temper their claims appropriately, or pursue further investigation as needed. And they'll engage in these practices with an eye toward avoiding potential pitfalls, such as confirmation bias and the seductions of p-hacking, that could lead to systematic errors. In other words, they'll "do their p-diligence" to make sure that they — and others — should invest in their claims.
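The practice described here, checking that a result survives different exclusion criteria and different test statistics, can be sketched with made-up data and a permutation test. This is an editor's illustration, not any particular study's analysis.

```python
import random

random.seed(1)

def perm_p(a, b, stat, n_perm=2000):
    """One-sided permutation p-value for the difference stat(a) - stat(b)."""
    observed = stat(a) - stat(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if stat(pooled[:len(a)]) - stat(pooled[len(a):]) >= observed:
            count += 1
    return count / n_perm

mean = lambda xs: sum(xs) / len(xs)
median = lambda xs: sorted(xs)[len(xs) // 2]   # (upper) median

# Hypothetical reaction-time data (ms) with one extreme value.
control = [310, 295, 305, 300, 315, 290, 980]  # 980 ms: exclude or keep?
treated = [260, 270, 255, 265, 275, 250, 268]

# p-diligence: the claim should hold under both exclusion criteria and
# under both test statistics, not just the single most favorable combination.
results = {}
for label, ctrl in [("all data", control), ("outlier excluded", control[:-1])]:
    for name, stat in [("mean", mean), ("median", median)]:
        results[(label, name)] = perm_p(list(ctrl), list(treated), stat)
        print(f"{label:17s} {name:6s} p = {results[(label, name)]:.3f}")
```
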

P-hacking and p-diligence have something in common: Both involve practices that aren't fully reported in publication. As a consequence, they widen the gap. But let's face it: While the gap can (and sometimes should) be narrowed, it cannot be closed.

The entire article is here.

Thanks to Ed Zuckerman for this lead.

Saturday, December 13, 2014

If Everything Is Getting Better, Why Do We Remain So Pessimistic?

By the Cato Institute

Featuring Steven Pinker, Johnstone Family Professor of Psychology, Harvard University; with comments by Brink Lindsey, Vice President for Research, Cato Institute; and Charles Kenny, Senior Fellow, Center for Global Development

Originally posted November 19, 2014

Evidence from academic institutions and international organizations shows dramatic improvements in human well-being. These improvements are especially striking in the developing world. Unfortunately, there is often a wide gap between reality and public perceptions, including that of many policymakers, scholars in unrelated fields, and intelligent lay persons. To make matters worse, the media emphasizes bad news, while ignoring many positive long-term trends. Please join us for a discussion of psychological, physiological, cultural, and other social reasons for the persistence of pessimism in the age of growing abundance.

The video and audio can be seen or downloaded here.

Editor's note: This video is important to psychologists because it shows cultural trends and beliefs that may be perpetuated by media hype.  This panel also highlights cognitive distortions, well-being, and positive macro trends.  If you can, watch the first presenter, Dr. Steven Pinker.  If nothing else, you may feel a little better after watching the video.

Saturday, September 27, 2014

New Record Highs in Moral Acceptability

Premarital sex, embryonic stem cell research, euthanasia growing in acceptance

by Rebecca Riffkin
Gallup Politics
Originally posted on May 30, 2014

The American public has become more tolerant on a number of moral issues, including premarital sex, embryonic stem cell research, and euthanasia. On a list of 19 major moral issues of the day, Americans express levels of moral acceptance that are as high or higher than in the past on 12 of them, a group that also encompasses social mores such as polygamy, having a child out of wedlock, and divorce.

Friday, July 4, 2014

18 Things White People Should Know/Do Before Discussing Racism

By Tiffanie Drayton and Joshua McCarther
www.thefrisky.com
Originally posted June 12, 2014

Discussions about racism should be all-inclusive and open to people of all skin colors. However, to put it simply, sometimes White people lack the experience or education that can provide a rudimentary foundation from which a productive conversation can be built. This is not necessarily the fault of the individual, but pervasive myths and misinformation have dominated mainstream racial discourse, and oftentimes the important issues are never highlighted. For that reason, The Frisky has decided to publish this handy list that has some basic rules and information to better prepare anyone for a worthwhile discussion about racism.

1. It is uncomfortable to talk about racism. It is more uncomfortable to live it.

2. “Colorblindness” is a cop-out. The statements “but I don’t see color” or “I never care about color” do not help to build a case against systemic racism. Try being the only White person in an environment. You will notice color then.

The rest of the article is here.

Sunday, June 22, 2014

Mental Suffering and the DSM-5

By Stijn Vanheule
DxSummit.org
Originally published June 3, 2014

In his writings on the topic of diagnosis, the French philosopher and physician Georges Canguilhem makes a crucial distinction between pathology and abnormality, thus paving the way for the studies of his student Michel Foucault on the topics of psychiatric power and biopolitics. In Canguilhem’s view, decision making about normality and abnormality is generally based on two factors. One starts from the observation that there is variability in the ways human beings function: individuals present with a variety of behaviours just as their mental life is characterized by a variety of beliefs and experiences, of which some are more prevalent than others. Then, a judgment is made about (ab-)normality; this tends to be based on a norm or standard against which all behaviours are evaluated and considered as deviant or not.

At this level, two possibilities open: a judgement is made based on either psychosocial criteria or statistical norms.

The entire article is here.

Thursday, May 15, 2014

The Reformation: Can Social Scientists Save Themselves?

By Jerry Adler
Pacific Standard: The Science of Society
Originally posted April 28, 2014

Here are two excerpts from a long, yet exceptional, article on research in the social sciences:

OUTRIGHT FAKERY IS CLEARLY more common in psychology and other sciences than we’d like to believe. But it may not be the biggest threat to their credibility. As the journalist Michael Kinsley once said of wrongdoing in Washington, so too in the lab: “The scandal is what’s legal.” The kind of manipulation that went into the “When I’m Sixty-Four” paper, for instance, is “nearly universally common,” Simonsohn says. It is called “p-hacking,” or, more colorfully, “torturing the data until it confesses.”

P is a central concept in statistics: It’s the mathematical factor that mediates between what happens in the laboratory and what happens in the real world. The most common form of statistical analysis proceeds by a kind of backwards logic: Technically, the researcher is trying to disprove the “null hypothesis,” the assumption that the condition under investigation actually makes no difference.

(cut)

WHILE IT IS POSSIBLE to detect suspicious patterns in scientific data from a distance, the surest way to find out whether a study’s findings are sound is to do the study all over again. The idea that experiments should be replicable, producing the same results when run under the same conditions, was identified as a defining feature of science by Roger Bacon back in the 13th century. But the replication of previously published results has rarely been a high priority for scientists, who tend to regard it as grunt work. Journal editors yawn at replications. Honors and advancement in science go to those who publish new, startling results, not to those who confirm—or disconfirm—old ones.

The entire article is here.

Wednesday, April 16, 2014

Statistical Flaw Punctuates Brain Research in Elite Journals

By Gary Stix
Scientific American
Originally published March 27, 2014

Here is an excerpt:

That is the message of a new analysis in Nature Neuroscience that shows that more than half of 314 articles on neuroscience in elite journals during an 18-month period failed to take adequate measures to ensure that statistically significant study results were not, in fact, erroneous. Consequently, at least some of the results from papers in journals like Nature, Science, Nature Neuroscience and Cell were likely to be false positives, even after going through the arduous peer-review gauntlet.
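The full article discusses a specific flaw; as a general illustration of one common safeguard against false positives when many tests are run, the Bonferroni correction divides the significance threshold by the number of comparisons. This is an editor's sketch, not the analysis the article describes.

```python
def bonferroni(p_values, alpha=0.05):
    """Return which p-values survive a Bonferroni-corrected threshold:
    alpha divided by the number of comparisons made."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Hypothetical p-values from one study making four comparisons.
p_values = [0.003, 0.04, 0.012, 0.20]
print(bonferroni(p_values))  # threshold 0.05 / 4 = 0.0125
```
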

The entire article is here.