Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Research Methods.

Monday, January 11, 2016

A Fight for the Soul of Science

By Natalie Wolchover
Quanta Magazine
Originally published December 16, 2015

Here are two excerpts:

Critics accuse string theory and the multiverse hypothesis, as well as cosmic inflation — the leading theory of how the universe began — of falling on the wrong side of Popper’s line of demarcation. To borrow the title of the Columbia University physicist Peter Woit’s 2006 book on string theory, these ideas are “not even wrong,” say critics. In their editorial, Ellis and Silk invoked the spirit of Popper: “A theory must be falsifiable to be scientific.”

(cut)

Nowadays, as several philosophers at the workshop said, Popperian falsificationism has been supplanted by Bayesian confirmation theory, or Bayesianism, a modern framework based on the 18th-century probability theory of the English statistician and minister Thomas Bayes. Bayesianism allows for the fact that modern scientific theories typically make claims far beyond what can be directly observed — no one has ever seen an atom — and so today’s theories often resist a falsified-unfalsified dichotomy. Instead, trust in a theory often falls somewhere along a continuum, sliding up or down between 0 and 100 percent as new information becomes available. “The Bayesian framework is much more flexible” than Popper’s theory, said Stephan Hartmann, a Bayesian philosopher at LMU. “It also connects nicely to the psychology of reasoning.”
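Hartmann's point can be made concrete. Below is a minimal sketch of Bayesian updating in Python; the prior and the evidence stream are purely hypothetical numbers chosen for illustration, not values from the article. Credence in a theory starts at 50 percent and slides up or down as each piece of evidence arrives, without ever snapping to a falsified or confirmed verdict.

```python
# A toy Bayesian update: credence in a hypothesis H given a stream of evidence.
# All numbers are illustrative assumptions, not values from the article.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1 - prior)
    return numerator / marginal

credence = 0.50  # start agnostic about the theory

# Each observation: (P(E | H), P(E | not-H)), i.e., how expected this
# piece of evidence is if the theory is true versus if it is false.
observations = [(0.9, 0.4), (0.8, 0.5), (0.3, 0.6)]

for p_e_h, p_e_not_h in observations:
    credence = bayes_update(credence, p_e_h, p_e_not_h)
    print(f"credence in the theory: {credence:.1%}")
```

With these made-up numbers the credence climbs to about 69 percent, then 78 percent, then falls back to about 64 percent: the continuum the excerpt describes, rather than a falsified/unfalsified dichotomy.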

The entire article is here.

Wednesday, June 10, 2015

I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How.

By John Bohannon
io9
Originally published May 27, 2015

Here is an excerpt:

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.

Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.

Whenever you hear that phrase, it means that some result has a small p value. The letter p seems to have totemic power, but it’s just a way to gauge the signal-to-noise ratio in the data. The conventional cutoff for being “significant” is 0.05, which means that, if there were no real effect, a result at least this extreme would turn up by chance only 5 percent of the time. The more lottery tickets, the better your chances of getting a false positive. So how many tickets do you need to buy?
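The lottery-ticket arithmetic is easy to check. If the 18 measurements were independent, the chance that at least one clears the 0.05 cutoff by luck alone is 1 - 0.95^18, roughly 60 percent. A short simulation makes the same point; the data here are hypothetical pure noise in roughly the study's shape (18 outcome measures, two small groups), with no real effect anywhere.

```python
# Simulate the "lottery ticket" effect: pure-noise data, 18 outcome measures,
# two small groups. How often does at least one t-test come up p < 0.05?
# The shape loosely mirrors the study (18 measures, ~15 subjects split in two);
# the data are random noise, so every "hit" is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_measures, n_per_group = 10_000, 18, 7

hits = 0
for _ in range(n_trials):
    group_a = rng.standard_normal((n_measures, n_per_group))
    group_b = rng.standard_normal((n_measures, n_per_group))
    pvalues = ttest_ind(group_a, group_b, axis=1).pvalue
    hits += (pvalues < 0.05).any()

print(f"runs with at least one 'significant' result: {hits / n_trials:.0%}")
# Analytic check for independent tests: 1 - 0.95**18 ≈ 0.60
```

In the actual study the 18 measurements came from the same subjects and were correlated, so the exact rate would differ, but the direction is the same: every extra measurement is another ticket.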

The whole article on the scam research that fooled millions is here.

Saturday, June 14, 2014

Psychological Science's Replicability Crisis and What It Means for Science in the Courtroom

By Jason Michael Chin
Psychology, Public Policy, and Law (forthcoming)

Abstract:
In response to what has been termed the “replicability crisis,” great changes are currently under way in how science is conducted and disseminated. Indeed, major journals are changing the way in which they evaluate science. Therefore, a question arises over how such change impacts law’s treatment of scientific evidence. The present standard for the admissibility of scientific evidence in federal courts asks judges to play the role of gatekeeper, determining if the proffered evidence conforms with several indicia of scientific validity. The alternative legal framework, and one still used by several state courts, requires judges to simply evaluate whether a scientific finding or practice is generally accepted within science.

This Essay suggests that as much as the replicability crisis has highlighted serious issues in the scientific process, it should have similar implications and actionable consequences for legal practitioners and academics. In particular, generally accepted scientific practices have frequently lagged behind prescriptions for best practices, which in turn has affected the way science has been reported and performed. The consequence of this phenomenon is that judicial analysis of scientific evidence will still be impacted by deficient generally accepted practices. The Essay ends with some suggestions to help ensure that legal decisions are influenced by science’s best practices.

Download the essay here.