Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label replication.

Sunday, March 3, 2024

Is Dan Ariely Telling the Truth?

Tom Bartlett
The Chronicle of Higher Education
Originally posted 18 Feb 24

Here is an excerpt:

In August 2021, the blog Data Colada published a post titled “Evidence of Fraud in an Influential Field Experiment About Dishonesty.” Data Colada is run by three researchers — Uri Simonsohn, Leif Nelson, and Joe Simmons — and it serves as a freelance watchdog for the field of behavioral science, which has historically done a poor job of policing itself. The influential field experiment in question was described in a 2012 paper, published in the Proceedings of the National Academy of Sciences, by Ariely and four co-authors. In the study, customers of an insurance company were asked to report how many miles they had driven over a period of time, an answer that might affect their premiums. One set of customers signed an honesty pledge at the top of the form, and another signed at the bottom. The study found that those who signed at the top reported higher mileage totals, suggesting that they were more honest. The authors wrote that a “simple change of the signature location could lead to significant improvements in compliance.” The study was classic Ariely: a slight tweak to a system that yields real-world results.

But did it actually work? In 2020, an attempted replication of the effect found that it did not. In fact, multiple attempts to replicate the 2012 finding all failed (though Ariely points to evidence in a recent, unpublished paper, on which he is a co-author, indicating that the effect might be real). The authors of the attempted replication posted the original data from the 2012 study, which was then scrutinized by a group of anonymous researchers who found that the data, or some of it anyway, had clearly been faked. They passed the data along to the Data Colada team. There were multiple red flags. For instance, the number of miles customers said they’d driven was unrealistically uniform. About the same number of people drove 40,000 miles as drove 500 miles. No actual sampling would look like that — but randomly generated data would. Two different fonts were used in the file, apparently because whoever fudged the numbers wasn’t being careful.
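The "unrealistically uniform" red flag is easy to see with a quick simulation. The sketch below contrasts a plausible right-skewed mileage distribution with uniformly generated numbers; the lognormal parameters and sample size are made-up assumptions for illustration, not the study's actual figures. Only the uniform data puts roughly as many drivers near 40,000 miles as near 500.

```python
import random

random.seed(0)
N = 13_000  # illustrative sample size, not the actual study's

# A plausible real-world mileage distribution: right-skewed, most drivers
# clustered around ~11,000 miles (hypothetical parameters).
realistic = [random.lognormvariate(9.3, 0.5) for _ in range(N)]

# What fabricated data drawn uniformly at random would look like.
uniform = [random.uniform(0, 50_000) for _ in range(N)]

def share_in(data, lo, hi):
    """Fraction of drivers reporting between lo and hi miles."""
    return sum(lo <= x < hi for x in data) / len(data)

for label, data in (("realistic", realistic), ("uniform", uniform)):
    near_500 = share_in(data, 0, 1_000)
    near_40k = share_in(data, 39_500, 40_500)
    print(f"{label}: near 500 mi = {near_500:.3%}, near 40,000 mi = {near_40k:.3%}")
```

In the uniform data the two windows hold about the same share of drivers; in the skewed data, essentially no one reports 500 miles while a small tail reports 40,000 — which is why uniformity across such extremes is a hallmark of generated rather than sampled numbers.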

In short, there is no doubt that the data were faked. The only question is, who did it?

This article discusses an investigation into the research conduct of Dr. Dan Ariely, a well-known behavioral economist at Duke University. The investigation, prompted by concerns about potential data fabrication, concluded that while no evidence of fabricated data was found, Ariely did commit research misconduct by failing to adequately vet findings and maintain proper records.

The article highlights several specific issues identified by the investigation, including inconsistencies in data and a lack of supporting documentation for key findings. It also mentions that Ariely made inaccurate statements about his personal history, such as misrepresenting his age at the time of a childhood accident.

While Ariely maintains that he did not intentionally fabricate data and attributes the errors to negligence and a lack of awareness, the investigation's findings have damaged his reputation and raised questions about the integrity of his research. The article concludes by leaving the reader to ponder whether Ariely's transgressions can be forgiven or if they represent a deeper pattern of dishonesty.

It's important to note that the article presents one perspective on a complex issue and doesn't offer definitive answers. Further research and analysis are necessary to form a complete understanding of the situation.

Friday, January 12, 2024

Out, damned spot: Can the “Macbeth Effect” be replicated?

Earp, B. D., Everett, J. A. C., et al. (2014).
Basic and Applied Social Psychology, 36(1), 91–98.


Comments on an article by Zhong and Liljenquist (see record 2004-22267-003). Zhong and Liljenquist (2006) reported evidence of a “Macbeth Effect” in social psychology: a threat to people's moral purity leads them to seek, literally, to cleanse themselves. In an attempt to build upon these findings, we conducted a series of direct replications of Study 2 from Z&L's seminal report. We used Z&L's original materials and methods, investigated samples that were more representative of the general population, investigated samples from different countries and cultures, and substantially increased the power of our statistical tests. Despite multiple good-faith efforts, however, we were unable to detect a “Macbeth Effect” in any of our experiments. We discuss these findings in the context of recent concerns about replicability in the field of experimental social psychology.

Here is my summary:

In a seminal study published in 2006, Zhong and Liljenquist introduced the concept of the "Macbeth Effect," which suggests that moral transgressions lead to a desire for physical cleansing. This phenomenon was inspired by Shakespeare's play "Macbeth," in which Lady Macbeth's obsession with washing her hands reflects her guilt over her murderous actions.

Building on Zhong and Liljenquist's work, Earp et al. (2014) conducted a series of experiments to replicate the Macbeth Effect. They used various methods, including manipulating participants' moral states through writing tasks and exposing them to reminders of moral cleanliness. However, despite their efforts, they were unable to consistently find evidence for the Macbeth Effect.

The authors' inability to replicate the original findings raises questions about the robustness of the Macbeth Effect. They suggest that more research is needed to understand the conditions under which moral transgressions lead to a desire for physical cleansing. Additionally, they emphasize the importance of conducting replications in psychological research to ensure the reliability of findings.

Saturday, September 9, 2023

Academics Raise More Than $315,000 for Data Bloggers Sued by Harvard Business School Professor Gino

Neil H. Shah & Claire Yuan
The Crimson
Originally published 1 Sept 23

A group of academics has raised more than $315,000 through a crowdfunding campaign to support the legal expenses of the professors behind data investigation blog Data Colada — who are being sued for defamation by Harvard Business School professor Francesca Gino.

Supporters of the three professors — Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons — launched the GoFundMe campaign to raise funds for their legal fees after they were named in a $25 million defamation lawsuit filed by Gino last month.

In a series of four blog posts in June, Data Colada gave a detailed account of alleged research misconduct by Gino across four academic papers. Two of the papers were retracted following the allegations by Data Colada, while another had previously been retracted in September 2021 and a fourth is set to be retracted in September 2023.

Organizers wrote on GoFundMe that the fundraiser “hit 2,000 donors and $250K in less than 2 days” and that Simonsohn, Nelson, and Simmons “are deeply moved and grateful for this incredible show of support.”

Simine Vazire, one of the fundraiser’s organizers, said she was “pleasantly surprised” by the reaction throughout academia in support of Data Colada.

“It’s been really nice to see the consensus among the academic community, which is strikingly different than what I see on LinkedIn and the non-academic community,” she said.

Elisabeth M. Bik — a data manipulation expert who also helped organize the fundraiser — credited the outpouring of financial support to solidarity and concern among scientists.

“People are very concerned about this lawsuit and about the potential silencing effect this could have on people who criticize other people’s papers,” Bik said. “I think a lot of people want to support Data Colada for their legal defenses.”

Andrew T. Miltenberg — one of Gino’s attorneys — wrote in an emailed statement that the lawsuit is “not an indictment on Data Colada’s mission.”

Wednesday, March 1, 2023

Cognitive Control Promotes Either Honesty or Dishonesty, Depending on One's Moral Default

Speer, S. P., Smidts, A., & Boksem, M. A. S. (2021).
The Journal of Neuroscience, 41(42), 8815–8825. 


Cognitive control is crucially involved in making (dis)honest decisions. However, the precise nature of this role has been hotly debated. Is honesty an intuitive response, or is will power needed to override an intuitive inclination to cheat? A reconciliation of these conflicting views proposes that cognitive control enables dishonest participants to be honest, whereas it allows those who are generally honest to cheat. Thus, cognitive control does not promote (dis)honesty per se; it depends on one's moral default. In the present study, we tested this proposal using electroencephalograms in humans (males and females) in combination with an independent localizer (Stroop task) to mitigate the problem of reverse inference. Our analysis revealed that the neural signature evoked by cognitive control demands in the Stroop task can be used to estimate (dis)honest choices in an independent cheating task, providing converging evidence that cognitive control can indeed help honest participants to cheat, whereas it facilitates honesty for cheaters.

Significance Statement

Dishonesty causes enormous economic losses. To target dishonesty with interventions, a rigorous understanding of the underlying cognitive mechanisms is required. A recent study found that cognitive control enables honest participants to cheat, whereas it helps cheaters to be honest. However, it is evident that a single study does not suffice as support for a novel hypothesis. Therefore, we tested the replicability of this finding using a different modality (EEG instead of fMRI) together with an independent localizer task to avoid reverse inference. We find that the same neural signature evoked by cognitive control demands in the localizer task can be used to estimate (dis)honesty in an independent cheating task, establishing converging evidence that the effect of cognitive control indeed depends on a person's moral default.

From the Discussion section

Previous research has deduced the involvement of cognitive control in moral decision-making through relating observed activations to those observed for cognitive control tasks in prior studies (Greene and Paxton, 2009; Abe and Greene, 2014) or with the help of meta-analytic evidence (Speer et al., 2020) from the Neurosynth platform (Yarkoni et al., 2011). This approach, which relies on reverse inference, must be used with caution because any given brain area may be involved in several different cognitive processes, which makes it difficult to conclude that activation observed in a particular brain area represents one specific function (Poldrack, 2006). Here, we extend prior research by providing more rigorous evidence by means of explicitly eliciting cognitive control in a separate localizer task and then demonstrating that this same neural signature can be identified in the Spot-The-Difference task when participants are exposed to the opportunity to cheat. Moreover, using similarity analysis we provide a direct link between the neural signature of cognitive control, as elicited by the Stroop task, and (dis)honesty by showing that time-frequency patterns of cognitive control demands in the Stroop task are indeed similar to those observed when tempted to cheat in the Spot-The-Difference task. These results provide strong evidence that cognitive control processes are recruited when individuals are tempted to cheat.

Tuesday, July 28, 2020

Does encouraging a belief in determinism increase cheating?

Nadelhoffer, T., et al.
(2019, May 3).


A key source of support for the view that challenging people’s beliefs about free will may undermine moral behavior is two classic studies by Vohs and Schooler (2008). These authors reported that exposure to certain prompts suggesting that free will is an illusion increased cheating behavior. In the present paper, we report several attempts to replicate this influential and widely cited work. Over a series of five studies (sample sizes of N = 162, N = 283, N = 268, N = 804, N = 982) (four preregistered) we tested the relationship between (1) anti-free-will prompts and free will beliefs and (2) free will beliefs and immoral behavior. Our primary task was to closely replicate the findings from Vohs and Schooler (2008) using the same or highly similar manipulations and measurements as the ones used in their original studies. Our efforts were largely unsuccessful. We suggest that manipulating free will beliefs in a robust way is more difficult than has been implied by prior work, and that the proposed link with immoral behavior may not be as consistent as previous work suggests.

Thursday, July 23, 2020

“Feeling superior is a bipartisan issue: Extremity (not direction) of political views predicts perceived belief superiority”

Harris, E. A., & Van Bavel, J. J. (2020, May 20).


There is currently a debate in political psychology about whether dogmatism and belief superiority are symmetric or asymmetric across the ideological spectrum. One study found that dogmatism was higher amongst conservatives than liberals, but both conservatives and liberals with extreme attitudes reported higher perceived superiority of beliefs (Toner et al., 2013). In the current study, we conducted a pre-registered direct and conceptual replication of this previous research using a large nationally representative sample. Consistent with prior research, we found that conservatives had higher dogmatism scores than liberals while both conservative and liberal extreme attitudes were associated with higher belief superiority compared to more moderate attitudes. As in the prior research we also found that whether conservative or liberal attitudes were associated with higher belief superiority was topic dependent. Different from prior research, we found that ideologically extreme individuals had higher dogmatism. Implications of these results for theoretical debates in political psychology are discussed.


The current work provides further evidence that conservatives have higher dogmatism scores than liberals while both conservative and liberal extreme attitudes are associated with higher belief superiority (and dogmatism). However, ideological differences in belief superiority vary by topic. Therefore, to assess general differences between liberals and conservatives it is necessary to look across many diverse topics and model the data appropriately. If scholars instead choose to study one topic at a time, any ideological differences they find may say more about the topic than about innate differences between liberals and conservatives.

Friday, June 12, 2020

The science behind human irrationality just passed a huge test

Cathleen O’Grady
Ars Technica
Originally posted 22 May 20

Here are two excerpts:

People don’t approach things like loss and risk as purely rational agents. We weigh losses more heavily than gains. We feel like the difference between 1 percent and 2 percent is bigger than the difference between 50 percent and 51 percent. This observation of our irrationality is one of the most influential concepts in behavioral science: skyscrapers of research have been built on Daniel Kahneman and Amos Tversky’s foundational 1979 paper that first described the paradoxes of how people make decisions when faced with uncertainty.

So when researchers raised questions about the foundations of those skyscrapers, it caused alarm. A large team of researchers set out to check whether the results of Kahneman and Tversky’s crucial paper would replicate if the same experiment were conducted now.

Behavioral scientists can heave a sigh of relief: the original results held up, and robustly. With more than 4,000 participants in 19 countries, nearly every question in the original paper was answered the same way by people today as it was by their 1970s counterparts.


Many of the results in the replication are more moderate than in the original paper. That’s a tendency that has been found in other replications and is probably best explained by the small samples in the original research. Getting accurate results (which often means less extreme results) needs big samples to get a proper read on how people in general behave. Smaller sample sizes were typical of the work at the time, and even today, it’s often hard to justify the effort of starting work on a new question with a huge sample size.
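The link between sample size and exaggerated results can be made concrete with a quick simulation. In the sketch below (the true effect of 0.2 standard deviations and all other numbers are illustrative assumptions), the same underlying effect is estimated far more noisily at 20 participants per group than at 500, so small studies are much more likely to land on an extreme value by chance:

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2  # assumed true standardized effect (illustrative)

def estimate(n):
    """Run one simulated two-group study; return the observed mean difference."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

for n in (20, 500):
    estimates = [estimate(n) for _ in range(2_000)]
    print(f"n={n}: mean estimate = {statistics.mean(estimates):.2f}, "
          f"spread of estimates = {statistics.stdev(estimates):.2f}")
```

Both sample sizes recover the true effect on average, but the spread of individual estimates is several times larger in the small-sample studies — and it is the extreme estimates that tend to get published, then shrink on replication.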

The info is here.

Friday, January 31, 2020

Most scientists 'can't replicate studies by their peers'

Tom Feilden
Originally posted 22 Feb 17

Here is an excerpt:

The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions.

Sadly nothing, it seems, could be further from the truth.

After meticulous research involving painstaking attention to detail over several years (the project was launched in 2011), the team was able to confirm only two of the original studies' findings.

Two more proved inconclusive and in the fifth, the team completely failed to replicate the result.

"It's worrying because replication is supposed to be a hallmark of scientific integrity," says Dr Errington.

Concern over the reliability of the results published in scientific literature has been growing for some time.

According to a survey published in the journal Nature last summer, more than 70% of researchers have tried and failed to reproduce another scientist's experiments.

Marcus Munafo is one of them. Now professor of biological psychology at Bristol University, he almost gave up on a career in science when, as a PhD student, he failed to reproduce a textbook study on anxiety.

"I had a crisis of confidence. I thought maybe it's me, maybe I didn't run my study well, maybe I'm not cut out to be a scientist."

The problem, it turned out, was not with Marcus Munafo's science, but with the way the scientific literature had been "tidied up" to present a much clearer, more robust outcome.

The info is here.

Tuesday, July 9, 2019

A Waste of 1,000 Research Papers

Ed Yong
The Atlantic
Originally posted May 17, 2019

In 1996, a group of European researchers found that a certain gene, called SLC6A4, might influence a person’s risk of depression.

It was a blockbuster discovery at the time. The team found that a less active version of the gene was more common among 454 people who had mood disorders than in 570 who did not. In theory, anyone who had this particular gene variant could be at higher risk for depression, and that finding, they said, might help in diagnosing such disorders, assessing suicidal behavior, or even predicting a person’s response to antidepressants.

Back then, tools for sequencing DNA weren’t as cheap or powerful as they are today. When researchers wanted to work out which genes might affect a disease or trait, they made educated guesses, and picked likely “candidate genes.” For depression, SLC6A4 seemed like a great candidate: It’s responsible for getting a chemical called serotonin into brain cells, and serotonin had already been linked to mood and depression. Over two decades, this one gene inspired at least 450 research papers.

But a new study—the biggest and most comprehensive of its kind yet—shows that this seemingly sturdy mountain of research is actually a house of cards, built on nonexistent foundations.

Richard Border of the University of Colorado at Boulder and his colleagues picked the 18 candidate genes that have been most commonly linked to depression—SLC6A4 chief among them. Using data from large groups of volunteers, ranging from 62,000 to 443,000 people, the team checked whether any versions of these genes were more common among people with depression. “We didn’t find a smidge of evidence,” says Matthew Keller, who led the project.

The info is here.

Monday, June 24, 2019

Not so Motivated After All? Three Replication Attempts and a Theoretical Challenge to a Morally-Motivated Belief in Free Will

Andrew E. Monroe and Dominic Ysidron


Free will is often appraised as a necessary input for holding others morally or legally responsible for misdeeds. Recently, however, Clark and colleagues (2014) argued for the opposite causal relationship. They assert that moral judgments and the desire to punish motivate people’s belief in free will. In three experiments—two exact replications (Studies 1 & 2b) and one close replication (Study 2a)—we seek to replicate these findings. Additionally, in a novel experiment (Study 3) we test a theoretical challenge derived from attribution theory, which suggests that immoral behaviors do not uniquely influence free will judgments. Instead, our norm-violation model argues that norm deviations of any kind—good, bad, or strange—cause people to attribute more free will to agents, and attributions of free will are explained via desire inferences. Across replication experiments we found no evidence for the original claim that witnessing immoral behavior causes people to increase their belief in free will, though we did replicate the finding that people attribute more free will to agents who behave immorally compared to a neutral control (Studies 2a & 3). Finally, our novel experiment demonstrated broad support for our norm-violation account, suggesting that people’s willingness to attribute free will to others is malleable, but not because people are motivated to blame. Instead, this experiment shows that attributions of free will are best explained by people’s expectations for norm adherence, and when these expectations are violated people infer that an agent expressed their free will to do so.

From the Discussion Section:

Together these findings argue for a non-moral explanation of free will judgments, with norm violation as the key driver. This account explains people’s tendency to attribute more free will to agents behaving badly: people generally expect others to follow moral norms, and when they don’t, people believe that there must have been a strong desire to perform the behavior. In addition, a norm-violation account is able to explain why people attribute more free will to agents behaving in odd or morally positive ways. Any deviation from what is expected causes people to attribute more desire and choice (i.e., free will) to that agent. Thus our findings suggest that people’s willingness to ascribe free will to others is indeed malleable, but considerations of free will are being driven by basic social cognitive representations of norms, expectations, and desire. Moreover, these data indicate that when people endorse free will for themselves or for others, they are not making claims about broad metaphysical freedom. Instead, if desires and norm-constraints are what affect ascriptions of free will, this suggests that what it means to have (or believe in) free will is to be rational (i.e., making choices informed by desires and preferences) and able to overcome constraints.

A preprint can be found here.

Motivated free will belief: The theory, new (preregistered) studies, and three meta-analyses

Clark, C. J., Winegard, B. M., & Shariff, A. F. (2019).
Manuscript submitted for publication.


Do desires to punish lead people to attribute more free will to individual actors (motivated free will attributions) and to stronger beliefs in human free will (motivated free will beliefs) as suggested by prior research? Results of 14 new (7 preregistered) studies (n=4,014) demonstrated consistent support for both of these. These findings consistently replicated in studies (k=8) in which behaviors meant to elicit desires to punish were rated as equally or less counternormative than behaviors in control conditions. Thus, greater perceived counternormativity cannot account for these effects. Additionally, three meta-analyses of the existing data (including eight vignette types and eight free will judgment types) found support for motivated free will attributions (k=22; n=7,619; r=.25, p<.001) and beliefs (k=27; n=8,100; r=.13, p<.001), which remained robust after removing all potential moral responsibility confounds (k=26; n=7,953; r=.12, p<.001). The size of these effects varied by vignette type and free will belief measurement. For example, presenting the FAD+ free will belief subscale mixed among three other subscales (as in Monroe and Ysidron’s [2019] failed replications) produced a smaller average effect size (r=.04) than shorter and more immediate measures (rs=.09-.28). Also, studies with neutral control conditions produced larger effects (Attributions: r=.30; Beliefs: rs=.14-.16) than those with control conditions involving bad actions (Attributions: r=.05; Beliefs: rs=.04-.06). Removing these two kinds of studies from the meta-analyses produced larger average effect sizes (Attributions: r=.28; Beliefs: rs=.17-.18). We discuss the relevance of these findings for past and future research and the significance of these findings for human responsibility.
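For readers unfamiliar with how pooled correlations like the r values above are produced, a common approach is a fixed-effect average on Fisher's z scale, weighting each study by n − 3. The minimal sketch below uses hypothetical study inputs, not the paper's actual data:

```python
import math

def meta_analytic_r(studies):
    """Fixed-effect pooled correlation: average atanh(r) weighted by n - 3,
    then transform back with tanh."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    return math.tanh(num / den)

# Hypothetical (r, n) pairs for three studies
studies = [(0.30, 200), (0.10, 800), (0.25, 150)]
print(round(meta_analytic_r(studies), 3))  # → 0.155
```

The z transform makes correlations approximately normal with variance 1/(n − 3), which is why larger studies pull the pooled estimate toward their result — here the big n = 800 study drags the average well below the two smaller studies' correlations.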

From the Discussion Section:

We suspect that motivated free will beliefs have become more common as society has become more humane and more concerned about proportionate punishment. Many people now assiduously reflect upon their own society’s punitive practices and separate those who deserve to be punished from those who are incapable of being fully responsible for their actions. Free will is crucial here because it is often considered a prerequisite for moral responsibility (Nichols & Knobe, 2007; Sarkissian et al., 2010; Shariff et al., 2014). Therefore, when one is motivated to punish another person, one is also motivated to inflate free will beliefs and free will attributions to specific perpetrators as a way to justify punishing the person.

A preprint can be downloaded here.

Saturday, September 15, 2018

Social Science One And How Top Journals View The Ethics Of Facebook Data Research

Kalev Leetaru
Originally posted on August 13, 2018

Here is an excerpt:

At the same time, Social Science One’s decision to leave all ethical questions TBD and to eliminate the right to informed consent or the ability to opt out of research fundamentally redefines what it means to conduct research in the digital era, normalizing the removal of these once sacred ethical tenets. Given the refusal of one of its committee members to provide replication data for his own study and the statement by another committee member that “I have articulated the argument that ToS are not, and should not be considered, ironclad rules binding the activities of academic researchers. … I don't think researchers should reasonably be expected to adhere to such conditions, especially at a time when officially sanctioned options for collecting social media data are disappearing left and right,” the result is an ethically murky landscape in which it is unclear just where Social Science One draws the line at what it will or will not permit.

Given Facebook’s new focus on “privacy first” I asked the company whether it would commit to offering its two billion users a new profile setting allowing them to opt out of having their data made available to academic researchers such as Social Science One. As it has repeatedly done in the past, the company declined to comment.

The info is here.

Wednesday, July 11, 2018

The Lifespan of a Lie

Ben Blum
Originally posted June 7, 2018

Here is an excerpt:

Somehow, neither Prescott’s letter nor the failed replication nor the numerous academic critiques have so far lessened the grip of Zimbardo’s tale on the public imagination. The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

For psychology professors, the Stanford prison experiment is a reliable crowd-pleaser, typically presented with lots of vividly disturbing video footage. In introductory psychology lecture halls, often filled with students from other majors, the counterintuitive assertion that students’ own belief in their inherent goodness is flatly wrong offers dramatic proof of psychology’s ability to teach them new and surprising things about themselves. Some intro psych professors I spoke to felt that it helped instill the understanding that those who do bad things are not necessarily bad people. Others pointed to the importance of teaching students in our unusually individualistic culture that their actions are profoundly influenced by external factors.


But if Zimbardo’s work was so profoundly unscientific, how can we trust the stories it claims to tell? Many other studies, such as Solomon Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. What is unique, and uniquely compelling, about Zimbardo’s narrative of the Stanford prison experiment is its suggestion that all it takes to make us enthusiastic sadists is a jumpsuit, a billy club, and the green light to dominate our fellow human beings.

The article is here.

Monday, June 25, 2018

Why Rich Kids Are So Good at the Marshmallow Test

Jessica McCrory Calarco
The Atlantic
Originally published June 1, 2018

Here is an excerpt:

This new paper found that among kids whose mothers had a college degree, those who waited for a second marshmallow did no better in the long run—in terms of standardized test scores and mothers’ reports of their children’s behavior—than those who dug right in. Similarly, among kids whose mothers did not have college degrees, those who waited did no better than those who gave in to temptation, once other factors like household income and the child’s home environment at age 3 (evaluated according to a standard research measure that notes, for instance, the number of books that researchers observed in the home and how responsive mothers were to their children in the researchers’ presence) were taken into account. For those kids, self-control alone couldn’t overcome economic and social disadvantages.

The failed replication of the marshmallow test does more than just debunk the earlier notion; it suggests other possible explanations for why poorer kids would be less motivated to wait for that second marshmallow. For them, daily life holds fewer guarantees: There might be food in the pantry today, but there might not be tomorrow, so there is a risk that comes with waiting. And even if their parents promise to buy more of a certain food, sometimes that promise gets broken out of financial necessity.

The information is here.

Monday, April 24, 2017

How Flawed Science Is Undermining Good Medicine

Morning Edition
Originally posted April 6, 2017

Here is an excerpt:

A surprising medical finding caught the eye of NPR's veteran science correspondent Richard Harris in 2014. A scientist from the drug company Amgen had reviewed the results of 53 studies that were originally thought to be highly promising — findings likely to lead to important new drugs. But when the Amgen scientist tried to replicate those promising results, in most cases he couldn't.

"He tried to reproduce them all," Harris tells Morning Edition host David Greene. "And of those 53, he found he could only reproduce six."

That was "a real eye-opener," says Harris, whose new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions explores the ways even some talented scientists go wrong — pushed by tight funding, competition and other constraints to move too quickly and sloppily to produce useful results.

"A lot of what everybody has reported about medical research in the last few years is actually wrong," Harris says. "It seemed right at the time but has not stood up to the test of time."

The impact of weak biomedical research can be especially devastating, Harris learned, as he talked to doctors and patients. And some prominent scientists he interviewed told him they agree that it's time to recognize the dysfunction in the system and fix it.

The article is here.

Monday, March 13, 2017

Why Facts Don't Change Our Minds

Elizabeth Kolbert
The New Yorker
Originally published February 27, 2017

Here is an excerpt:

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The article is here.

Thursday, October 13, 2016

The influence of intention, outcome and question-wording on children’s and adults’ moral judgments

Gavin Nobes, Georgia Panagiotaki, Kimberley J. Bartholomew
Journal of Experimental Child Psychology, Volume 157, December 2016, Pages 190–204


The influence of intention and outcome information on moral judgments was investigated by telling children aged 4–8 years and adults (N = 169) stories involving accidental harms (positive intention, negative outcome) or attempted harms (negative intention, positive outcome) from two studies (Helwig, Zelazo, & Wilson, 2001; Zelazo, Helwig, & Lau, 1996). When the original acceptability (wrongness) question was asked, the original findings were closely replicated: children’s and adults’ acceptability judgments were based almost exclusively on outcome, and children’s punishment judgments were also primarily outcome-based. However, when this question was rephrased, 4–5-year-olds’ judgments were approximately equally influenced by intention and outcome, and from 5–6 years judgments were based considerably more on intention than outcome. These findings indicate that, for methodological reasons, children’s (and adults’) ability to make intention-based judgments has often been substantially underestimated.

The article is here.

Tuesday, March 22, 2016

Psychologists Call Out the Study That Called Out the Field of Psychology

By Rachel E. Gross
Originally published March 3, 2016

Remember that study that found that most psychology studies were wrong? Yeah, that study was wrong. That’s the conclusion of four researchers who recently interrogated the methods of that study, which itself interrogated the methods of 100 psychology studies to find that very few could be replicated. (Whoa.) Their damning commentary will be published Friday in the journal Science. (The scientific body that publishes the journal sent Slate an early copy.)

In case you missed the hullabaloo: A key feature of the scientific method is that scientific results should be reproducible—that is, if you run an experiment again, you should get the same results. If you don’t, you’ve got a problem. And a problem is exactly what 270 scientists found last August, when they decided to try to reproduce 100 peer-reviewed journal studies in the field of social psychology. Only around 39 percent of the reproduced studies, they found, came up with similar results to the originals.

The article is here.

Sunday, March 6, 2016

The Unbearable Asymmetry of Bullshit

By Brian Earp
BMJ Blogs
Originally posted February 16, 2016


Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

The blog post is here.

Tuesday, December 29, 2015

AI is different because it lets machines weld the emotional with the physical

By Peter McOwan
The Conversation
Originally published December 10, 2015

Here is an excerpt:

Creative intelligence

However, many are sensitive to the idea of artificial intelligence being artistic – entering the sphere of human intelligence and creativity. AI can learn to mimic the artistic process of painting, literature, poetry and music, but it does so by learning the rules, often from access to large datasets of existing work from which it extracts patterns and applies them. Robots may be able to paint – applying a brush to canvas, deciding on shapes and colours – but based on processing the example of human experts. Is this creating, or copying? (The same question has been asked of humans too.)

The entire article is here.