Originally published 5 March 2019
Here is an excerpt:
To see this, consider an analogy. Imagine we are testing a drug for weight loss. For every 100 subjects in the drug group, three lose one kilogramme and 97 gain five kilogrammes. For every 100 subjects in the placebo group, two lose four kilogrammes and 98 neither gain nor lose any weight. How effective is the drug for weight loss? The odds ratio of weight loss is 1.5, and yet this number tells us nothing about how much weight people on average gain or lose – indeed, it entirely conceals the real effects of the drug: on average, subjects on the drug gain almost five kilogrammes, while subjects on placebo stay roughly where they started. Though this is an extreme analogy, it shows how cautious we must be when interpreting this celebrated meta-analysis. Unfortunately, in response to this work, many leading psychiatrists celebrated, and news headlines misleadingly claimed ‘The drugs do work.’ On the winding route from the hard work of these researchers to the news reports where you were most likely to hear about that study, a simple number became a lie.
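For readers who want to check the arithmetic, here is a short Python sketch using the hypothetical figures from the analogy above (the group sizes and weight changes are the article's illustrative numbers, not trial data):

```python
# Hypothetical figures from the weight-loss analogy:
# Drug group (per 100 subjects): 3 lose 1 kg each, 97 gain 5 kg each.
# Placebo group (per 100 subjects): 2 lose 4 kg each, 98 stay the same.

drug_losers, drug_gainers = 3, 97
placebo_losers, placebo_unchanged = 2, 98

# Odds of weight loss in each group: losers divided by non-losers.
odds_drug = drug_losers / drug_gainers              # 3/97
odds_placebo = placebo_losers / placebo_unchanged   # 2/98

odds_ratio = odds_drug / odds_placebo
print(f"Odds ratio of weight loss: {odds_ratio:.2f}")

# Mean weight change per subject -- the quantity the odds ratio conceals.
mean_drug = (drug_losers * -1 + drug_gainers * 5) / 100            # +4.82 kg
mean_placebo = (placebo_losers * -4 + placebo_unchanged * 0) / 100  # -0.08 kg
print(f"Mean change, drug group:    {mean_drug:+.2f} kg")
print(f"Mean change, placebo group: {mean_placebo:+.2f} kg")
```

The odds ratio comes out at roughly 1.5 in favour of the drug, even though the drug group gains nearly five kilogrammes on average and the placebo group loses a trivial amount – exactly the disconnect the analogy is meant to expose.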
When analysed properly, the best evidence indicates that antidepressants are not clinically beneficial. The meta-analyses worth considering, such as the one above, involve attempts to gather evidence from all trials on antidepressants, including those that remain unpublished. Of course it is impossible to know that a meta-analysis includes all unpublished evidence, because publication bias is characterised by deception, either inadvertent or wilful. Nevertheless, these meta-analyses are serious attempts to address publication bias by finding as much data as possible. What, then, do they show?
In meta-analyses that include as much of the evidence as possible, the severity of depression among subjects who receive antidepressants goes down by approximately two points compared with subjects who receive a placebo. Two points. Remember, a depression score can go down by double that amount simply if a subject stops fidgeting. This result, found by both champions and critics of antidepressants, has been replicated year after year for more than a decade (see, for example, the meta-analyses led by Irving Kirsch in 2008, by J C Fournier in 2010, and by Janus Christian Jakobsen in 2017). The phenomena of blind-breaking, the placebo effect and unresolved publication bias could easily account for this trivial two-point reduction in severity scores.