Ben Goldacre, Saturday 3 October 2009, The Guardian.
There are some very obvious problems that never seem to go away. Right now I can see 1,592 articles on Google News about one poor girl who died unexpectedly after receiving the cervical vaccine, and only 363 explaining that the post mortem found a massive and previously undiagnosed tumour in her chest. Meanwhile the Daily Mail this week continue their oncological ontology project with the magnificent headline: “Daily dose of housework could cut risk of breast cancer”.
But while the media wound themselves into an emotive frenzy of elaborate conspiracy theories, killer vaccines and industry cover-ups, the real story, with a far higher body count, was hidden away in bland, dry data. This month the Journal of the American Medical Association quietly published one of the most damning papers to have appeared all year.
We have known for decades that academic publishing faces two serious problems. One is that trials often go missing in action: a drug company might do eight trials of a drug, say, but only two have a positive result. So those two positive trials will appear in an academic journal, while the six with negative results quietly disappear. Bizarrely, regulatory bodies like the FDA get to see this negative data, but often enough doctors do not.
This is a familiar problem, and a murderous one, because overall the results of all eight trials combined might show that the treatment is ineffective: in the absence of this full information, people are subjected unnecessarily to side effects, and deprived of other more effective treatments.
On top of that, we also know that researchers can mischievously change their stated goal, or “primary outcome”, after their trial has finished. You might do a trial on a blood pressure pill, for example, stating that you will look to see if it can reduce heart attacks, but find at the end that it doesn’t. Then you might retrospectively change the whole purpose of your study, ignore the heart attacks, pretend it was only ever about blood pressure, and glowingly report a reduction in blood pressure as if this was the primary outcome you were always interested in. Or you might measure so many different things that some of them will show up as positive changes simply by chance.
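That last point, about chance findings, is simple arithmetic: if each outcome has a 5% chance of crossing the conventional significance threshold purely by chance, then measuring enough outcomes makes at least one spurious "positive" result very likely. A minimal sketch of the sums (the 20-outcome count is an illustrative assumption, not a figure from the study):

```python
# Illustrative arithmetic for the multiple-comparisons problem.
# Assumes the conventional p < 0.05 threshold and independent outcomes;
# "20 outcomes" is a hypothetical example, not data from the paper.

def chance_of_false_positive(n_outcomes, alpha=0.05):
    """Probability that at least one of n independent outcomes
    appears 'significant' purely by chance."""
    return 1 - (1 - alpha) ** n_outcomes

print(round(chance_of_false_positive(1), 2))   # 0.05 - one outcome, 5% risk
print(round(chance_of_false_positive(20), 2))  # 0.64 - twenty outcomes, ~64% risk
```

So a trial that quietly measures twenty things has better-than-even odds of producing something "positive" to report, even for a treatment that does nothing at all.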
Both of these problems are supposed to have already been fixed by clinical trials registers: before you start your trial, you publish the protocol, saying exactly what your primary outcome is, how many people are in your trial, when it will finish, and so on. Then people can see if your trial has gone missing in action, and we can see if you have misled us by changing your primary outcome, simply by comparing the protocol and the finished academic paper side by side.
This only works if it is enforced. In 2005, the International Committee of Medical Journal Editors announced they would only publish trials that had been registered. Many journals check initial protocols against finished academic papers. So Sylvain Mathieu and colleagues checked up on the system: they gathered together all of the randomised controlled trials from cardiology, rheumatology, and gastroenterology in the 10 biggest general medical and specialty journals from 2008.
Of these 323 trials, fewer than half were adequately registered before the end of the trial with the primary outcome clearly specified. Trial registration was entirely lacking for 89 trials. This permissiveness means that drug companies know they can get away without registering trials, and so the deaths caused by missing data will continue.
Then they looked more closely at the trials which were properly registered, and found discrepancies between the outcomes stated at registration and the outcomes published in the final paper in a third of them: in almost all the papers where it was possible to assess the switch, duff outcomes were swapped out in favour of ones that showed a positive finding.
You might find it boring, but our failure to ensure full, undistorted publication of all trial data is the single most important issue in medicine today, because this is the only way we can know if a treatment does good, or harm. The story may be less emotive than one dead teenager, but it costs many more lives, and you should make the effort to be angry about it, because the boring regulators we trust to monitor boring problems have repeatedly failed us on this one. Instead, we rely on good will and vague promises, monitored only by an occasional ad hoc analysis from an academic on a whim. This is a broken system. Write 1,592 stories about that.
Mathieu, S. et al., 2009. Comparison of Registered and Published Primary Outcomes in Randomized Controlled Trials. JAMA, 302(9), 977-984.