It’s not just about Prozac. Our failure to properly regulate testing in the pharmaceutical industry has devastating costs
Wednesday February 27 2008
Yesterday the journal PLoS Medicine published a study which combined the results of 47 trials on some antidepressant drugs, including Prozac, and found only minimal benefits over placebo, except for the most depressed patients. It has been misreported as a definitive nail in the coffin: this is not true. It was a restricted analysis [see below] but, more importantly, on the question of antidepressants, it added very little. We already knew that SSRIs give only a modest benefit in mild and moderate depression and, indeed, the NICE guidelines themselves have actively advised against using them in milder cases since 2004.
But the real story goes way beyond the question of Prozac. This new study — published, ironically, in an open access journal — tells a fascinating story of buried data, and of our collective failure, as a society, over half a century, to adequately regulate the colossal $550bn pharmaceutical industry.
The key issue is simple. In any situation, to make any kind of sensible decision about which treatment is best, a doctor must be able to take into account all of the available information. But drug companies have repeatedly been shown to bury unflattering data.
Sometimes they bury data which shows drugs to be actively harmful. This happened in the case of Vioxx and heart attacks, and SSRIs and suicidal thoughts. Such stories feel, intuitively, like cover ups. But there are also more subtle issues at stake, in the burying of results showing minimal efficacy, and these have only been revealed through the excellent investigative work of medical academics.
One example came just last month. As I reported at the time, a paper in the New England Journal of Medicine dug out a list of all trials on SSRIs which had ever been registered with the Food and Drug Administration, and then went to look for those same trials in the academic literature. There were 37 studies which were assessed by the FDA as positive and, with a single exception, every one of those positive trials was written up, proudly, and published in full. But there were also 33 studies which had negative or iffy results and, of those, 22 were simply not published at all — they were buried — while 11 were written up and published in a way that portrayed them as having a positive outcome.
The new study, published this week, has analysed all of the data from the FDA, using the Freedom of Information Act to obtain the results for some of the trials. That medical academics should need to use that kind of legislation to obtain information about trials on pills which are prescribed to millions of people is absurd. More than that, it breaks a key moral contract between patient and researcher.
When a patient agrees to participate in a clinical trial, they give their consent on the understanding that their information will be used to increase the sum of our knowledge about treatments, to ensure that other people, in the future, will be treated more effectively. Burying unwelcome results is an unambiguous betrayal of their trust and generosity.
And yet we have known about this happening for a long time. The first paper describing “publication bias” — where studies with negative results tend to get forgotten — was published in 1959. And there are two very simple and widely accepted solutions, which have been discussed in the academic literature at length since the 80s, but which are still not fully in place.
The first is obvious. Nobody should get ethical approval to perform a clinical trial unless there is a clear undertaking that the results will be published, in full, in a publicly available forum, and that the researchers will have full academic freedom to do so. Any company trying to silence academics should be named and shamed, and even attempting to do so should be a regulatory offence.
That’s the butch solution. But there is also a more elegant one, which is arguably even more important: a compulsory international trials register. Give every trial an ID number, so that we can all see a trial exists, so that trials can’t go quietly missing in action, and so that we know when and where to look if they do.
The pharmaceutical industry is very imaginative, after all, and registers also help to manage some of the other less obvious ways in which they distort the literature. For example, sometimes companies will publish flattering data two or three times over, in slightly different forms, as if it came from different studies, to make it look as if there are a lot of different positive findings out there: registers make this instantly obvious.
Worse than that, companies often move the goalposts, and change the design of a trial after the results are in, to try and massage the findings. This, again, is impossible when the protocol is registered before a trial begins.
This is just a taste of the tricks of their trade (although I’ve posted a long reading list below if your interest is piqued). Alongside these deep-rooted, systematic problems with the pharmaceutical industry, the single issue of SSRI antidepressants, and these new findings, becomes almost trivial. Biased under-reporting of clinical trials happens in all areas of medicine. It wastes money, and it costs lives. It is unethical, and it is indefensible. But most damningly of all, it could be fixed in a legislative trice.
Ben Goldacre is a medical doctor and writes the Bad Science column in the Guardian. His book Bad Science will be published by 4th Estate later this year.
If this kind of thing interests you I heartily recommend the genuinely brilliant book Testing Treatments by Iain Chalmers, Hazel Thornton, and Imogen Evans. It’s written specifically for a lay audience.
It’s nice to have a book, but the whole thing can also be downloaded unabridged (in English and also in Arabic, helpfully) from here:
For something more advanced, or rather, if you prefer things which are closer to textbooks, then How to Read a Paper: The Basics of Evidence-Based Medicine by Prof Trisha Greenhalgh is a highly readable, industry standard, medical student text. As you are aware, the whole of badscience.net is really just an excuse to bring the joys of evidence-based medicine and primary academic literature to the masses, through bitchiness and sarcasm.
The NICE Guidelines say (since 2004):
“Antidepressants in mild depression
• Antidepressants are not recommended for the initial treatment of mild depression, because the risk–benefit ratio is poor.”
One more thing:
It seems to me that the media walk around with big sticky labels marked “good” and “bad”. This meta-analysis is a fascinating bit of work, and it tells a damning story about the pharmaceutical industry burying data, but it has also been ridiculously misreported, in the first day of its life.
I’m not going to list all the errors here – perversely – because I know a lot of journalists read this blog, and I want them all to get their stories as wrong as possible so I have the option of writing about them this weekend. Oh, curse my conscience:
1. It was not a study of SSRI antidepressant drugs: neither nefazodone nor venlafaxine is an SSRI.
2. It did not look at all the trials ever done on these drugs: it looked only at the trials done before the drugs were licensed (none of them more than six weeks long), and specifically excluded all the trials done after they were licensed. It is common for quacks and journalists to think that the moment of licensing is some kind of definitive “it works” stamp of approval. It’s not; it’s usually just the beginning of the story of a drug’s evidence.
3. It did not show that these drugs have no benefit over placebo: it showed that they do have a statistically significant (“measurable”) benefit over placebo, but for mild and moderate depression that benefit was not big enough for most people to consider it clinically significant, ie there was an improvement, but not a big enough improvement on a depression rating scale for anyone to get too excited over it.
4. I could go on.
And remember kids, placebos are amazingly powerful.