Saturday October 25 2008
Welcome to nerds’ corner, and yet another small-print criticism of a trivial act of borderline dubiousness which will ultimately lead to distorted evidence, irrational decisions, and bad outcomes in what I like to call “the real world”.
So the ClinPsyc blog (clinpsyc.blogspot.com) has spotted that the drug company Lilly have published identical data on duloxetine – a new-ish antidepressant drug – twice over, in two entirely separate scientific papers.
The first article is from the January 2008 edition of the Journal of Clinical Psychiatry, a study which concludes that the “switch to duloxetine was associated with significant improvements in both emotional and painful physical symptoms of depression”. The second concluded the same thing.
ClinPsyc went through both papers and checked – with the characteristic anality of a practising academic – all the numbers in the data tables, and found them essentially identical. A few different subscales were reported in each paper, and the emphasis in the second is more on pain than depression, but other than that, this is identical data, presented in two apparently separate papers.
There are several reasons why this is interesting.
Firstly, duplicate publication is a serious problem, because it distorts a reader’s impression of how much evidence is out there. If you think there are two trials showing that something works, then obviously that’s much more impressive than if there’s just one. “Of course I prescribe it”, you can hear the doctors say: “I’ve seen two trials showing that it works”.
I got onto the lead author of the paper, who explained that the second paper expanded on the “pain” aspects of the results. That is slightly fair enough. He also claimed that the second paper referenced the first. This is true in the strictest sense of the phrase: it did indeed make reference to its existence as a previous experiment. But it gave absolutely no indication that this was the same experiment, and without going forensic on the numbers (or going to the extreme lengths of checking the trial registry ID number against that of every previous experiment ever done in the field), a reader had no way to know that the data here all came from that previous study, largely, simply, reproduced. It looked like two studies. It just did.
And it’s not just about planting duplicate information in prescribers’ memories. Duplicate publication can also distort the results of “meta-analyses”, big studies where the results of lots of trials are brought together into one big spreadsheet. Because then, if you can’t spot what’s duplicated, some evidence is actually counted twice in the numerical results. This is why it is more acceptable to publish duplicates if you at least acknowledge that you have done so. By way of example, I am being clear that I will now rehash a paragraph I wrote several years ago on the work of Dr Martin Tramer.
Martin Tramer was looking at the efficacy of a nausea drug called ondansetron, and noticed that lots of the data in a meta-analysis he was doing seemed to be replicated: the results for many individual patients had been written up several times, in slightly different forms, in apparently different studies, in different journals. Crucially, data which showed the drug in a better light were more likely to be duplicated than the data which showed it to be less impressive, and overall this led to a 23 per cent overestimate of the drug’s efficacy.
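For the numerically inclined, the mechanism is easy to sketch. This is a toy illustration only – every number in it is invented, and it has nothing to do with the actual ondansetron or duloxetine data – but it shows how counting one flattering trial twice drags a simple inverse-variance pooled estimate upwards:

```python
# Toy sketch (invented numbers): how duplicate publication can skew
# a simple fixed-effect meta-analysis pooled estimate.

def pooled_effect(trials):
    """Fixed-effect pooled estimate: weight each trial by 1/variance."""
    weights = [1 / var for _, var in trials]
    total = sum(w * effect for (effect, _), w in zip(trials, weights))
    return total / sum(weights)

# (effect size, variance) for three hypothetical trials
trials = [(0.10, 0.04), (0.50, 0.04), (0.20, 0.04)]
print(round(pooled_effect(trials), 3))  # honest estimate: 0.267

# The most flattering trial gets published twice, and is counted twice
duplicated = trials + [(0.50, 0.04)]
print(round(pooled_effect(duplicated), 3))  # inflated estimate: 0.325
```

Because the flattering results are the ones most likely to be duplicated, the error is not random noise: it is a systematic push in the drug's favour, which is exactly the pattern Tramer found.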
But the other thing to notice about this duloxetine experiment – so good they published it twice – is that its design made the Durham fish oil “trial” look like a work of genius. There was no control group: it simply looked at whether pain improved after patients swapped to duloxetine from a previous antidepressant (either instantly, or with a gradual cross-over of prescriptions, perhaps to induce a vague sense that one thing was somehow being compared with another).
You don’t need to be a professor of clinical trial methodology to recognise that some people’s pain will improve anyway, under those conditions, regardless of what’s in the pill (and regardless of whether prescriptions are tapered into each other), through a combination of the placebo effect, and the fact that sometimes, in fact quite often, things like pain do just get better with time.
And this might have been a worthwhile study to do if you had good grounds to believe that duloxetine really did improve physical pain in depression – as Lilly have claimed for a while – and you just needed to work out the best dosing regime, but sadly a meta-analysis published earlier this year looked at all the evidence for that claim. Its title is “Duloxetine Does Not Relieve Painful Physical Symptoms in Depression: A Meta-Analysis”.
Nobody knows how common duplicate publication is in academia, with a steady trickle reported in the journals, and now a steady trickle being spotted by bloggers. In fact, just two days after ClinPsyc published their story, The MacGuffin (chekhovsgun.blogspot.com) found an identical story about a different drug. Just from mentioning this story this morning I’ve picked up another from my friend Will in the room next door. These are afterthoughts by academics, water cooler comments, but once posted on the internet, the greatest photocopying machine ever invented, they become searchable, and notable, and very slightly embarrassing.