Listen carefully, I shall say this only once

October 26th, 2008 by Ben Goldacre in academic publishing, badscience, big pharma, duplicate publication, regulating research | 16 Comments »

Ben Goldacre
The Guardian,
Saturday October 25 2008

Welcome to nerds’ corner, and yet another small print criticism of a trivial act of borderline dubiousness which will ultimately lead to distorted evidence, irrational decisions, and bad outcomes in what I like to call “the real world”.

So the ClinPsyc blog (clinpsyc.blogspot.com) has spotted that the drug company Lilly have published identical data on duloxetine – a new-ish antidepressant drug – twice over, in two entirely separate scientific papers.

The first article, from the January 2008 edition of the Journal of Clinical Psychiatry, is a study which concludes that the “switch to duloxetine was associated with significant improvements in both emotional and painful physical symptoms of depression”. The second concluded the same thing.

ClinPsyc went through both papers and checked – with the characteristic anality of a practising academic – all the numbers in the data tables, finding them essentially identical. A few different subscales were reported in each paper, and the emphasis in the second is more on pain than depression, but other than that, this is identical data, from two apparently distinct papers.

There are several reasons why this is interesting.

Firstly, duplicate publication is a serious problem, because it distorts a reader’s impression of how much evidence is out there. If you think there are two trials showing that something works, then obviously that’s much more impressive than if there’s just one. “Of course I prescribe it”, you can hear the doctors say: “I’ve seen two trials showing that it works”.

I got onto the lead author of the paper, who explained that the second paper expanded more on the “pain” aspects of the results. That is slightly fair enough. He also claimed that the second paper referenced the first. This is true in the strictest sense of the phrase: it did indeed make reference to its existence as a previous experiment, but it gave absolutely no indication that this was the same experiment, and for the reader, without going forensic on the numbers (or going to the extreme lengths of checking the trial registry ID number against that of every previous experiment ever done in the field), there was literally no way to know that the data here was all from that previous study, and largely, simply, reproduced. It looked like two studies. It just did.

And it’s not just about planting duplicate information in prescribers’ memories. Duplicate publication can also distort the results of “meta-analyses”, big studies where the results of lots of trials are brought together into one big spreadsheet. Because then, if you can’t spot what’s duplicated, some evidence is actually counted twice in the numerical results. This is why it is more acceptable to publish duplicates if you at least acknowledge that you have done so. By way of example, I am being clear that I will now rehash a paragraph I wrote several years ago on the work of Dr Martin Tramer.

Martin Tramer was looking at the efficacy of a nausea drug called ondansetron, and noticed that lots of the data in a meta-analysis he was doing seemed to be replicated: the results for many individual patients had been written up several times, in slightly different forms, in apparently different studies, in different journals. Crucially, data which showed the drug in a better light were more likely to be duplicated than the data which showed it to be less impressive, and overall this led to a 23 per cent overestimate of the drug’s efficacy.
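
To make the double-counting concrete, here is a toy sketch in Python (the effect sizes are invented, and simple fixed-effect pooling stands in for a full meta-analysis): if the most flattering trial sneaks into the spreadsheet twice, the pooled estimate gets dragged towards it.

    # Toy illustration (invented numbers): how an unnoticed duplicate
    # skews a fixed-effect, inverse-variance-weighted pooled estimate.

    def pooled_estimate(trials):
        """Inverse-variance weighted mean of (effect, standard_error) pairs."""
        weights = [1 / se ** 2 for _, se in trials]
        effects = [effect for effect, _ in trials]
        return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    # Three hypothetical trials; the last one flatters the drug most.
    trials = [(0.10, 0.05), (0.15, 0.06), (0.40, 0.05)]

    print(round(pooled_estimate(trials), 3))                 # 0.224: honest pooling
    print(round(pooled_estimate(trials + [trials[-1]]), 3))  # 0.272: duplicate counted twice

In this made-up example the apparent effect is inflated by about a fifth: the same order of overestimate Tramer found with ondansetron.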

But the other thing to notice about this duloxetine experiment – so good they published it twice – is that its design made the Durham fish oil “trial” look like a work of genius. There was no control group, and it simply looked at whether pain improved after swapping to duloxetine from a previous antidepressant (either instantly, or with a gradual cross-over of prescriptions, perhaps to induce a vague sense that one thing is somehow being compared with another).

You don’t need to be a professor of clinical trial methodology to recognise that some people’s pain will improve anyway, under those conditions, regardless of what’s in the pill (and regardless of whether prescriptions are tapered into each other), through a combination of the placebo effect, and the fact that sometimes, in fact quite often, things like pain do just get better with time.
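
And a quick simulation shows how big that effect can be (this is my own toy model, not anybody’s trial data): enrol patients on a day when their pain is unusually bad, give them something with no effect at all, and the “after” scores improve anyway.

    # Toy simulation (invented model, not the duloxetine data): regression
    # to the mean makes an uncontrolled before/after study look like a win.
    import random

    random.seed(1)

    # Each patient's typical pain level on a 0-10 scale.
    usual = [random.uniform(3, 7) for _ in range(10000)]

    def measure(u):
        return u + random.gauss(0, 2)  # day-to-day noise around the typical level

    # Patients are switched to the new drug on a day their pain is unusually bad.
    enrolled = [(u, before) for u, before in ((u, measure(u)) for u in usual) if before > 7]

    # Re-measure later; the "drug" does nothing whatsoever in this model.
    after = [measure(u) for u, _ in enrolled]

    mean_before = sum(before for _, before in enrolled) / len(enrolled)
    mean_after = sum(after) / len(after)
    print(f"before: {mean_before:.1f}  after: {mean_after:.1f}")  # "after" is reliably lower

Nothing in that model gets better because of treatment: the improvement is pure selection plus noise.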

And this might have been a worthwhile study to do if you had good grounds to believe that duloxetine really did improve physical pain in depression – as Lilly have claimed for a while – and you just needed to work out the best dosing regime, but sadly a meta-analysis published earlier this year looked at all the evidence for that claim. Its title is “Duloxetine Does Not Relieve Painful Physical Symptoms in Depression: A Meta-Analysis”.

Nobody knows how common duplicate publication is in academia, with a steady trickle reported in the journals, and now a steady trickle being spotted by bloggers. In fact, just two days after ClinPsyc published their story, The MacGuffin (chekhovsgun.blogspot.com) found an identical story around a different drug. Just from mentioning this story this morning I’ve picked up another from my friend Will in the room next door. These are afterthoughts by academics, water cooler comments, but once posted on the internet, the greatest photocopying machine ever invented, they become searchable, and notable, and very slightly embarrassing.



16 Responses



  1. peterd102 said,

    October 26, 2008 at 4:47 pm

    Nicely spotted. I’d like it to be noted that this can work both ways; we are just cynical people. There can be publication bias away from positive studies if there are already a few good ones out there, or if the data isn’t very impressive. There can be duplicate negative studies too. However, whichever way they swing, they are still going to be an appalling problem.

  2. mjs said,

    October 27, 2008 at 12:23 am

    “Of course I prescribe it”, you can hear the doctors say: “I’ve seen two trials but, ha! I never read them showing that it works”.

    I would hope not. The titles of the papers are strikingly similar. Even the abstracts look a lot like mirror images.

    Who would cite a study as a reason to prescribe, based only on unsubstantiated rumor?

    …Oh. Heh, right. I forgot.

  3. Robert Carnegie said,

    October 27, 2008 at 12:42 am

    The Durham fish oil… incident, couldn’t it be a work of evil genius, perhaps? Or would you expect an evil genius to be cleverer at it – or maybe they were just clever enough?

  4. The Biologista said,

    October 27, 2008 at 2:43 am

    “Jesus”. Is all I can say. On the upside, this proves that we can publish almost any crap. Twice.

  5. IMSoP said,

    October 27, 2008 at 2:50 pm

    Maybe someone should develop an automated system for detecting duplicate publication. Something along the lines of the anti-plagiarism systems used by some Universities, but with particular algorithms for detecting and comparing raw data, and so on.

    This could be run by the editors of the journals themselves, and also by anyone with electronic access to the relevant journals. Then if a journal failed to follow up an indication of duplication, and at least print an appropriate clarification, it would be painfully obvious that they’d done so.
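
    For what it’s worth, a crude sketch of the data-comparison part might look like this (a toy approach, not any journal’s actual system): fingerprint the numbers reported in each paper, and flag pairs whose fingerprints mostly overlap.

        # Toy duplicate-data detector (invented approach, not an existing tool):
        # flag pairs of papers whose reported numbers largely coincide.
        import re

        def numeric_fingerprint(text):
            """All decimal numbers appearing in a paper's tables/results text."""
            return set(re.findall(r"-?\d+\.\d+", text))

        def overlap(a, b):
            """Jaccard similarity of two papers' numeric fingerprints."""
            fa, fb = numeric_fingerprint(a), numeric_fingerprint(b)
            return len(fa & fb) / len(fa | fb) if fa and fb else 0.0

        def flag_duplicates(papers, threshold=0.8):
            ids = list(papers)
            return [(ids[i], ids[j])
                    for i in range(len(ids))
                    for j in range(i + 1, len(ids))
                    if overlap(papers[ids[i]], papers[ids[j]]) >= threshold]

        # Invented snippets standing in for three papers' results sections:
        papers = {
            "paper_A": "HAMD fell from 21.4 to 9.8 (SD 5.2); pain score 4.6 to 2.3",
            "paper_B": "Pain improved from 4.6 to 2.3; HAMD 21.4 to 9.8 (SD 5.2)",
            "paper_C": "HAMD fell from 19.1 to 11.5 (SD 6.0)",
        }
        print(flag_duplicates(papers))  # [('paper_A', 'paper_B')]

    It could only ever be a first-pass filter for human eyes, of course.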

  6. The Biologista said,

    October 27, 2008 at 8:07 pm

    Just an algorithm that scans through Pubmed or some sort of centralised database of everyone’s raw data? Either might certainly make matters clearer. I’m sure some creative statistics/normalisation/unit changes could fool an algorithm though. And we’d be back to doing it by hand again.

  7. aidan said,

    October 27, 2008 at 10:54 pm

    … or, or as well as, a database of duplicate trials? Anyone could flag a matching pair, then maybe a few moderators for different research fields could OK it, and it goes in the database. Then you’ve got a list of studies to discount from a meta-analysis. And you could generate data on which topics, researchers, products, etc., do the most duplicate trials, and where they’re most published.
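
    A minimal sketch of that flag-then-moderate workflow, with every name and trial ID invented for illustration:

        # Minimal sketch of a flag-and-moderate duplicate registry
        # (structure and identifiers invented for illustration).
        from dataclasses import dataclass, field

        @dataclass
        class DuplicateRegistry:
            flagged: list = field(default_factory=list)   # (paper_a, paper_b, flagged_by)
            confirmed: set = field(default_factory=set)   # frozenset({paper_a, paper_b})

            def flag(self, paper_a, paper_b, user):
                self.flagged.append((paper_a, paper_b, user))

            def confirm(self, paper_a, paper_b):
                # A real system would record which field moderator approved the pair.
                self.confirmed.add(frozenset({paper_a, paper_b}))

            def discount(self, trial_ids):
                """Trials to drop from a meta-analysis: keep one of each confirmed pair."""
                drop = set()
                for pair in self.confirmed:
                    a, b = sorted(pair)
                    if a in trial_ids and b in trial_ids:
                        drop.add(b)
                return drop

        reg = DuplicateRegistry()
        reg.flag("duloxetine-JCP-2008", "duloxetine-pain-2008", user="clinpsyc")
        reg.confirm("duloxetine-JCP-2008", "duloxetine-pain-2008")
        print(reg.discount({"duloxetine-JCP-2008", "duloxetine-pain-2008", "other-trial"}))
        # {'duloxetine-pain-2008'}

    The moderation step is the important bit: an open flag list alone would fill up with false positives, which is presumably why you’d want field moderators in the loop.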

  8. pseudomonas said,

    October 28, 2008 at 10:25 am

    IMSoP: You mean, like spore.swmed.edu/dejavu/duplicate/ ?

  9. pseudomonas said,

    October 28, 2008 at 10:40 am

    (OK, to be fair the Deja vu system is only run on the Medline database, so only on abstracts and simple metadata, but it makes an interesting read, anyway).

  10. The Biologista said,

    October 28, 2008 at 2:47 pm

    Thanks for the link, pseudomonas. It’s a step in the right direction anyway…

  11. monoculus said,

    October 29, 2008 at 3:39 am

    This is really a failure of the review process. If the peer reviewers had had a look at the references, they might have caught the duplication prior to publication.

    This can be tough to catch when manuscripts are submitted concurrently to different journals, as looks to be the case here. However, reviewers can and should request to see manuscripts cited as “submitted” when doing their reviews.

  12. sean.salvador said,

    October 29, 2008 at 11:38 am

    kleptonat

    No, that is not salami slicing at all.
    Sorry, just thought I’d point that out (I’m a pedant like most in here. It’s like Americans with irony, that one).

  13. kleptonat said,

    November 5, 2008 at 1:55 am

    Sean

    Can you offer an alternative definition of Salami slicing perhaps?

  14. kleptonat said,

    November 6, 2008 at 11:02 pm

    Salami slicing and the least publishable unit.

    It’s a Wikipedia link, but it’s a good description.

    en.wikipedia.org/wiki/Least_publishable_unit

  15. sean.salvador said,

    November 21, 2008 at 1:57 pm

    Yes, it’s about taking that ‘least publishable unit’ and rounding all the numbers down. You then bank the unpublishable units. Imagine millions of bank accounts and transactions, billions even, every day, where all the small fractions of a penny are rounded down. That’s a lot of money, no? Now imagine banking them! That’s salami slicing as I know it.

  16. PsyStef said,

    April 2, 2012 at 5:13 pm

    I can see this was posted a couple of years ago; still, I was sufficiently shocked to look up whether there aren’t / weren’t any guidelines prohibiting this. And lo and behold: the ICMJE rather clearly formulated (in the Uniform Requirements for Manuscripts Submitted to Biomedical Journals) that all papers must contain …
    “[A] footnote on the title page of the secondary version [that] informs readers, peers, and documenting agencies that the paper has been published in whole or in part and states the primary reference. A suitable footnote might read: “This article is based on a study first reported in the [title of journal, with full reference].”

    If that’s missing, a set of actions can be taken with respect to submitted or published papers:
    “If redundant or duplicate publication is attempted or occurs without such notification, authors should expect editorial action to be taken. At the least, prompt rejection of the submitted manuscript should be expected. If the editor was not aware of the violations and the article has already been published, then a notice of redundant or duplicate publication will probably be published with or without the author’s explanation or approval.”
    [source: www.icmje.org/publishing_4overlap.html]

    I think we probably don’t need a database — if these guidelines are more stringently enforced, that’ll hopefully be sufficient. (Although some more recent cases suggest to me that this is a problem with ‘a law not enforced…’)