Ben Goldacre, The Guardian, Saturday 15 January 2011
Sometimes something will go wrong with an academic paper, and it will need to be retracted: that’s entirely expected. What matters is how academic journals deal with problems when they arise.
In 2004 the Annals of Thoracic Surgery published a study comparing two heart drugs. This week it was retracted. Ivan Oransky and Adam Marcus are two geeks who set up a website called RetractionWatch because it was clear that retractions are often handled badly: they contacted the editor of ATS, Dr L Henry Edmunds Jr, MD, to find out why the paper was retracted. “It’s none of your damn business,” replied Dr Edmunds, before railing against “journalists and bloggists”. The retraction notice is merely there “to inform our readers that the article is retracted”. “If you get divorced from your wife, the public doesn’t need to know the details.”
ATS’s retraction notice on this paper is uninformative and opaque. The paper was retracted, we are told, “following an investigation by the University of Florida, which uncovered instances of repetitious, tabulated data from previously published studies.” Does that mean duplicate publication, two bites of the cherry? Or maybe plagiarism? And if so, of what, by whom? And can we trust the authors’ other papers?
What’s odd is that this opacity is not uncommon. Academic journals make heavy demands of academic authors, with explicit descriptions of every step in an experiment, clear references, peer review, and so on, for a good reason: academic journals are there to inform academics about the results of experiments, and to discuss their interpretation. Retractions form an important part of that record.
Here’s one example of why. In October 2010 the Journal of the American Chemical Society retracted a 2009 paper about a new technique for measuring DNA, explaining that it was because of “inaccurate DNA hybridization detection results caused by application of an incorrect data processing method”. This tells you nothing. When RetractionWatch got in touch with the author, he explained that they had forgotten to correct for something in their analysis, which made the technique they were testing appear more powerful than it really was: in fact, it is no better than the original process it was proposed to replace.
That’s useful information, far more informative than the paper simply disappearing one morning, and it clearly belongs in the academic journal where the original paper appeared, not in an email to two people from the internet running an ad hoc blog that tracks down the stories behind retractions.
This all becomes especially important when you think through how academic papers are used: that JACS paper has now been cited 14 times, by people who believed it to be true. And we know that even the simple fact of a retraction often fails to reach the consumers of that information.
Stephen Breuning was found guilty of scientific misconduct in 1988 by a federal judge – which is unusual and extreme in itself – so most of his papers were retracted. A study last year chased up all the references to Breuning’s work from 1989 to 2007, and found over a dozen academic papers still citing his work. Some discussed it as a case of fraud, but around half – in more prominent journals – still cited his work as if it were valid, nearly two decades after its retraction.
The role of journals in policing academic misconduct is still unclear, but explaining the disappearance of a paper you published is surely a bare minimum. Like publication bias, where negative findings are less likely to be published, this is a systemic failure across all fields, so it has far greater ramifications than any single, eye-catching academic cockup or fraud. Unfortunately it is also a boring corner of the technical world of academia, so nobody has been shamed into fixing it. Eyeballs are an excellent disinfectant: you should read RetractionWatch.
mike stanton said,
January 15, 2011 at 1:12 am
So what should the Lancet and other journals be doing now about all the papers they published by Andrew Wakefield?
AndrewKoster said,
January 15, 2011 at 10:25 am
Very good point. Just as the article should make clear why the contribution is valuable, if it later turns out a mistake was made, the same journal should be obliged to notify readers what the mistake was and, preferably, why it invalidates the findings.
ofoi said,
January 15, 2011 at 12:14 pm
This is odd. There are very clear guidelines on this. See, for example, the UK Research Integrity Office:
tinyurl.com/329e73t
The problem here is journal editors not applying existing ethical guidelines, not their lack of existence.
VoxHortus said,
January 15, 2011 at 7:39 pm
Transparency in the reasoning behind retraction would also be useful for not throwing the baby out with the bath water. If the retraction was provoked by one specific issue with the paper, then readers could a) learn from the situation, and b) glean what other useful content or even results might remain that are germane to their own work.
Granted this would be a more likely scenario in the event of a correction vs. a retraction, but nevertheless, I’m still a fan of transparency, even if it is Transparency Theater. The NOYB comment smacks of embarrassment.
Chris Nedin said,
January 16, 2011 at 12:47 am
Maybe there’s a need for journals to maintain an online database of retracted papers. Authors would then have to consult these databases, as part of the submission process, to confirm that their references are still appropriate.
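A minimal sketch of what such a submission-time check might look like, assuming a hypothetical CSV export of the retraction database with a ‘doi’ column; the file name, helper functions and DOIs here are all invented for illustration:

```python
import csv

def load_retracted_dois(path):
    """Read a set of retracted DOIs from a CSV export with a 'doi' column."""
    with open(path, newline="") as f:
        return {row["doi"].strip().lower() for row in csv.DictReader(f)}

def flag_retracted_references(reference_dois, retracted):
    """Return the manuscript references that appear in the retraction set."""
    return [doi for doi in reference_dois if doi.strip().lower() in retracted]

retracted = load_retracted_dois("retractions.csv")  # hypothetical database export
manuscript_refs = ["10.1000/example.1234", "10.1000/example.5678"]  # invented DOIs
for doi in flag_retracted_references(manuscript_refs, retracted):
    print(f"WARNING: reference {doi} has been retracted; check before submitting")
```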
chris lawson said,
January 16, 2011 at 11:13 am
The other important behaviour change that needs to take place is that retracted papers need to stop being wiped from the journal’s archives. It’s not like the journal can magically unpublish the paper — it will exist in multiple copies in libraries and on researchers’ hard drives around the world. Far better to mark the paper with a big Retraction Alert, plus a meaningful summary of the reasons for the retraction, plus an edit of the abstract to inform those people who only read (or only have access to) the abstract.
Tom Morgan said,
January 16, 2011 at 4:50 pm
I second what Chris Lawson says. It should be like case law in Lexis or Westlaw: if case A v B gets overruled, the case summary still exists, but it’s marked as ‘overruled’, with a link to the case X v Y that overruled it. If a paper gets retracted, it should link to some kind of retraction notice.
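A toy sketch of the citator-style linking described above: the retracted record stays in the archive, flagged with its status and a pointer to the notice that overrules it. The record structure, statuses and identifiers here are all invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArchiveRecord:
    doi: str
    title: str
    status: str = "active"               # "active", "corrected" or "retracted"
    superseded_by: Optional[str] = None  # DOI of the notice that overrules it

# The retracted paper is never wiped; it stays in the archive, marked and
# linked, so anyone looking it up sees the retraction (entries are made up).
archive = {
    "10.1000/original.2004": ArchiveRecord(
        "10.1000/original.2004", "Original study",
        status="retracted", superseded_by="10.1000/notice.2011"),
    "10.1000/notice.2011": ArchiveRecord(
        "10.1000/notice.2011", "Retraction notice, with reasons given in full"),
}

def lookup(doi):
    record = archive[doi]
    if record.status == "retracted":
        print(f"'{record.title}' is RETRACTED - see {record.superseded_by}")
    return record

lookup("10.1000/original.2004")
```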
Snout said,
January 18, 2011 at 10:21 am
Sorry, was that “instances of repetitious, tabulated data from previously published studies” or “instances of repetitious, fabulated data from previously published studies”?
I haven’t got my glasses on.
vintermann said,
January 19, 2011 at 10:08 am
It was in relation to a completely different field, but I saw a good quip about this topic on a mailing list today:
“There are three classes of researchers, listed from best to worst:
1) Undergraduates. These are the best researchers, because they must write a paper regardless of whether they succeed or fail. So you always find out what happened.
2) Other academics. Generally, these researchers only publish when they have a success. If we are lucky, we might find out about something that failed, but only if there is a success to report.
3) Non-academics. We will never tell our secrets. An academic will have to discover them independently, and then we may insinuate that we tested that idea years ago, but discarded it because we found something better.”
So medicine isn’t the worst field 🙂
pootassium said,
January 24, 2011 at 4:21 pm
No surprise it was a surgery journal. Specialty journals tend not to use quality checklists or guidelines, and often report outcomes incompletely. Pick up a medical specialty journal and apply the Jadad scale: it’s shocking, especially since studies have found 30–50% exaggeration of treatment efficacy when results from lower-quality trials are pooled.
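For readers who haven’t met it, the Jadad scale is a simple 0–5 quality score for trial reports; here is a rough sketch of the standard scoring (the example trial and its answers at the end are invented):

```python
def jadad_score(randomised, randomisation_appropriate,
                double_blind, blinding_appropriate,
                withdrawals_described):
    """Standard 0-5 Jadad score. The *_appropriate arguments are True
    (method described and appropriate), False (described but
    inappropriate) or None (method not described at all)."""
    score = 0
    if randomised:
        score += 1
        if randomisation_appropriate is True:
            score += 1
        elif randomisation_appropriate is False:
            score -= 1
    if double_blind:
        score += 1
        if blinding_appropriate is True:
            score += 1
        elif blinding_appropriate is False:
            score -= 1
    if withdrawals_described:
        score += 1
    return score  # 3 or more is conventionally taken as higher quality

# e.g. a trial described as randomised (method not reported), double-blind
# with an appropriate method, and reporting withdrawals, scores 4:
print(jadad_score(True, None, True, True, True))
```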
For guidelines, there is the Consolidated Standards of Reporting Trials (CONSORT) statement. Its use has been associated with improved RCT reporting, but several studies still demonstrate suboptimal reporting, and most trials don’t fully adhere to the checklist.
The common errors are: inadequate allocation concealment, inappropriate use of composite variables, omission of non-significant findings, incomplete outcomes, no 95% confidence intervals for the primary outcomes, inadequate randomisation techniques, insufficient blinding, small samples, and failure to report allocation concealment, whether a placebo was used, and who was blinded.
The January issue of The Lancet describes how they are going to use CrossCheck to screen for plagiarism, duplication and text recycling. They also have Protocol Reviews, which aim to assess protocols and to encourage good design of clinical research. The BMJ now requires a checklist to be submitted with trials, indicating where in the manuscript specific details are reported.
Research is, in a way, a free market. And it seems that poorer-quality articles are more often produced by groups made up only of medical doctors.
pnigos7 said,
January 24, 2011 at 6:06 pm
Surgical journals are different.
In 2003 the monthly publication “Current Problems in Surgery” published an entire ninety-three-page issue on the topic “Ethical Issues in Surgical Treatment and Research” (sic). In 2007 the entire issue was retracted from the web by the editor, with only a cryptic note.
It turns out that most of this article was pure plagiarism. You can’t find the original on the web, but copies are, of course, available in libraries throughout the world.
If they ever award a Nobel prize for plagiarism, I shall nominate this publication. How can you top this kind of scientific misconduct in a review of ethics?
Tiddles said,
January 26, 2011 at 8:17 am
Journal editors should have a responsibility to notify the NLM whenever they retract or remove a paper, or when there are post hoc corrections. Once this had been done, the NLM could ensure that the papers still came up in Medline searches, complete with a notice describing why each paper was no longer available. Researchers could then be made aware of the bit of track record that people like to cover up!