As someone who is nerdishly fascinated by the systematic analysis of health risk data – check me out, ladies – I sometimes look at the health pages and try to work out what they’re supposed to do, what kind of information they offer, and for whom.
This week, for example, you’ll have found: “Teenager helps his twin brother by donating a piece of his back”; “In pain? Take one Botticelli three times a day”; “Taking antibiotics to prevent premature birth can ‘increase risk’ of cerebral palsy”; “Woman dies in agony after eating toadstools”; “Why drinking water to shed weight is a waste of time”; “More men suffering from ‘Manorexia’, health experts warn”; and “Cheap bone-strengthening drug cuts chance of breast cancer relapse ‘by a third’”.
Are people around the nation carefully clipping these stories out, and pasting them in indexed box files, ready for the day when they develop the condition in question, or encounter the opportunity to modify an unusual health risk exposure? And how will they know if the data they are gathering is complete, or just an arbitrary patchwork of newsworthy and self-serving information, multiply filtered through a range of imperfect agents with diverse interests and allegiances? In fact, how does anybody know that?
This week the media collectively ignored a study looking at that exact question, but in a high-stakes environment. It was also one of the most important papers to be published this year: fewer than one in five trials on cancer treatment, it turns out, actually gets published; the rest are simply missing in action. And it gets worse: only 5.9% of industry-sponsored trials on cancer treatment get published. Later, it will get worse again.
For decades now people have known that negative results tend not to get printed in academic journals, and it can happen for all kinds of reasons (e.g. here, though there are many more): they’re not newsworthy, so journal editors reject them; they’re not much fun to write up, and they don’t look good on your CV (although they should get plaudits); and they might not flatter your idea or product, so you might not want them out there.
One bright suggestion, which I bang on about incessantly, was that all clinical trials should be registered before they begin: then people can at least stand a chance of noticing if and when a trial goes missing in action. This took about 20 years to be put into practice, half-heartedly, and it only solves half the problem. The other half remains: who will chase up the missing studies?
Scott Ramsey from the Fred Hutchinson Cancer Research Center in Seattle and John Scoggins from the University of Washington took on the role of investigative journalists. In a world where not one person from the world of alternative therapies can bring themselves to criticise even vitamin pill entrepreneur Matthias Rath for his dangerous practices in South Africa – indeed some still actively support him, as we may soon see – critical self-appraisal is simply business as usual in academia.
They went to clinicaltrials.gov, the trials register run by the US National Institutes of Health, and found all the trials on cancer: 2,028 in total. Then they went to Pubmed.gov, the searchable database containing almost all the proper medical journals out there.
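If you wanted to repeat that cross-check yourself, the crudest version is simply to take each trial’s registration number and ask PubMed whether any paper mentions it, via the free NCBI E-utilities interface. Here is a minimal sketch of that idea – not the authors’ actual method, and the registration numbers in the list are made up for illustration:

    import json
    import urllib.parse
    import urllib.request

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_hits(nct_id):
        """Count PubMed records mentioning a trial registration number.

        The NCT id is searched as a plain term; a stricter query might use
        PubMed's secondary source ID field, but plain text is enough for a
        rough tally.
        """
        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": nct_id,
            "retmode": "json",
        })
        with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
            data = json.loads(resp.read().decode())
        return int(data["esearchresult"]["count"])

    # hypothetical registration numbers standing in for the full clinicaltrials.gov list
    trial_ids = ["NCT00000125", "NCT00001372"]
    unpublished = [t for t in trial_ids if pubmed_hits(t) == 0]
    print(f"{len(unpublished)} of {len(trial_ids)} trials have no PubMed record")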
The overwhelming majority were missing.
Only 17.6% were published at all, but better than that, 64.5% of those reported positive results. How impressive. Meanwhile only 5.9% of industry-sponsored trials were published, but in those 5.9%, golly did they do well: 75.0% gave positive results. Just lucky fingers I guess.
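To turn those percentages into rough absolute numbers – my own back-of-envelope arithmetic, not figures quoted in the paper:

    total = 2028                          # registered cancer trials
    published = round(total * 0.176)      # 17.6% published          -> ~357
    positive = round(published * 0.645)   # 64.5% of those positive  -> ~230
    print(f"published: ~{published}, positive: ~{positive}, missing: ~{total - published}")

That is roughly 1,700 trials unaccounted for.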
We may never know what was in all that unpublished data, but for all that this study is nerdish and lacking in newsworthiness, those missing numbers will cost lives in quantities larger than any emotive health story covered in any part of any newspaper this week. Doctors need negative data to make rational prescribing decisions. Academics need to know which ideas have failed, so they can work out why, what to study next, and which ideas to abandon. In their own way, academic journals are exactly as selective as the tabloid health pages. God help us all.
Sili said,
September 20, 2008 at 12:49 pm
Thank you for ruining my mood. (I happen to be on SSRIs, funnily enough.)
What a downer after the good news.
So how do we go about getting this data public? Insist on something like arXiv for medicine? Refusal to allow new trials to take place unless old trials have been submitted to such a MedarXiv in a timely fashion? Say three years after end of trial?
alan_b said,
September 20, 2008 at 3:44 pm
What does this mean for meta-analyses? If most of them are done from published studies, and the authors have no idea of what unpublished data exists, do they have any validity at all?
Robert Carnegie said,
September 20, 2008 at 5:36 pm
I want to unpack the claim that unpublished research “will cost lives in quantities larger than any emotive health story covered in any part of any newspaper this week. Doctors need negative data to make rational prescribing decisions. Academics need to know which ideas have failed, so they can work out why, what to study next, and which ideas to abandon” – which is the basis for the question: “Is the Daily Mail any worse than your average academic journal?” That is, in Quality-Adjusted Life Years.
Well, the Daily Mail has a much larger circulation than most medical journals. It touches more lives. So if a piece in the newspaper is the cause of a proportion of the population getting sick or dead, it probably has a greater impact.
It also comes out once a day, but we probably should compare issue to issue. One annual journal to one day’s newspaper.
And… surely if a prototype medical treatment is effective, those are the ones that will be published; and for those that aren’t, silence speaks volumes? (Well, not “volumes”. Just “You don’t hear of anyone getting this to be effective, what does that tell you?”)
projektleiterin said,
September 20, 2008 at 6:16 pm
You’re sooo hot. 😀
Ok, but still an interesting post. 🙂
evidencebasedeating said,
September 20, 2008 at 6:17 pm
Not all sections of the media take the lemming approach. The brilliant ‘Body and Soul’ section of The Times every Saturday represents IMHO some of the best health reporting around. See the wonderful Mark Henderson today for example, on Rath:
www.timesonline.co.uk/tol/life_and_style/health/article4786723.ece
classy
twaza said,
September 20, 2008 at 9:43 pm
Another nerdish comment about unpublished trials.
There is a simple, free solution to the problem of unpublished studies (and secret data).
Trials have to pass ethical committee review before they can start. And ethical committees require the trial’s protocol to state that the study will obtain informed consent from participants.
Ethical review committees should require the trial’s protocol to state that the informed consent document will explain that (i) the results will be published within a reasonable time, and (ii) the raw data will be made available (appropriately anonymized) to any researcher who requests it.
mjs said,
September 21, 2008 at 6:11 pm
Sorry this is off topic…
Muscleman, is the Mechs of Dev database the same as what is available over at MGI?
muscleman said,
September 21, 2008 at 8:04 pm
Pass, though all that was before the push to knock out every mouse gene. I’m a bit put out about the project, since the one thing I am good at is not one of the things being analysed from the knockouts.
Writersam said,
September 22, 2008 at 9:17 am
The FDA has ruled that any trial registered at clinicaltrials.gov MUST be disclosed on that same website within 1 year of the last subject being examined or receiving the intervention. Delay is only allowed if there is a threat to national security – waiting to publish in a peer-reviewed journal is not an excuse, and anyone found in breach will be fined and named and shamed. They are currently working on a suitable format for reporting these results which can be used for all trials, and, interestingly, journal editors appear to be slightly confused as to how this might affect ‘prior publication’. (There was an article about this earlier in the year in the BMJ.)
Ben Goldacre said,
September 22, 2008 at 10:03 am
i think all data should be published, even if it’s terrible, and the reader can make their mind up. peer review is no barrier to poor quality work being published, and you can’t say it’s unacceptable to publish wonky science just because some morons will overinterpret it. people need to get over the idea that just because something is published you can automatically take the conclusions as read. that’s dumb, to differing degrees, on ANY paper.
muscleman said,
September 22, 2008 at 10:43 am
You are right Ben, it is dumb, but that is what the practitioners of Woo do with science. They cherry-pick papers, often very old ones, and use them as fig leaves to impress the public. You and I know that no paper has a stranglehold on the truth, but that is not how the public see it when casting about for any port in a storm. As the Wakefield saga showed, aided and abetted of course by journalists ignorant of science and keen to sell a big story.
The whole point of peer review is that bad papers shouldn’t get published, which is also why we have ratings for journals. If we publish crap as though it were normal business then it will bring science into disrepute. We therefore need a different model wrt the ‘missing’ trials. Perhaps their data should be available, but that does not mean it should be published in papers.
Ben Goldacre said,
September 22, 2008 at 10:51 am
sure, i think it might need to be a slightly different publication model than journals – although as i said there are some terrible journals already, and as you say, they are being quoted merrily by woo merchants.
as a basic point, it is clearly unacceptable that science should not be published simply because it might be misunderstood by journalists or misrepresented by business people from the quack industries.
Lafayette said,
September 22, 2008 at 12:15 pm
Having worked in a very minor capacity on a clinical study that had a real problem with recruitment and drop-out, I could imagine it would indeed be beneficial for trials that fail to jump through their statistical hoops to be published. Scientific protocol is all well and good, but trial management ought to be on the agenda more than it is at present. Being able to weigh up recruitment methods, successful and failed, would be terribly useful.
In terms of woo merchants cherry-picking studies, with the development of bullshit journals being set up to print any nonsensical study willing to pay for entry and face exclusively positive peer review, I think that particular ship has sailed anyway.
Joe Dunckley said,
September 22, 2008 at 1:20 pm
Negative results can be published in BMC Research Notes or J Neg Results in Bio Med! Of course, in an ideal world, it shouldn’t even be the case that dedicated journals are needed for negative results; apparently some people still don’t recognise their value, though.
I believe some (most?) trial registers are set up to allow public access to data and results in the register (though, as you say, it’s not enforced). Additionally, it is still possible to retrospectively register trials – thus, even if your preferred journal demands that trials be registered, you can go ahead and register after you’ve guaranteed your positive results. Under the current system, I guess that for many researchers trial registration is just another hoop to jump through when submitting their work for publication.
There are all sorts of initiatives to move trial registration to the next stage, and mandate publication of results, but I’m not sure how likely they are to succeed…
duboing said,
September 22, 2008 at 1:54 pm
In environmental data management we are currently discussing the feasibility of assigning “citation” ratings to data sets. A contribution of data to the scientific community is a contribution to science. A negative result to one researcher may be of great value to another with different insights, or when added to a wider/more extensive data set.
My employer funds environmental research on the proviso that all resulting data is lodged with one of its nominated data centres. Our data holdings are freely available to the scientific and general public, and subject to charge for commercial users, but with measures in place to protect the intellectual property of the data provider.
We hold a pretty big stick, in that future funding may be withheld from scientists who fail to comply with data submission. I wonder how we would police the submission of data paid for with independent money..?
DrJ said,
September 22, 2008 at 2:13 pm
What is called for here is a new series of Journals, perhaps we could persuade the nature publishing group to start ‘Nature Negative Results’. This would be an online only free to access deposit of all the clinical studies and PhD projects which don’t make it into the main stream journals. You could even dispense with peer review all together for this journal and tag an appropriate caveat that the contents should be treated with some skepticism.
Diversity said,
September 22, 2008 at 5:13 pm
I hit this problem when trying to trace something through clinicaltrials.gov a few weeks ago. It struck me as downright careless and wasteful not to have a searchable site somewhere which allows one to see for each trial:
A result obtained/not obtained
For “a result obtained”:
Promising/Unclear/Unpromising
An academic contact for further information.
Something like that would avoid the deadweight cost of publishing a lot of useless information in academic form; give the principal steer from each trial easily and quickly; and let those who need to, track back to the work that is relevant to them (see the sketch below).
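A record like that could be tiny. A purely illustrative sketch in Python, with the field names invented for the example:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Outcome(Enum):
        PROMISING = "promising"
        UNCLEAR = "unclear"
        UNPROMISING = "unpromising"

    @dataclass
    class TrialSummary:
        registry_id: str            # e.g. the NCT number
        result_obtained: bool       # did the trial produce a result at all?
        outcome: Optional[Outcome]  # only meaningful when result_obtained is True
        contact: str                # academic contact for further information

    record = TrialSummary("NCT01234567", True, Outcome.UNPROMISING,
                          "trials-office@example.ac.uk")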
peterd102 said,
September 22, 2008 at 5:24 pm
Thinking about it, my comment before is solved by Ben’s idea of registering trials before they start, with all their intentions stated: bad trials are not allowed to be performed, thus we only have good studies.
It is a shame that between the public and the scientists there is an inefficient middleman – the media.
What there needs to be is more regulation of the media to stop them printing incorrect facts, rather than not publishing certain research, as I previously stated.
peterd102 said,
September 22, 2008 at 5:26 pm
Oh btw about the title of this page. Which average are you using Ben? lolz
Finger waggler said,
September 22, 2008 at 9:03 pm
Erm, Ben, the Grauniad seems to be trying to out-nonsense the Daily Hate?
www.guardian.co.uk/science/2008/sep/21/medicalresearch.health
Did the author of this ‘most interesting’ piece think to ask your opinion of it? Or is it all true? (I think a proper control group, as opposed to no treatment at all, seems to be required…)
I think it’s time you sent Mr Campbell the sort of email that would make Giles Coren blush…
muscleman said,
September 22, 2008 at 11:13 pm
Finger Waggler, the Grauniad report says it is a meta-analysis, hence no description of the methods or a control group. It also says at the end that it acts to reduce stress, a good placebo-type response, and I doubt Ben would have any problem with that conclusion, and neither would I. It doesn’t mean that acupuncture has real activity, only that it is an acceptable placebo.
Note that getting women trying to get pregnant to ‘take’ anything will be inherently problematic so acupuncture is perhaps a good way to achieve stress relief. Probably massage would have the same effect. Though it might be a bit too hands on for some people so less acceptable.
pharmadefender said,
September 22, 2008 at 11:30 pm
New to the debate, and I am sure that my chosen screen name will not inspire a warm welcome, but this column and these postings seem to leave out an important element in the debate about clinical trial registration and publication.
Ben chose to highlight that only 5.9% of industry-sponsored trials were published, and that among those nearly three quarters reported positive results, the highest of all. You further intimated that this can impact prescribing decisions.
Nearly 82% of the 2,028 trials included in Ramsey and Scoggins’ analysis were Phase I or II, neither of which should necessarily influence prescribers’ behaviours, as it is much less likely that these trials will include compounds that have marketing authorisation. Frustratingly, the article did not include a cross-tabulation of Sponsor type by Phase, which would be interesting to look at.
In my view, I don’t believe private-sector, for-profit companies are obliged to publish in a PubMed-citeable reference the results of all trials including those where an investigational compound has failed and the trial does not progress, as this would represent an additional cost burden on a pursuit where they will not have a commercial return. (That’s not to say I don’t believe that all clinical trials should actually be reported and shared in a free and easy-to-access way.)
I’m curious to know what your view is and whether you feel this is something you might raise in a future column.
allshallbewell said,
September 23, 2008 at 12:00 am
Ben,
“you can’t say it’s unacceptable to publish wonky science just because some morons will overinterpret it”
Not sure ‘morons’ is a good word in this context. You’re describing people deliberately skewing the message in their favour (i.e. smart), or people believing such distortions.
A lack of morals or education (or both) doesn’t make someone a moron.
Finger waggler said,
September 23, 2008 at 8:12 am
It may be a meta-analysis, but the ‘methods’ are described as:
Researchers led by Ying Cheong, from the reproductive medicine unit at the University of Southampton and the city’s Princess Anne hospital, concluded that ‘acupuncture around the time of embryo transfer achieves a higher live birth rate of 35 per cent compared with 22 per cent without active acupuncture’.
The ‘proper’ control would have been sticking the pins in not-quite-the right-place. But, of course, the ‘researchers’ would then have known who was control vs treatment.
Women undergoing IVF are highly vulnerable to this sort of nonsense, and it seems rather unfair that potentially large numbers of them will be ‘pin-cushions’ in the name of placebo.
Finger waggler said,
September 23, 2008 at 9:36 am
I realise that Cochrane reviews are considered of higher quality, and am somewhat vexed that such trials could be included. I assume that trials covered in Cochrane would be required to have rigorous controls. If you are correct that quasi-random needle sticking is as ‘good’ as acupuncture, then I would find it very surprising that Cochrane would not reject such trials. Can this be good science? Objective reviewers would surely know a placebo, adequate controls, and suitable study design?
I’m missing something here… and am too lazy to read the primary papers…
Alethea said,
September 23, 2008 at 1:22 pm
For those who did not spot the link to the original commentary, there you go.
You cannot force someone to put their negative results into a journal-publishable form.
You can certainly force them to publicly report their results on a registered clinical trial, by ensuring that anyone involved may no longer participate in any other clinical trials until such reporting has been performed (if it is delinquent).
“ClinicalTrials.gov currently has 61,995 trials with locations in 157 countries.” (Wow. That is a lot more than the 44,232 listed at the time of the cited analysis in September 2007.) If registration with ClinicalTrials has become the mandatory hoop, perhaps it is important to the public to simply ensure that trial designers finish their jump by reporting their results, published or not, in the same database?
Confounded kids said,
September 23, 2008 at 4:35 pm
I’m guessing that there are a lot of legitimate reasons for registered trials not getting published. Failure to recruit is a big one, otherwise we wouldn’t have had over a decade’s worth of government money pumped into the UKCRN and related institutions which exist purely to stop trials failing through poor recruitment.
I think that, these days, most people working outside of Pharma company marketing departments might acknowledge, in their heart of hearts, that negative and equivocal results should be published. Many use journal editors’ intolerance of boring studies as an excuse not to (and need to be referred to Joe Dunckley’s post above).
I think that more people would deny that failed studies should be published – and maybe a campaign is needed for failed studies, in the same way as we needed one for studies showing negative and equivocal results. They’re unlikely to give much evidence about efficacy, but they might give a lot of data about safety, compliance and clinician/patient confidence.
quietstorm said,
September 23, 2008 at 8:37 pm
I’ve been hoping and praying for a “Journal of Null Results” ever since I became a scientist…
In physics, it is unlikely that anyone will come to harm as a result of experimental results not being published, but I can’t help but think it would prevent a huge waste of taxpayer money if all research which produced negative results was published.
For example, you use a particular mathematical treatment to go through a difficult, nonlinear problem. In the course of working through the problem, you discover that it violates one of your original assumptions. You think, “damn, I’ve spent 6 months on that, and there was no way of spotting the violation until I’d worked through it”, yet, unfortunately, most journals wouldn’t touch a write-up of this failed treatment with a bargepole.
After three years you are sent a research proposal from another country to review which asks for a hundred thousand pounds to use exactly the same theoretical treatment that you discovered was invalid.
My point is that scientific papers are not just used to amass and interpret evidence, but can also be used to establish whether future research directions are worth pursuing. It is so difficult to decide how to use public money to fund science, I feel like the people making the decisions could do with all the help they can get.
heavens said,
September 23, 2008 at 10:52 pm
About five years ago, I was well into designing a preclinical study when I happened to chat with a senior (clinical) researcher. He asked what I was doing, and I gave him the two-sentence version, and named the fascinating study it was based on.
“Oh,” he said. “Nobody’s ever been able to replicate his work.”
I’ve shelved it for now, but I still don’t know what I’ll do in the end. Perhaps we’ll be the first to replicate it? I’ve talked to the original researcher (now working for a different institution with a different target), and he’s confident that we can do it. I can’t find anyone that will admit to having tried to replicate the work, but the fact is that the study was published 15 years ago, and people move. It sure would have been nice if people would put a short note in J Neg Results (or equivalent): Tried this, didn’t work, maybe it’s us, maybe it’s not. First-year chemistry students can write a report like this; why can’t we?
hannahp said,
September 24, 2008 at 8:18 pm
May I make a few still-more nerdish comments about the paper in The Oncologist? (I’m a married woman so it’s probably OK.)
“Then they went to Pubmed.gov, the searchable database containing almost all the proper medical journals out there.”
Mm. I’m an NHS librarian, and like many of my colleagues I tend to tell doctors that to be fairly sure of retrieving the majority of articles of interest, they need to search at least EMBASE as well as MEDLINE (MEDLINE is, mostly, the database that Pubmed is querying). See e.g. www.cfpc.ca/cfp/2005/Jun/vol51-jun-research-1.asp. So I’d be surprised if a Pubmed search retrieved every trial that was published.
OK, so the Pubmed search was a sensible back-up for their other measure of publication, the publication record in the trials registry (though this is not clear in their abstract). But while we’re in the business of niggling at methodology…
Trials were counted as “published” in Pubmed only if the trial registration number was found in the search. It’s easy to hypothesise publications which missed this number off; for example, journal articles which haven’t yet been fully indexed in MEDLINE (which can take months after publication) tend to have just a basic “in process” record, which might not hold the trial registration number.
There’s also the issue of time from registration to publication. The trials register began in 1999. Many cancer trials go on for five years plus (I don’t have a reference for that, it’s an unsubstantiated hunch based on the protocols that come through my local research committee). Chances are a number of completed trials finished fairly recently and just haven’t been published yet – it can take a fair old time, even if you’re swotty about writing up quickly & you’ve got a paper which makes a good story.
I don’t actually disagree for a moment with the conclusions drawn in this paper or with Ben’s comments. Good meta-analyses often do funnel plots to look for publication bias and they can be magnificently skewed. It is a fantastically important issue. But if this paper had been picked up in the mainstream media, could it have been a slightly over-sensationalised story about sensationalism?
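For anyone who hasn’t met a funnel plot: it is just each study’s effect estimate plotted against its precision. A toy simulation – invented numbers, purely to show the lop-sided shape you get once small negative studies go missing:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    true_effect = 0.3                    # assumed underlying treatment effect
    se = rng.uniform(0.05, 0.5, 400)     # study precision varies with size
    observed = true_effect + rng.normal(0, se)

    # crude publication filter: small studies only get written up when "significant"
    z = observed / se
    published = (se < 0.15) | (z > 1.96)

    plt.scatter(observed[published], se[published], s=10, label="published")
    plt.scatter(observed[~published], se[~published], s=10, alpha=0.3, label="unpublished")
    plt.gca().invert_yaxis()             # precise studies at the top, as usual
    plt.axvline(true_effect, linestyle="--")
    plt.xlabel("observed effect")
    plt.ylabel("standard error")
    plt.legend()
    plt.show()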
Confounded kids said,
September 25, 2008 at 10:40 am
Hannahp – you’re right about time-lag influencing publication. Unfortunately there’s another dimension to it which does introduce bias. A Cochrane review of this issue found that trials with positive results (i.e. with statistically significant results in favour of the experimental arm of the trial) tended to be published in approximately 4 to 5 years. Trials with null or negative results (i.e. not statistically significant, or statistically significant in favour of the control arm) were published after about 6 to 8 years.
www.cochrane.org/reviews/en/mr000011.html
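A crude simulation shows how that gap distorts what a reviewer sees at any given moment – toy assumptions, purely illustrative, using the publication delays quoted above:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000

    # toy assumptions: half the trials are "positive"; positive trials publish
    # 4-5 years after completion, null/negative trials 6-8 years after
    positive = rng.random(n) < 0.5
    delay = np.where(positive, rng.uniform(4, 5, n), rng.uniform(6, 8, n))
    completed = rng.uniform(0, 10, n)    # completion dates spread over a decade

    for review_year in (5, 10, 15):
        visible = completed + delay <= review_year
        share = positive[visible].mean() if visible.any() else float("nan")
        print(f"review at year {review_year:>2}: {visible.sum():4d} published, "
              f"{share:.0%} of them positive")

Run it and the early reviews are dominated by positive trials; only much later does the published record drift back towards the true 50/50 split.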
The result is that the systematic reviews that inform, for example, NHS purchasing policy, are often heavy on published interim analyses with positive results for surrogate outcomes (disease or progression-free survival) and light on mature trials with equivocal results.
This is a particular issue where the disease has a long natural history. For instance, recurrence of successfully treated breast cancer can happen over twenty years or more. So making policy on the basis of a one-year interim analysis (as happened with the NHS, Herceptin and the HERA trial), rather than waiting to see what happens at, say, five years, might be seen by some as jumping the gun. Doubly so if there’s another trial out there making the same comparison but with negative results which is difficult to find because it was only ever presented in a poster at a conference.
www.pharmac.govt.nz/2008/05/23/Accepted%20Authors%20Manuscript%2008CMT0415MetcalfeTrastuzumabDMc.pdf
If you want a really stunning example of how wrong things can go, especially when early analyses involving small numbers of events are involved, check out Wheatley and Clayton’s paper on the AML-12 trial. Early positive results in favour of the experimental intervention turned to harmful results later on.
www.ncbi.nlm.nih.gov/pubmed/12559643
In the eyes of the industry, the PR companies, and the patients and publications who lobby on their behalf, an interim analysis with negative results is just ‘immature’; an interim analysis with positive results is proof that the NHS should be buying their drugs.
Adam Zacks said,
October 20, 2008 at 9:14 pm
Just had to post on this. Quentin Letts, in his Daily Mail article on ‘The 50 People Who Wrecked Britain’, places Richard Dawkins in 30th position, describing him as follows:
‘Anti-religionist Dawkins, the best-known English dissenter since Darwin, is the merciless demander of provable fact.’
www.dailymail.co.uk/news/article-1070535/QUENTIN-LETTS-The-50-people-wrecked-Britain–2.html
Oh no! How dare he! Don’t people see that demanding provable fact just gets in the way of selling papers.
Does Mr Letts think that provable fact wrecks Britain? Perhaps he would like us all dead from bubonic plague through lack of antibiotics. He’s actually playing a clever game. By claiming that the demand for provable fact is damaging, he gets to have his cake and eat it – or, more applicably, scare people out of MMR protection for their children and then dodge any responsibility for it. Wow! I wish I could live in his twisted world. But sadly, an ability to think critically got to me.