Saturday February 14 2009
This column is about tainted medical research, not MMR. Now don’t get me wrong: it’s still an interesting week to be right about vaccines. On Sunday, Brian Deer at the Times claimed that the medical cases in Andrew Wakefield’s 1998 paper were altered before publication. The measles figures came out: they’re up by 2000% over the last 7 years, and rising exponentially. On Friday, the Autism Omnibus court hearing in the US – a massive two year case involving 5,000 children – ruled there is no evidence for MMR causing autism (nor for the mercury preservative thimerosal). On Thursday a paper was published describing a measles outbreak among a group of anti-vaxxers in France, with one death, and the cases are spreading. And the radio station LBC behaved so abysmally over one irresponsible broadcast – you can read the gory details at badscience.net – that 140 blogs covered the story, and there is even an Early Day Motion about it in parliament (number 754, contact your MP through theyworkforyou.com and ask them to sign it).
There is no reason to believe that MMR causes autism. The anti-vaccine campaigners will continue to mislead, stifle, or even smear.
But it’s important to keep your head, and not be polarised by the other side’s foolishness: because there are plenty of genuine problems in vaccine research, even if the campaigners have focused on a bad – and perhaps simplistic – example.
The British Medical Journal this week publishes a complex study which is quietly one of the most subversive pieces of research ever printed. It analyses every study ever done on the influenza vaccine – although it’s reasonable to assume that its results might hold for other subject areas – looking at whether funding source affected the quality of a study, the accuracy of its summary, and the eminence of the journal in which it was published.
Now, in my utopian universe, it wouldn’t matter where a piece of research was published, or how bad its summary was, because everybody would read everything with a joyful and attentive anality. Back in the real world, it has been estimated that each month, journals publish over 7,000 items – studies, letters, and editorials – relevant to GP care, as just one example. New research is important, but to read and critically appraise all this material would take physicians trained in epidemiology over 600 hours: there are only 720 hours in a 30 day month. So inevitably people will read only the summaries – the “take home message” – and only the bigger journals.
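For the sceptical, the arithmetic above checks out on the back of an envelope (all figures are the estimates quoted in the column, not my own):

```python
# Back-of-envelope check of the reading-burden figures quoted above:
# over 7,000 relevant items a month, over 600 hours to appraise them,
# and 720 hours in a 30-day month.

items_per_month = 7000      # studies, letters, editorials relevant to GP care
hours_to_appraise = 600     # time needed by an epidemiology-trained reader
hours_in_month = 30 * 24    # 720 hours in a 30-day month

fraction_of_month = hours_to_appraise / hours_in_month
minutes_per_item = hours_to_appraise * 60 / items_per_month

print(f"Reading everything would consume {fraction_of_month:.0%} of the month,")
print(f"at barely {minutes_per_item:.1f} minutes per item.")
```

Even granting every waking and sleeping hour to the task, that is more than 80% of the month gone – which is why the summaries, and the big journals, win by default.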
So what did our new study find? We already know that industry funded studies are more likely to give a positive result for the sponsor’s drug, and in this case too, government funded studies were less likely to have conclusions favouring the vaccines. We already know that poorer quality studies are more likely to produce positive results – for drugs, for homeopathy, for anything – and 70% of the studies they reviewed were of poor quality. And it has also already been shown, in various reviews, that industry funded studies are more likely to overstate their results in their conclusions.
But Tom Jefferson and colleagues looked, for the first time, at where studies are published. Academics measure the eminence of a journal, rightly or wrongly, by its “impact factor”: an indicator of how commonly, on average, research papers in that journal go on to be referred to, by other research papers elsewhere. The average journal impact factor for the 92 government funded studies was 3.74; for the 52 studies wholly or partly funded by industry, the average impact factor was 8.78. Studies funded by the pharmaceutical industry are massively more likely to get into the bigger, more respected journals.
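The size of that gap is worth dwelling on. A quick calculation from the two averages reported in the study:

```python
# Average journal impact factors reported in the study:
# 3.74 across the 92 government funded studies,
# 8.78 across the 52 wholly or partly industry funded studies.
gov_if = 3.74
industry_if = 8.78

ratio = industry_if / gov_if
print(f"Industry funded studies landed, on average, in journals with "
      f"{ratio:.1f} times the impact factor of government funded ones.")
```

More than double – and, as the next paragraph notes, with nothing in the quality, size, or ambitions of the studies to account for it.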
That’s interesting: because there is no explanation for it. There was no difference in methodological rigour, or quality, between the government-funded research and the industry-funded research. There was no difference in the size of the samples used in the studies. And there’s no difference in where people submit their articles: everybody wants to get into a big famous journal, and everybody chances their arm at it.
An unkind commentator, of course, might suggest one reason why industry trials are more successful with their submissions. Journals are businesses, run by huge international corporations, and they rely on advertising revenue from industry, but also on the phenomenal profits generated by selling glossy “reprints” of studies, and nicely presented translations, which drug reps around the world can then use. Anyone who thought this was an unkind suggestion might need to come up with an alternative explanation for the observed data.
This study is a fascinating example of the academic community turning in on itself, and using the tools of statistics and quantitative analysis to identify a nasty political and cultural problem, and give legs to a hunch. This could and should be done more, in all fields of human conduct.
But the greater tragedy is that the problem Jefferson and colleagues have revealed could easily be fixed. In an ideal world, all drugs research would be commercially separate from manufacturing and retail, and all journals would be open and free. But until then, since academics are obliged to declare all significant drug company funding on all academic articles, it might not be too much to ask that editors and publishers – whose decisions are so hugely influential – should post, once a year, all their sources of income, and all the money related to the running of their journal. Because at the moment, the funny thing is, we just don’t know how they work. Remember the Early Day Motion.
Please send your bad science to email@example.com