Saturday 30 May 2009
Obviously we distrust the media on science: they rewrite commercial press releases from dodgy organisations as if they were health news, they lionise mavericks with poor evidence, and worse. But journalists will often say: what about those scientists with their press releases? Surely we should do something about them, running about, confusing us with their wild ideas?
Now you may be inclined to think that a journalist should be capable of doing more than simply reading, and then rewriting, a press release: but we must accept that these are troubled times. Through our purchasing behaviour – and I assume someone cleverer than me measures these things competently – we have communicated to newspapers that we want them to be large and cheap, more than we want them to be adequately researched.
So in this imperfect world, it would be useful to know what’s in academic press releases, since academics are the people of whom we are entitled to have the highest expectations. A paper in the Annals of Internal Medicine this month shows very clearly that we have been failed.
Researchers at Dartmouth Medical School in New Hampshire took one year’s worth of press releases from 20 medical research centres, a mixture of the most eminent universities and the most humble, as measured by their US News & World Report ranking. These centres each put out around one press release a week, so 200 were selected at random and analysed in detail.
Half of them covered research done in humans, and as an early clue to their quality, 23% didn’t bother to mention the number of participants – it’s hard to imagine anything more basic – and 34% failed to quantify their results. But what kinds of study were covered?
In medical research we talk about the “hierarchy of evidence”, in which studies are ranked by quality and type. Systematic reviews of randomised trials are considered to be the most reliable: because they ensure that your conclusions are based on all of the information, rather than just some of it; and because randomised trials – when conducted properly – are the least vulnerable to bias, and so the “fairest tests”.
After these, there are observational studies (“people who choose to eat vegetables live longer”) which are more prone to bias, but may be easier to do. Then there are individual case reports. And then, finally, because medical academics like to think they’re funny, right at the bottom of the hierarchy you will find something called “expert opinion”.
In this study, among the press releases covering human research, only 17% promoted the studies with the strongest designs, either randomised trials or meta-analyses. 40% were on the most limited studies: ones without a control group, with small samples of fewer than 30 participants, studies looking at “surrogate primary outcomes” (a blood cholesterol level rather than something concrete like a heart attack, for example), and so on.
That’s not necessarily a problem. Research is always a matter of compromise over what is practical, or affordable: it would be nice to randomise every single patient, everywhere, whenever there is any uncertainty over which is the best treatment for their condition, and to follow their progress perfectly, but that would be quite a piece of work. It would be nice to randomise everyone in the country to different lifestyle choices at birth, to see which had the greatest impact on their health, so that in 70 years’ time we would have a comprehensive story on the best way to live; but that’s not administratively realistic, and it’s hard enough to get people recruited and cooperating in a brief three-week study, let alone lifelong change.
So people conduct imperfect research, knowing that it is the best we can do with the resources available, knowing that the results must be interpreted with caution and caveats. This isn’t “bad science”, because the studies themselves are – we assume – well conducted, and faithfully described in their publications. The errors come at the level of interpretation, where people fail to acknowledge the limitations of the evidence.
That failure is a crime, but is it limited to quacks and hacks? No, and that is the key finding of this new paper. 58% – more than half – of all press releases from this representative sample of academic institutions lacked the relevant cautions and caveats about the methods used and the results reported. I would like journalists to be experts in their field – and I don’t think they could then be bluffed as easily by a politician or a sports personality as they are by a science press release – but make no mistake, this is a war on all fronts.