Saturday November 26, 2005
The moment I saw the press release for the new Bristol Homeopathy study, I knew I was in for a treat. This was a fabulously flawed “survey”, no more, in which some doctors asked their patients whether they thought they’d got better a while after having some homeopathy. Not meaningless data in itself, but the action, as ever, is in the interpretation, and the interpretation was at its most cock-eyed in the Daily Telegraph.
“The other reason the survey is causing such delight,” gushed Elizabeth Grice, in her public love note to David Spence of the Bristol Homeopathic hospital, “is that it contradicts a scathing report published in the Lancet recently by Professor Matthias Egger.” Now scathing is not a word I would use to describe a rather sober and – by design – tedious meta-analysis. But she is also, weirdly, suggesting that a large systematic review and meta-analysis of a huge number of placebo controlled randomised trials is somehow contradicted by a survey from some homeopaths of their customers’ satisfaction.
Did she feel there were any flaws in the Bristol study, any need for balance? Yes: “This has led to a medical ding dong in the long-running debate about the value of homeopathy, with Egger (known in the profession as Eggy) accusing Spence and his colleagues of failing to use a ‘control group’ for comparison and Spence retorting that his huge observational study – the largest of its kind ever published – involving 23,000 consultations with no exclusions and no bias, is a pure measure of achievement. ‘It’s what I call a ‘real world’ analysis’, he told me. ‘It’s what happens.’ ”
Now the first lesson for sceptics here is, if you contradict the enemy, they will give you a funny name. And people call me childish. But on to business. Is it bad not to have a control group? Yes. Read the academic paper via www.badscience.net/?p=188: they were looking at a lot of chronic cyclical conditions, or time-limited ones, like the menopause, where people get better with time. If 70% get better, that's meaningless. 99.99% of people who get a graze to their knee will get better, and 99% of people who get a cold will get better. It's not enough to know that they got better. We need to know if they got more better, or better faster, than people who weren't having homeopathy.
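If you like, you can see the problem in a few lines of code. This is a minimal sketch, with an assumed natural recovery rate of 70% for illustration (not a figure from the Bristol study): a cohort of "patients" who receive no treatment whatsoever, most of whom still report improvement, simply because their conditions improve with time.

```python
import random

random.seed(1)

def untreated_improvement(n_patients, natural_recovery_rate=0.7):
    """Simulate self-limiting conditions: each untreated patient has a
    fixed chance of improving by follow-up, with no treatment at all."""
    improved = sum(random.random() < natural_recovery_rate
                   for _ in range(n_patients))
    return improved / n_patients

rate = untreated_improvement(23000)
print(f"{rate:.0%} 'got better' with no treatment whatsoever")
```

So an uncontrolled "70% improved" headline tells you nothing on its own: the whole point of a control group is to find out whether the treated cohort beats that baseline.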
And what about this business of “no exclusions” and “no bias”? These are simple technical terms from evidence based medicine, and in fact there were stark staring heinous examples of both “exclusions” and “bias” in this study, Grice. Where did the patients come from? They were selected as patients who wanted homeopathy, and so were positively disposed towards it: this is “selection bias”, picking subjects who will give you a positive result.
And what about the data collected: did they measure how patients were at baseline, and compare how they were later in time, at follow up? No, they just asked patients later to remember how they were when they first came, and decide retrospectively whether they thought they were any better: this will give you “recall bias”, and also another form of “information bias”, as patients give the doctors the answer they think they want or deserve.
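Recall bias is easy to sketch too. The numbers below are purely illustrative assumptions, not measured data: each patient's symptoms are genuinely unchanged between baseline and follow-up, but they remember their baseline as somewhat worse than it was, so "improvement" appears out of thin air.

```python
import random

random.seed(0)

def recalled_improvement(n=5000):
    """Patients rate symptoms on a 0-10 scale. True severity never changes,
    but at follow-up each patient remembers baseline as 0.5-2.5 points
    worse than it really was (an assumed recall-bias shift)."""
    gains = []
    for _ in range(n):
        true_baseline = random.uniform(3, 7)
        true_followup = true_baseline  # no real change at all
        remembered_baseline = true_baseline + random.uniform(0.5, 2.5)
        gains.append(remembered_baseline - true_followup)
    return sum(gains) / n

print(f"apparent average improvement: {recalled_improvement():.1f} points")
```

Measuring patients properly at baseline, rather than asking them to reconstruct it later, is what removes this artefact.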
Lastly, a large number of patients never came back after their first appointment: and so they were simply, er, ignored in the analysis. That “exclusion” is the very opposite of a “real world analysis”, otherwise known as an “intention to treat analysis”. Did they get worse? Did they get better? Did they go home and die? We will never know.
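The effect of quietly dropping those patients can also be sketched. All the rates here are assumptions for illustration: patients who don't improve are more likely to drop out after the first appointment, so an analysis of "completers only" looks far rosier than an intention-to-treat analysis that counts everyone who started (here using the conservative convention that dropouts are counted as not improved).

```python
import random

random.seed(42)

def simulate(n=10000, improve_rate=0.4,
             dropout_if_worse=0.6, dropout_if_better=0.1):
    """Simulate a cohort where non-improvers are likelier to drop out,
    then compare a completers-only rate with an intention-to-treat rate."""
    outcomes = []
    for _ in range(n):
        improved = random.random() < improve_rate
        dropout_p = dropout_if_better if improved else dropout_if_worse
        dropped = random.random() < dropout_p
        outcomes.append((improved, dropped))

    completers = [imp for imp, dropped in outcomes if not dropped]
    completer_rate = sum(completers) / len(completers)

    # Intention-to-treat: everyone who started counts in the denominator;
    # dropouts are treated as not improved.
    itt_rate = sum(imp and not dropped for imp, dropped in outcomes) / n
    return completer_rate, itt_rate

completer, itt = simulate()
print(f"completers only: {completer:.0%}; intention-to-treat: {itt:.0%}")
```

Under these assumed numbers the completers-only figure comes out well above the intention-to-treat figure, which is exactly why ignoring the people who never came back is the opposite of a "real world analysis".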
Send your bad science to email@example.com
Learn more on how to critically appraise research with How to Read a Paper: The Basics of Evidence-Based Medicine by Trisha Greenhalgh (BMJ Books).