You are 80% less likely to die from a meteor landing on your head if you wear a bicycle helmet all day.

November 15th, 2008 by Ben Goldacre in adverts, bad science, presenting numbers, statistics | 24 Comments »

We’re all suckers for a big number, and you’ll be delighted to hear that the Journal of Consumer Research has huge teams of scientists all eagerly writing up their sinister research on how to exploit us.

One excellent study this month looked at how people choose a digital camera. This will become relevant in three paragraphs’ time. The researchers took a single image, then processed it in Photoshop to make two copies: one in which the colours were more vivid, and one in which the image was sharper. They told participants that each image came from a different camera, and asked which camera they wanted to buy. About a quarter chose the one with the more colourful image.

Then the researchers started to pile it on. Firstly they said that this camera had more pixels, using a small figure derived from the diagonal width of the sensor: suddenly more than half picked it instead. Then, crucially, they made the same claim of more pixels, but this time used the raw pixel count as evidence: a figure measured, as you know, in millions. Suddenly, three quarters chose the supposedly better camera. Just a bigger number. Nothing more.

This week you’ll have noticed the news on rosuvastatin (or Crestor, since either through ignorance, or corporate whoredom, the media love to help drug companies by using their corporate brand names instead of the generic). The JUPITER trial on rosuvastatin has just reported, several months early, and most papers called it a “wonder drug”. The Express, bless them, thought it was an entirely new drug.

“Heart attacks were cut by 54 per cent, strokes by 48 per cent and the need for angioplasty or bypass by 46 per cent among the group on Crestor compared to those taking a placebo or dummy pill”, said the Daily Mail. Dramatic stuff. And in the Guardian, we said: “Researchers found that in the group taking the drug, heart attack risk was down by 54% and stroke by 48%”.

Is this true? Yes. Those are the figures on risk, expressed as something called the “Relative Risk Reduction”. It is the biggest possible number for expressing the change in risk. But 54% lower than what? This was a trial looking at whether it is worth taking a statin if you are at low risk of a heart attack (or a stroke), as a preventive measure: it is a huge market – normal people – but these are also people whose baseline risk is already very low.

If you express the exact same risks from the same trial as an “Absolute Risk Reduction”, suddenly they look a bit less exciting. On placebo, your risk of a heart attack in the trial was 0.37 events per 100 person years, and if you were taking rosuvastatin, it fell to 0.17 events per 100 person years. 0.37 to 0.17. Woohoo. And you have to take a pill every day. And it might have side effects.

And if you express the risk as “Numbers Needed To Treat”, probably the most intuitive and concrete way of expressing a benefit from an intervention, then I reckon, from the back of this envelope in front of me (they naughtily don’t even give the figure in the research paper), that a couple of hundred people need to take the pill to save one life.
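The three framings are just different arithmetic on the same pair of numbers. Here is a back-of-the-envelope sketch (Python, purely for illustration; the conversion to person-years of pill-taking is my own rough working, not a figure from the paper):

```python
# The same trial result, expressed three ways.
# Heart-attack rates quoted above, per 100 person-years of follow-up.
placebo_rate = 0.37   # placebo arm
statin_rate = 0.17    # rosuvastatin arm

# Relative Risk Reduction: the biggest possible number.
rrr = (placebo_rate - statin_rate) / placebo_rate
print(f"RRR: {rrr:.0%}")  # 54%

# Absolute Risk Reduction: the same change, in real terms.
arr = placebo_rate - statin_rate
print(f"ARR: {arr:.2f} events per 100 person-years")  # 0.20

# Numbers Needed To Treat: person-years of pill-taking per event prevented.
nnt = 100 / arr
print(f"NNT: {nnt:.0f} person-years per heart attack prevented")  # 500
```

Five hundred person-years per event prevented, over a couple of years of trial follow-up, works out at a couple of hundred people taking the pill for the duration, which squares with the envelope above.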

Is it a good idea for you personally to take rosuvastatin? That’s not my job here – get over yourself, we’re allowed to talk about ideas – but the way figures are presented can have a huge impact on decisions everyone makes, and this is not idle speculation. In fact the phenomenon has been carefully studied, in many groups, and for many years.

In 1993 Malenka et al recruited 470 patients in a waiting room, and gave them details of a hypothetical disease, and a choice of two hypothetical treatments. In fact it was the same treatment, with the risk expressed in two different ways. 56.8% chose the medication whose benefit was expressed as a relative risk reduction, while only 14.7% chose the medication whose benefit was in absolute terms (15.5% were indifferent).

Are patients uniquely stupid? Joy, no. In fact the exact same result has been found repeatedly in experiments looking at doctors’ prescribing decisions, and even the purchasing decisions of health authorities.

We all love big numbers, and we’re all fooled by big numbers, because we’re all idiots. That’s why it’s important to think clearly, and ignore all newspapers.


++++++++++++++++++++++++++++++++++++++++++
If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks! ++++++++++++++++++++++++++++++++++++++++++

24 Responses



  1. Picklish said,

    November 15, 2008 at 8:05 am

    1 chance of a heart attack every 300 years you live versus 1 chance of a heart attack every 600 years you live??? but…are we talking older people, or those with high fatty diets? Or living in Scotland like me?

    sure, the figures make you expect more of a difference than there actually is, in real terms, but there is still a difference, no?

    but then, are you claiming that it is statistically significant? using a formula devised by…..?

    when it comes to a whole population, is it not impossible to study every variable, so one has to take ‘educated guesses’?

    i don’t know, these are just questions, i just enjoy this internet-response-board-thing to help me learn and understand

    small increments in recovery/prevention/success rates? are they not important too?

  2. humber said,

    November 15, 2008 at 10:34 am

    Picklish,
    At the risk of talking about statistics, “cuts risk by 50%” is meaningless unless you know the background risk.
    Expressing the same figures as a rate of deaths per year gives some idea of the improvement offered over doing nothing, and a means of comparing that risk with others that are so often ignored.

    In this case, the benefit must also be weighed against the long term use of the drug. Rather difficult to assess a risk that may take a lifetime to collate.

    Cyclists often imagine that helmets are useful because they think only of head injuries, but in reality, you don’t get to choose the way you would be injured. A case of selective risk assessment.

  3. jodyaberdein said,

    November 15, 2008 at 10:39 am

    Having yet to actually read the paper (it is early on Saturday after all), I’d be inclined to agree that the first point is pretty important. Often trials are done on fairly unrealistic patient groups. Presumably they excluded those with heart disease but otherwise took all comers.

    I think the point is that there is a genuine difference, and that difference is being oversold by relying on inherent weaknesses in how people react to evidence.

    Regarding populations and educated guesses however: the beauty of randomisation is that it evenly spreads out all the differences that you can’t even begin to guess about across both groups you are studying, so they no longer matter. Cool eh?

    Jody

  4. drunkenoaf said,

    November 15, 2008 at 11:27 am

    Wait till you see one risk reduction value in a paper, then the p-value for the other. And a tiny asterisk directing you to a footnote to explain…

  5. polly said,

    November 15, 2008 at 11:59 am

    ‘ignore all newspapers’

    Since (with a few honorable exceptions) they’re filled with lazy opinion from non-experts (not just on science, but on most subjects that deserve more than 10 minutes’ thought), that sounds good to me.

  6. briantist said,

    November 15, 2008 at 12:50 pm

    Also, any chance you can fix the date and times of postings back to GMT now BST is over? It’s 10:50am now but the clock on badscience.net seems to have gone the wrong way…

  7. tialaramex said,

    November 15, 2008 at 12:52 pm

    This big number syndrome is at its most obvious in high-technology applications like the digital cameras mentioned above, computer peripherals, or digital audio.

    CDs, for example, use 16-bit PCM samples at 44.1kHz. Today you can get audio storage formats that are 24-bit PCM at 96kHz. Surely more than twice as good? But no. CDs can accurately store frequencies up to the plausible threshold of human hearing, and with a dynamic range from barely audible to annoying or even painfully loud. If you want an upgrade you need to look at your ears or brain, not the CD.

    The additional bits in a 24-bit system mean a wider dynamic range suitable to produce sounds that are inaudibly quiet, or else that are instantly deafening, or both. What use is that? The additional frequency range means that, if not for the fact that your HiFi system isn’t designed by or for dogs, you could use it to produce high-pitched sounds you can’t hear. Again useless.

    But the numbers are bigger, and that’s enough for some people to believe that they hear an improvement (a hypothesis tested and rejected in good double-blind trials).

    [ NB In audio processing and during production there are real reasons for having (a few) more bits, and even arguably more frequency range, and of course both are also useful in research, but I'm talking about things like CD that are aimed at music listeners, not technicians ]

    We can’t just tell people that the numbers don’t matter. 16-bit PCM really was worth having compared to earlier 8-bit and 12-bit systems. The trouble is that the real world is never simple enough to be summed up by one number.
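    The figures in that comment are easy to check. A sketch, assuming the standard rule that an n-bit PCM system has a theoretical dynamic range of 20·log10(2^n) dB, and the Nyquist limit of half the sample rate (these are textbook signal-processing facts, not figures from the comment):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of n-bit PCM, in decibels: 20*log10(2^n)."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit CD: {dynamic_range_db(16):.1f} dB")      # ~96 dB
print(f"24-bit hi-res: {dynamic_range_db(24):.1f} dB")  # ~144 dB

# Nyquist: a given sample rate captures frequencies up to half itself.
cd_top_frequency = 44100 / 2  # 22050 Hz, just past the ~20 kHz limit of human hearing
```

    Bigger numbers, then, but the extra headroom sits above the threshold of pain and beyond the range of human ears.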

  8. Sili said,

    November 15, 2008 at 3:10 pm

    Well, since Dr Goldacre is an NHS doctor, duh! He works for the government!

    Really. Some people.

  9. twaza said,

    November 15, 2008 at 4:59 pm

    I calculated the NNTb for deaths from any cause, using the data in the paper’s Table 3. There were 198/8901 deaths in the rosuvastatin group and 247/8901 deaths in the placebo group. The NNTb = 182 (95% CI 99 to 1088).

    I am not sure what period the NNT relates to (the median follow-up of 1.9 years?)

    Whatever, the 95% confidence interval is so wide that one can’t be very confident in the NNT.
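    twaza’s arithmetic checks out. A sketch of the working (the confidence interval here uses a simple normal approximation on the risk difference, which is my assumption about the method, but it reproduces the interval quoted above):

```python
import math

# All-cause deaths from the trial's Table 3, as quoted above.
deaths_statin, deaths_placebo, n = 198, 247, 8901

p_placebo = deaths_placebo / n
p_statin = deaths_statin / n
arr = p_placebo - p_statin     # absolute risk reduction
nnt = 1 / arr
print(f"NNTb: {nnt:.0f}")      # 182

# 95% CI on the risk difference (normal approximation), inverted for the NNT.
se = math.sqrt(p_placebo * (1 - p_placebo) / n + p_statin * (1 - p_statin) / n)
lower, upper = arr - 1.96 * se, arr + 1.96 * se
print(f"95% CI: {1 / upper:.0f} to {1 / lower:.0f}")   # 99 to 1088
```

    And as twaza says, the width of that interval is the real story: treat 99 people or treat over a thousand to save one life, over the trial’s median follow-up.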

  10. Tina Russell said,

    November 15, 2008 at 8:58 pm

    Oooh! You shoulda seen the Colbert Report on Thursday:

    www.colbertnation.com/the-colbert-report-videos/210357/november-12-2008/cheating-death—women-s-health

    At about 2:10 is when he talks about rosuvastatin and comes to about the same conclusion.

  11. mjs said,

    November 15, 2008 at 9:05 pm

    I think diabetes is one of those side effects.

  12. gazza said,

    November 15, 2008 at 10:16 pm

    By a bizarre coincidence my GP has asked me to take part in a trial looking at the effects of a low aspirin dose (1/6 of a tablet – more details to follow) on future incidences of stroke and heart attack. I would put myself in the relatively fit, ‘middle aged’ category. What’s the betting that this trial will end up the same as the rosuvastatin study that Ben refers to?

    I surprised my GP by discussing with him how the outcome might be presented in terms of relative and absolute risk statistics!

    From reading Ben’s column and book I look forward to discussing the marvels of the placebo effect with him, as well as considering trial protocols. Should be fun!

  13. julie oakley said,

    November 16, 2008 at 4:58 pm

    As an older mother, one of the kinds of statistics that I think is abused by the health service is the risk of having a Downs Syndrome child. Mothers are always informed that they will have a 1 in whatever chance of having a child with Downs Syndrome (in my case I think it was one in ten), which sounds pretty worrying to the average prospective parent. However, they are never informed of the converse (ie, in my case, a 90% chance of not having a Downs Syndrome child).

  14. njd said,

    November 17, 2008 at 1:15 pm

    90% fat-free sounds better than 10% fat.

    That Uni. of Chicago study is behind a paywall, but one of the authors has it available at faculty.chicagogsb.edu/christopher.hsee/vita/Papers/SpecificationSeeking.pdf

  15. beast9 said,

    November 18, 2008 at 1:42 am

    Is the post title about bike helmets a real statistic?

  16. heavens said,

    November 19, 2008 at 12:18 am

    Dr*T,

    You seem to have forgotten that people don’t go into journalism because they’re able to understand statistics. If they were good with math or science, they’d have chosen a different career.

  17. Picklish said,

    November 19, 2008 at 12:11 pm

    Yeah, I agree, heavens. Not everyone who works in a scientific field understands statistics well, either. It’s not necessary to understand how concepts such as multiple regression and chi-squared and other exciting-sounding statistical tricks actually work, but rather how and when to use them, which is completely different (and much easier). But as Dr G. says, we’re all idiots, and we should definitely ignore all newspapers…

  18. mikey baby said,

    November 19, 2008 at 6:28 pm

    Did anyone notice the baseline risk of the placebo group?

    More men +0.6%
    More smokers +0.3%
    More metabolic syndrome +0.8%
    Family history of premature CHD +0.6%

    AZ were clearly paying attention when you lectured them on the tricks big pharma employs.

    This could explain the greater than expected benefit…..

  19. cellocgw said,

    November 19, 2008 at 8:37 pm

    Re bicycle helmet:
    You don’t get to choose your injury, but you do get to analyze the cost-benefit ratio.
    Broken legs and collarbones (both common-ish in bike crashes) are painful but generally noncrippling.
    It only takes a minor cranial injury to mess you up, for life, in ugly ways.

  20. Robert Carnegie said,

    November 21, 2008 at 4:37 am

    Bicycle helmets are not useful to organ transplant surgeons. They contribute to a shortage of raw materials.

  21. quin said,

    March 16, 2009 at 7:43 am

    I once heard the statistic that you’re more likely to be hit by a meteorite than to win the lottery jackpot. Plenty of people have won the lottery; nobody has ever been hit by a meteor.

    Why do they say “meteoric rise”? Meteors fall, don’t they?

  22. octavedoctor said,

    June 28, 2009 at 3:23 pm

    Dan, if you see a statistic saying that there has been a 50% reduction in the deaths of cyclists involved in RTAs on a particular stretch of road since the introduction of speed cameras, you would be inclined to believe that it is the presence of the speed camera that is responsible for this. However, if you were told that in the preceding measurement period of, say, a year, there were only 3 RTAs involving cyclists and two deaths, but that in the subsequent measuring period there were 3 RTAs and only one death, you might have a different perspective on it. If you were then told that the subsequent measuring period was just three months, instead of the year used previously, you might have yet another perspective. If it then transpired that around 100 cyclists used the road intermittently during weekdays to travel to and from work, resulting in 50000 potential incidents per year, would you still see that 50% as significant evidence of the effect of the speed camera?

    The point is that you have to evaluate the significance of the numbers against the background data, rather than in isolation. Relative risk reduction encourages you to ignore the background data.

  23. paladin said,

    November 18, 2009 at 1:27 pm

    wow! They should be made to at least include the word “relative” while giving the big percentage figures of risk reduction! That should ring some alarm bells for some people at least.

