Blueprint fail

September 19th, 2009 by Ben Goldacre in bad science, evidence, evidence based policy, politics, schools

Ben Goldacre, 19 September 2009, The Guardian

This week at a debate in the Royal Institute I was told off by the science minister for not praising good science reporting, because journalists – famously kind to their targets – are sensitive to criticism. So before we dismantle this Home Office report on drugs policy, can I just say I’m sure they’ve probably produced some other perfectly acceptable reports, and I shouldn’t like any brittle souls in government to be dispirited by the criticisms that will now follow.

The Blueprint programme is an intensive schools intervention to reduce problematic drug use, and a lengthy research project to see if it works – costing at least £6m – finished some years ago. We have been waiting for the results ever since, and this quote from Vernon Coaker, then Minister for Drugs & Crime Reduction, explains what we have been waiting for: “The Blueprint drugs education programme is designed to evaluate the effectiveness of a multi-component approach to school-based drug education… The programme is currently being evaluated to determine its impact on all drug use.”

This is odd, because the minister said that in October 2006; yet as early as 2002, before the study even began, the government had been told that their research was incapable – by design – of telling us anything useful about the effectiveness of the Blueprint intervention. The report is now out, and it even admits, in its own executive summary, that they always knew it was incapable of giving any such information.

They explain that after starting off with the idea of doing a big randomised trial, they were told they’d need 50 schools to get a sufficiently large sample, when they could only do 23. They decided to go ahead with 23 schools anyway, as some kind of gigantic pilot study which could gather information about whether it was possible to do a proper trial of Blueprint. This, already, is a bizarre explanation: a pilot study would not need £6m, or 23 schools, and in the end, to do the job properly, they would wind up paying for 73 schools to be studied in total – the 23 pilot schools plus the 50 needed for a proper trial – instead of 50. There were also offers of advice from experts in trial design, such as Professor Sheila Bird of Cambridge University, who offered to help them do a meaningful trial on the available budget. This did not happen.
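
To see where a figure like 50 schools comes from: in a trial like this you randomise whole schools, not individual children, and because children in the same school resemble each other, the effective sample size shrinks and the requirement is inflated by a “design effect”. Below is a minimal sketch of that arithmetic in Python; every input (the baseline rate of drug use, the hoped-for reduction, the number of pupils surveyed per school, the intra-cluster correlation) is an invented assumption for illustration, not a figure from the Blueprint study.

    import math
    from scipy import stats

    def clusters_needed(p_control, p_treated, pupils_per_school, icc,
                        alpha=0.05, power=0.8):
        """Schools needed per arm for a cluster-randomised comparison of
        two proportions: take the usual individually-randomised sample
        size, then inflate it by the design effect 1 + (m - 1) * ICC."""
        z_a = stats.norm.ppf(1 - alpha / 2)  # two-sided 5% critical value
        z_b = stats.norm.ppf(power)          # value giving 80% power
        p_bar = (p_control + p_treated) / 2
        n_individual = ((z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar)
                        / (p_control - p_treated) ** 2)
        design_effect = 1 + (pupils_per_school - 1) * icc
        return math.ceil(n_individual * design_effect / pupils_per_school)

    # Invented inputs: 30% baseline drug use, a hoped-for drop to 24%,
    # 100 pupils surveyed per school, intra-cluster correlation of 0.02.
    per_arm = clusters_needed(0.30, 0.24, 100, 0.02)
    print(per_arm, "schools per arm,", 2 * per_arm, "in total")

With these made-up inputs the answer comes out at around 50 schools in total, which at least shows how quickly clustering pushes the requirement far beyond 23.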

Then it gets even stranger. They had data from six normal “comparison” schools which weren’t receiving the Blueprint intervention, but these hadn’t been randomised or matched properly, so you can’t use them to make any comparisons. So you bin the data. In fact, you don’t even need to collect it in the first place, because you know, before you start, that it’s no use. You’ve been told that. You understood it.

But no. Instead, the report goes for a strange cop-out: “While it was still planned that the local school data would be presented alongside the Blueprint school data to enable some comparisons to be drawn between the two samples,” they say: “recent academic and statistical reviews concluded that to present the data in this way would be misleading, given that the sample sizes are not sufficient to detect real differences between the two groups.” So you binned it? “Instead, findings from the local school data are presented separately in the report to provide some context to this work but do not act as a comparison group.” This is absurd.
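
That last claim, at least, is easy to illustrate with a simulation. The sketch below, again using entirely invented numbers, estimates how often a straightforward comparison of 23 schools against 6 would detect a genuine five-percentage-point difference in school-level drug-use rates, assuming a school-to-school standard deviation of eight points; under these assumptions it succeeds less than a quarter of the time.

    import numpy as np
    from scipy import stats

    def detection_power(n_blueprint, n_comparison, true_difference,
                        between_school_sd, n_sims=20_000, seed=0):
        """Monte-Carlo estimate of how often a Welch t-test on
        school-level drug-use rates detects a genuine difference
        at the 5% level. All inputs are hypothetical."""
        rng = np.random.default_rng(seed)
        detected = 0
        for _ in range(n_sims):
            blueprint = rng.normal(0.30, between_school_sd, n_blueprint)
            comparison = rng.normal(0.30 + true_difference,
                                    between_school_sd, n_comparison)
            _, p_value = stats.ttest_ind(blueprint, comparison,
                                         equal_var=False)
            detected += p_value < 0.05
        return detected / n_sims

    # 23 Blueprint schools against 6 comparison schools, a real
    # five-point difference, school-to-school SD of eight points.
    print(detection_power(23, 6, 0.05, 0.08))

In other words, even if Blueprint genuinely worked, a six-school comparison group would usually fail to show it; and any small difference it did show would be equally compatible with noise.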

And it’s not as if this was an impossible project: randomised trials of educational interventions are done, and sometimes very well. The SHARE trial was designed to discover whether a specific new sex education programme could help prevent unwanted teenage pregnancies. It was conducted in 25 schools and cost about £1m (although remember the intervention itself may have been cheaper than Blueprint). Schools were randomly assigned to give either the SHARE programme, or the normal sex education that everyone gets. Then, at age 20, young women were followed up to see if they had got pregnant, had a termination, and so on. There were no differences between the two groups, and this was a useful finding, because it stopped us wasting money on something that doesn’t work.
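
For what it’s worth, the mechanics of allocating whole schools at random are not exotic. The sketch below is a generic illustration of cluster-level randomisation with a crude matching step (pairing schools by size before tossing a coin within each pair); it is not a description of SHARE’s actual procedure, and the school names and pupil counts are invented.

    import random

    def allocate_schools(schools, seed=2009):
        """Sort schools by size, pair neighbours, then flip a coin
        within each pair: a crude matched, cluster-level randomisation
        that keeps the two arms balanced on school size."""
        rng = random.Random(seed)
        by_size = sorted(schools, key=lambda s: s["pupils"])
        arms = {"intervention": [], "control": []}
        for i in range(0, len(by_size) - 1, 2):
            pair = [by_size[i], by_size[i + 1]]
            rng.shuffle(pair)
            arms["intervention"].append(pair[0]["name"])
            arms["control"].append(pair[1]["name"])
        return arms

    # A hypothetical roster: names and pupil counts are invented.
    roster = [{"name": f"School {i + 1}", "pupils": n}
              for i, n in enumerate([400, 950, 620, 780, 510, 860])]
    print(allocate_schools(roster))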

Did anything that good come out of the Blueprint study? Well, it was a gigantic pilot study, so they now know a lot about things like “can you practically deliver the Blueprint programme in schools” (yes, you can) and “do parents like their children being taught about the risks of drugs” (yes, they do). And the Blueprint report also celebrates the fact that knowledge about drugs was good in the children they taught (although, of course, they had nothing to compare them with formally). This sounds great, but in reality, improvements on this kind of “surrogate outcome” are often unrelated to real-world benefits: the SHARE trial, for example, found that knowledge about pregnancy improved, but rates of teenage pregnancy remained unchanged.

And we should also notice, finally, that in the Blueprint trial, rates of drug use were often just a little higher among those children who did receive the special drugs education programme than among those in the non-comparable comparison group. These results are meaningless, of course, because as they’ve admitted, from the very outset this £6m trial was not designed in such a way that we can make such a comparison. But we can only speculate whether the Home Office would have been so abstemious about rigour if the flawed results from this inadequate trial had suggested their expensive new idea actually worked.

References:

All references are given as links in the text above, as ever. If you’re interested in reading some of the early feedback in which they were told about the problems, I recommend this 10MB PDF:

www.mrc-bsu.cam.ac.uk/Publications/PDFs/SampleSizeCalculations2002.pdf



12 Responses



  1. Bishop Gillian Wakefield said,

    September 19, 2009 at 1:30 am

    If anyone’s interested, I have the same file (SampleSizeCalculations2002) as a 2.7MB PDF. I’d post it; I just don’t know where.

  2. misterroy said,

    September 19, 2009 at 6:47 am

    www.humyo.com will host the file.
    Upload your file, click on the small box on the file’s icon, then “Get link to File”, hit “make public”,
    copy the link, and post the link here.

  3. Thimble said,

    September 19, 2009 at 9:24 am

    It’s “Royal Institution”

  4. Thimble said,

    September 19, 2009 at 9:32 am

    I meant, <pedantry> It’s “Royal Institution” </pedantry>

  5. Synchronium said,

    September 19, 2009 at 9:43 am

    The government are useless when it comes to drugs policy.

    The next lot of “legal highs” to be banned are being banned because of their “potential harm” (meaning they haven’t killed anyone yet, or at least not without copious amounts of alcohol and other illegal drugs in their system first), while alcohol and tobacco kill thousands of people each year. I’m sure the cost to the UK from alcohol is something like £3bn a year.

    It’s hard to take anything they say about drugs seriously when it’s clear they don’t value evidence *at all*.

  6. Bishop Gillian Wakefield said,

    September 19, 2009 at 12:25 pm

    Done.
    www.humyo.com/F/9812625-1402364629

  7. rkane said,

    September 19, 2009 at 1:41 pm

    Guardian – Corrections and clarifications

    Ben Goldacre’s Bad Science gets it wrong shock? What’s all this about then?

    From The Guardian Corrections column, Saturday 19 September 2009 www.guardian.co.uk/theguardian/2009/sep/19/corrections-clarifications

    Bad Science:
    Magnetism, mystery and plain muddle, 20 June, page 16, we wrongly identified The Times as the publisher of an article that appeared in The Sunday Times on 14 June with the headline: Oceans charge up new theory of magnetism. Bad Science criticised the way The Sunday Times reported research by Professor Gregory Ryskin, of Northwestern University in the US, because his paper did not, as The Sunday Times claimed, say that Earth’s magnetic field may be produced by ocean currents. Prof Ryskin suggested, instead, that small fluctuations in the field may be related to the movement of oceans. Unfortunately, when we edited Bad Science we removed a sentence, included in the copy submitted to us, which reported that The Sunday Times said Prof Ryskin had approved its coverage. We apologise for this error.

    In the same Bad Science column we made the mistake of saying that the Scottish Daily Express, rather than the Scottish Sunday Express, published a story about survivors of the Dunblane school shooting based on material taken from social networking sites. Bad Science made an analogy between the Scottish Sunday Express’s use of private information about the lives of the Dunblane teenagers and The Sunday Times’s use of a science paper posted by Prof Ryskin, for discussion by the science community, on a pre-publication internet archive several years ago. The Sunday Times has complained that this was unfair. We accept that the extent to which that comparison was open to argument would have been clearer if we had included the response from The Sunday Times. The Sunday Times interviewed Prof Ryskin in connection with its report; it also showed him a draft (though not the final version) of its report before publication and took some of the changes he requested into consideration.

  8. fontwell said,

    September 19, 2009 at 3:01 pm

    You have to conclude that this sort of thing goes on for internal political reasons somewhere, since it can have no useful impact on the general public. But I would love to know why they really go ahead with these things. That, and invading Iraq.

  9. Diversity said,

    September 19, 2009 at 5:24 pm

    Ah well, at least the project produced an excellent paper in 2002 on many of the experimental design questions; and the government have actually released it without fighting a Freedom of Information Act request!

    The only reason I can think of for continuing the study, after the commentary on its design had been received, is that the people deciding the question had a wistful hope that the very-low-significance comparisons would produce propaganda material in favour of the initiative. (Instead, the result moved the Bayesian prior very slightly – insignificantly – in the direction of discrediting the Blueprint approach.) I suggest that the people who took the decision to continue need to pass a compulsory course in the basics of statistical reasoning. Darrell Huff’s booklet ‘How to Lie with Statistics’ would be a suitable text.

  10. JackH said,

    September 20, 2009 at 12:43 pm

    Sounds like the Home Office analysts had sufficient influence to ensure that the report was accurate (and that was probably a massive achievement in itself, given the environment in which I suspect they’re operating) but had bog-all influence over the way the research was designed in the first place.

    Why do these things happen? I suspect that there’s often a toxic mix of (a) basic ignorance and (b) incentives to do the wrong thing – it’s cock-up and conspiracy. If you’re a manager and you negotiated your project a £6m budget, you’ve little incentive to give up “your” money, or to even listen to arguments that you could achieve the same outcomes with less cash. Your budget is of course a measure of how important you are amongst your peers and a bigger budget can mean faster career progression – it looks good on your CV.

    And, back in the boom, when this one started, a big budget was better for ministers – spending was always investment, and investing £6m means you’re committed to this important issue. It would hardly have been worth an announcement (an end in itself) if it had been a few hundred grand on a tiny pilot. (One good side to the fiscal crisis is that these incentives for both civil servants and ministers to maximise spending must now be reducing somewhat.)

    In this case, maybe the advice from Prof Bird arrived at the wrong time. Once a minister has given something the nod, it would be pretty counter-cultural to re-visit that and also bad for your career to say, look, we got this wrong, we should do it this way instead. And the ignorance just exacerbates things; the briefing for Coaker was probably written by someone who thought that if you’re trying something then that’s a trial.

    The perverse incentives might not matter so much if there were just a little more basic scientific literacy and numeracy – so if a minister asks “how many people will this trial help?” someone in the room has the minimal amount of brains and balls required to say “Well minister, we don’t know how many people it will help because we don’t yet know if it will actually help. That’s why we’re doing a trial and having a control group. But we’re looking at a sample size of X”, or whatever, rather than “We’re looking at running this exciting new programme in 23 schools minister, which means that X children would be helped”

    The policy leads often lack the basics and the analysts often don’t get in the room. I suspect that, often, if they’re considered at all, they’re condemned as too purist and kept out of the loop until after decisions have been made – so they can’t advise but can only mop up and push for caveats to be appended to rotten reports.

    Perhaps it’s culturally seen as heartless to suggest spending less on doing stuff that might help and spending more on finding out what stuff would help. Perhaps part of the problem is that we live in a democracy and people like the stuff that might help, and ministers know they like it.

    I’ve no idea how the civil service science problem can be fixed – although there are, to be fair, multiple programmes ongoing.

  11. WilliamOfCrockham said,

    September 23, 2009 at 8:52 pm

    Why did they go ahead, knowing the data would be useless? It’s a mystery. Surely the £6m couldn’t have had anything to do with it.

  12. J.McGuinness said,

    September 26, 2009 at 3:53 pm

    Long time reader, first time commenter.

    Mark Easton has been covering this story over on the BBC News website, for the last week or so. He’s revealed that the Minister in charge was the current Defence Secretary, Bob Ainsworth.

    The academics who gave the advice now regret that they didn’t push their reservations about the design of the trial/evaluation further than they did. They believed, perhaps naively, that when they gave their advice back in 2002 (stating that the research would be pointless), that would be the end of the evaluation. As we have seen, it wasn’t.
