Ben Goldacre, The Guardian, Saturday 14 May 2011
Politicians are ignorant about trials, and they’re weird about evidence. It doesn’t need to be this way. In international development work, resources are tight, and people know that good intentions aren’t enough: in fact, good intentions can sometimes do harm. We need to know what works.
In two new books published this month – “More Than Good Intentions” and “Poor Economics” – four academics describe amazing work testing interventions around the world with proper randomised trials. This is something we’ve bizarrely failed to do at home.
Is business training useful? There’s a randomised trial on it in Peru. What about business mentors? In Mexico, they ran a randomised trial. Now think about all the different initiatives in the UK to support small businesses, or to help people find work. Do they work? Nobody knows: without trials, nobody can know.
Randomised trials are our best way to find out if something works: by randomly assigning participants to one intervention or another, and measuring the outcome we’re interested in, we exclude all alternative explanations for any difference between the two groups. If you don’t know which of two reasonable interventions is best, and you want to find out, a trial will tell you.
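The logic of random assignment can be shown with a toy simulation. Everything here is invented for illustration — the 30% baseline success rate, the 15-point intervention effect, and the sample size are assumptions, not figures from any real trial — but it demonstrates how randomisation lets a true effect show up as a clean difference between groups:

```python
import random
import statistics

def run_trial(participants, effect=0.15, seed=42):
    """Simulate a randomised trial: assign each participant to control
    or intervention by coin flip, simulate a binary outcome, and
    return the success rate in each group."""
    rng = random.Random(seed)
    control, treated = [], []
    for _ in range(participants):
        # Random assignment: any other difference between the groups
        # is down to chance, not selection.
        group = treated if rng.random() < 0.5 else control
        # Hypothetical 30% baseline success rate; the intervention
        # adds `effect` on top.
        success_rate = 0.30 + (effect if group is treated else 0.0)
        group.append(1 if rng.random() < success_rate else 0)
    return statistics.mean(control), statistics.mean(treated)

control_rate, treated_rate = run_trial(10_000)
print(f"control: {control_rate:.3f}, treated: {treated_rate:.3f}")
```

With a large enough sample, the observed gap between the two groups converges on the true effect, which is exactly why a well-run trial settles the question.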
Microfinance schemes help small producers buy in bulk to make larger profits, and they change lives. But are group-liability loans better, because people default less, so the project is more sustainable? Or do anxieties about shared responsibility restrict recruitment? Some academics ran a trial.
Do free uniforms improve school attendance, especially among pupils who don’t own one at all? Someone ran a trial. Contingent payments improve attendance: but what’s the best time to pay, and how? There’s another trial. What about streaming in Kenyan schools, with high and low ability classes? Do all kids do better? Someone ran a trial. Maybe different strategies to encourage saving work best in different places? Innovations for Poverty Action ran a series of trials to find out, in the Philippines, in Bolivia, in Peru.
I won’t tell you the results, for any of those projects: because this isn’t about good news on what works, or bad news about what doesn’t. What matters is that someone ran a randomised trial and found the answer.
This week the papers and parliament were filled with uninformed wittering on sex education. If the goal is to delay sexual activity, or reduce sexually transmitted infections, and you don’t know what age to start, or what to teach, then stop wittering: define your outcome, randomise schools to different programmes, and you’ll have the answer by the end of next parliament.
Do long prison sentences work? At the moment sentences are hugely variable anyway: so randomise properly and run a trial. Different teaching approaches? Run a trial. Harder exams? Run a trial. Job-seeking support? Run a trial. This isn’t rocket science: the first trial was in the Bible. And I’m certainly not saying these are the best UK policy trials you could run. In fact, the most important part of evidence-based policy is identifying where there is uncertainty.
So here is my fantasy. We sack the Behavioural Insights Team – all they’ll do is overextrapolate from behavioural economics research – and open a Number Ten Policy Trials Unit instead.
They sit down to write a giant list of unanswered questions, for situations where we don’t know if an intervention works: this will be most of them. Then we filter down to questions where a randomised trial can feasibly be run. Then we do them.
This won’t cost money: it will save money, in unprecedented amounts, by permitting disinvestment in failed interventions, and it will transform the country. It’s efficient, it’s sensible, and it will never happen, because politicians are too ignorant of these simple ideas, too arrogant to have their ideologies questioned, and too scared – let’s be generous – of hard data on their good intentions.