Saturday July 21, 2007
There is no sense in which I am a hardliner on trials, and I’m totally down with the idea that there can be many different kinds of evidence, but one thing has always puzzled me: in these days of “evidence based thinking” in Whitehall, why don’t we do randomised controlled trials on social policy?
One statistician, Professor Sheila Bird, will hear this week if a grant has come through to do just that, and if the money appears it will be a first for the UK. How deep is the problem?
A while ago we introduced Drug Treatment and Testing Orders, under which offenders attend a clinic for rehabilitation instead of serving a custodial sentence. By 2003, not long after their launch, 21,000 convicted offenders had been given DTTOs. But because there was no randomised trial to see if they fared better than those sent to prison, nobody really knows whether DTTOs work or not, for reducing re-offending, or drug taking, or anything.
Like all the best problems, the barriers are institutional and historical: and the objections raised against trials in social policy are exactly the same as those raised in medicine 40 years ago.
Judges will say, as doctors once did: we have expertise, we know what works for an individual. Interestingly, there is a way to test this, too, in a trial. You divide prisoners awaiting sentence into two groups: in one, each prisoner is randomised to either a DTTO or a custodial sentence; in the other, each is given a DTTO or a custodial sentence at the judge's discretion. Then measure whatever outcomes you think are important between the "judge decision" group and the "randomised" group, and there's your answer on judges' discretion.
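For readers who like to see a design spelt out, here is a minimal sketch in Python of the allocation scheme described above. Everything in it is illustrative: the function names are invented, and the "judge" arm is just a placeholder coin flip, since in a real trial that decision would come from an actual judge, not a random number generator.

```python
import random

def allocate(offenders, seed=0):
    """Sketch of the two-stage allocation described in the text.

    Each offender is first randomised between two groups:
      - "randomised": sentence (DTTO vs custody) decided by chance
      - "judge": sentence decided by judicial discretion
        (modelled here as a coin flip purely as a stand-in)

    Returns a list of (offender, group, sentence) tuples; outcomes
    for the two groups would then be compared at follow-up.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    allocations = []
    for offender in offenders:
        group = rng.choice(["randomised", "judge"])
        if group == "randomised":
            sentence = rng.choice(["DTTO", "custody"])
        else:
            # placeholder for the judge's real decision
            sentence = rng.choice(["DTTO", "custody"])
        allocations.append((offender, group, sentence))
    return allocations
```

The point of the design is that comparing outcomes between the two groups tests the judges' claimed expertise itself, not just the sentences.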
But even in discussions about sentencing, judges may be in denial about the extent of disagreement within their profession. Iain Chalmers, founder of the Cochrane Collaboration and champion of evidence-based medicine, once described putting several obstetricians in a room, all absolutely certain they knew best for individual patients, and all objecting to trials, simply so they could witness how certain, and how divided, their colleagues were.
There is also wider professional resistance: social policies feel good, like alternative therapies; but, like alternative therapies, most policies don't work. In the field of recidivism, even uncontrolled studies turn up few successful interventions.
When you know, deep down, that your intervention doesn't work, you're not going to subject yourself to public exposure through a randomised trial. And if you really believe your policies are effective, then when the results come back negative you're simply going to argue that trials aren't the right way to test a policy, for some cod-philosophical reason or other. Just like the quacks. Medicine, perhaps through the ubiquity of death, has learnt to accept failure better. Most of the drugs developed don't work either, but we don't say "let's stop bothering"; we say, "great, we've stopped wasting money and side effects on ineffective treatments".
Lastly, people will object because, even though citizens are already being subjected to an experimental new policy, flagging that uncertainty with a trial makes it explicit. This was a key issue with early trials in medicine, but medicine overcame its squeamishness decades ago. The crucial early trials on surgical options for breast cancer were highly emotionally charged, but necessary, precisely because the options were equipoised.
If we can do RCTs on something as horrifying as whether women have their breasts removed, then we can do RCTs on social policy. Best of luck, Prof Bird.
· Please send your bad science to email@example.com