Testing Social Policy

July 21st, 2007 by Ben Goldacre in statistics

Ben Goldacre
Saturday July 21, 2007
The Guardian

There is no sense in which I am a hardliner on trials, and I’m totally down with the idea that there can be many different kinds of evidence, but one thing has always puzzled me: in these days of “evidence based thinking” in Whitehall, why don’t we do randomised controlled trials on social policy?

One statistician, Professor Sheila Bird, will hear this week if a grant has come through to do just that, and if the money appears it will be a first for the UK. How deep is the problem?

A while ago we introduced Drug Treatment and Testing Orders, under which offenders attend a clinic for rehabilitation instead of serving a custodial sentence. In 2003, just moments after they were launched, 21,000 convicted offenders had been given DTTOs. But because there was no randomised trial to see if they fared better than those sent to prison, nobody really knows whether DTTOs work or not, for reducing re-offending, or drug taking, or anything.

Like all the best problems, the barriers are institutional and historical: and the objections raised against trials in social policy are exactly the same as those raised in medicine 40 years ago.

Judges will say, as doctors once did: we have expertise, we know what works for an individual. Interestingly, there is a way to test this, too, in a trial. You divide prisoners awaiting sentence into two groups: one is randomised to either a DTTO or a custodial sentence, the other given a DTTO or a custodial sentence at the judge’s discretion. Then measure whatever outcomes you think are important between the “judge decision” group and the “randomised” group, and there’s your answer on judges’ discretion.
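
To make the design concrete, here is a minimal simulation sketch in Python. The re-offending probabilities, the risk score and the judge_decision rule are all invented for illustration; the sketch only shows how the two groups would be formed and compared, not what a real trial would find.

```python
# Illustrative sketch only: the two-group design described above,
# with entirely made-up re-offending probabilities.
import random

random.seed(42)

N = 2000  # hypothetical number of offenders awaiting sentence

def simulate_reoffence(sentence):
    """Return True if the offender re-offends, using invented probabilities."""
    p = {"DTTO": 0.45, "custodial": 0.55}[sentence]
    return random.random() < p

def judge_decision(offender_risk):
    """A toy stand-in for judicial discretion: high-risk offenders get custody."""
    return "custodial" if offender_risk > 0.6 else "DTTO"

results = {"randomised": [], "judge": []}

for _ in range(N):
    risk = random.random()  # hypothetical measure of offender risk
    # First split: which arm of the study the offender goes into
    if random.random() < 0.5:
        sentence = random.choice(["DTTO", "custodial"])  # second randomisation
        arm = "randomised"
    else:
        sentence = judge_decision(risk)
        arm = "judge"
    results[arm].append(simulate_reoffence(sentence))

for arm, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{arm:>10} group: n={len(outcomes)}, re-offending rate={rate:.2%}")
```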

But even with discussions about sentencing, judges may be in denial about the extent of disagreement within their profession. Iain Chalmers, founder of the Cochrane Collaboration and champion of evidence-based medicine, once described putting several obstetricians in a room – all absolutely certain they knew best for individual patients, and who objected to trials – simply so they could witness the disagreement and certainty of their colleagues.

There is also wider professional resistance: social policies feel good, like alternative therapies; but, like alternative therapies, most policies don’t work. In the field of recidivism, even uncontrolled studies have turned up few successful interventions.

When you know, deep down, that your intervention doesn’t work, then you’re not going to subject yourself to public exposure through a randomised trial. And if you really believe your policies are effective, then when the results come back negative you’re simply going to argue that trials aren’t the right way to test a policy. For some cod-philosophical reason or another. Just like the quacks. Medicine, perhaps through the ubiquity of death, has learnt to accept failure better. Most of the drugs developed don’t work either, but we don’t say “let’s stop bothering”; we say, “great, we’ve stopped wasting money and side effects on ineffective treatments”.

But lastly, people will object because, even when you’re subjecting citizens to an experimental new policy, flagging the uncertainty with a trial reinforces that uncertainty. This was a key issue with early trials in medicine, and we overcame our squeamishness decades ago, right up at the front of the game. The crucial early trials on surgical options for breast cancer were highly emotionally charged, but necessary, simply because the options were in equipoise.

If we can do RCTs on something as horrifying as whether women have their breasts removed, then we can do RCTs on social policy. Best of luck, Prof Bird.

· Please send your bad science to bad.science@guardian.co.uk



34 Responses



  1. le canard noir said,

    July 21, 2007 at 1:03 am

    It could redefine politics. Imagine a party of evidence-based policy vs. a party of ‘alternative’ policy. Then again, we still need to find consensus on what we want to achieve in society through policy.

  2. AitchJay said,

    July 21, 2007 at 2:52 am

    That’s a bloody interesting article Ben, well done.

  3. Kimpatsu said,

    July 21, 2007 at 3:29 am

    According to David Fraser in “A Land Fit for Criminals”, sentencing at the judge’s discretion is all about keeping costs down, rather than preventing recidivism or punishing wrongdoing. Consequently, any RCT that doesn’t take that as the objective is going to fail.

  4. superburger said,

    July 21, 2007 at 8:07 am

    “one is randomised to either DTTO or custodial”

    how do you get a criminal to consent to going to prison rather than on a DTTO?

    If you don’t tell them they’re in prison on the luck of the draw then that’s slightly immoral.

    If you do tell them they’re on a treatment trial and they’re in prison for that reason, and had a 50/50 chance of ending up on a DTTO, then the anger/resentment/frustration could result in them increasing their chances of re-offence.

  5. stever said,

    July 21, 2007 at 9:41 am

    Prof Bird on drug policy:

    “In the face of all the evidence, thorough research into the possibility of legalisation is the only intelligent thing to do.”

    Professor Sheila Bird, Principal Statistician at the Medical Research Council Biostatistics Unit.

    Cambridge Evening News March 2006

  6. dbhb said,

    July 21, 2007 at 9:44 am

    le canard noir said:
    “It could redefine politics. Imagine a party of evidence-based policy”

    I’ve been wanting *that* for ages.

    Ben for PM!

  7. Tim Worstall said,

    July 21, 2007 at 10:17 am

    “But because there was no randomised trial to see if they faired better than those sent to prison”

    Is prison supposed to make you blonder (or not, as the case may be)?

    On the larger point, much of economics (and for all I know, sociology too) is a search for proper trials without actually assigning people to them. Hong Kong grew like gangbusters (as did Taiwan), Mainland China less so up to 1979. We might thus conclude that having a homicidal maniac insisting upon backyard steel production grows an economy less well than buggering off and leaving people to it. Or less well than the fascism lite of Taipei.

  8. Robert Carnegie said,

    July 21, 2007 at 11:01 am

    Postcode prescribing. Can’t that be used for clinical and social policy trials? And, as we know, no one is going to object.

    There is a huge problem of fairness. Human rights legislation probably makes most social policy trials that you could imagine illegal as acts of the government.

    The main exception is application of different legislation in Scotland, Wales, and Ireland. The “community charge” for instance – you know, the poll tax.

    You’re also going to have people moving around to get into or out of the social policy experiment zones. Although most Scots, Welsh etc. do draw the line at that.

    That probably covers drug sentencing, too. But frankly I’d be astonished if random sentencing trumps judges’ discretion. It’s not a very good example. Are you perhaps hungry for some letter page action?

  9. Will said,

    July 21, 2007 at 11:11 am

    Ben, if you want some info on an RCT feasibility study for interventions for offenders that my old company (Matrix Research and Consultancy) was involved in for the Home Office, then let me know: I will contact the CEO and see if they would be able to send you a copy. It would be interesting to examine their conclusions in light of your above post.

  10. Deano said,

    July 21, 2007 at 11:12 am

    If all policies were evidence based, that wouldn’t really leave politicians a lot to fight over would it?

  11. RS said,

    July 21, 2007 at 11:22 am

    I don’t know why RCTs should make any difference. The government ignores the available evidence in a whole range of areas at the moment. The problem is an institutional one – policy makers (Oxbridge humanities graduates to a man ;-)) regard the statisticians and their ilk as simply an adjunct whose job is to provide ammunition to further whatever policy agenda has already been agreed upon. They take a rather post-modern approach to evidence and data, seeking to twist and distort it to support their position, rather than regarding it as a reflection of the true state of the world.

  12. Gimpy said,

    July 21, 2007 at 11:39 am

    One of the reasons why RCTs are not used is because ultimately it is the public who decide on the direction of politics by voting. Should we remove the vote and use RCTs to determine which party is most fit for office?

  13. Will said,

    July 21, 2007 at 12:19 pm

    This feasibility study by David Farrington is publicly available:

    www.homeoffice.gov.uk/rds/pdfs2/rdsolr1402.pdf

    I’ll try to get the other one.

  14. Will said,

    July 21, 2007 at 12:22 pm

    Unfortunately, I have been told by the boss at Matrix that their study has not been made public. It doesn’t look like it will be in the near future if ever, probably because of the perceived controversy of its subject matter.

  15. Will said,

    July 21, 2007 at 1:00 pm

    Another RCT feasibility study at a prison:
    www.homeoffice.gov.uk/rds/pdfs2/rdsolr0303.pdf

    Just for my 2p, there has been a lot of controversial research into the efficacy of restorative justice in the USA and Australia for different types of offender. The obvious problem with all the research is that it leads to a very publicly unpopular conclusion that different punishments may be more effective and cheaper for different inmates, i.e. male white inmates would get a custodial sentence, while female white inmates get restorative justice measures for the same offence – this rubs up against the public’s view of fairness and punishment, despite the potential increase in efficacy/harm reduction. Look up “Lawrence W. Sherman” and the “Jerry Lee Program in Randomized Controlled Trials of Restorative Justice” on Google if interested.

  16. Ben Goldacre said,

    July 21, 2007 at 1:36 pm

    the feasibility studies are great, and i’ve had a poke around on places like the campbell collaboration (social policy equivalent of cochrane) but does anyone have a back-of-an-envelope list of completed and published RCTs in social policy? especially UK ones.

    testament to the culture gap is this: someone at the IPPR yesterday remembered there had been an RCT on something, but after chatting more it turned out it used the last telephone digit to randomise subjects into one group or another. 20,000 people in the study apparently (can’t remember what it was on) but then they messed up by using a schoolboy bogus method of randomisation.

    too weird.

  17. Don Cox said,

    July 21, 2007 at 5:18 pm

    “If all policies were evidence based, that wouldn’t really leave politicians a lot to fight over would it?”

    Yes. They will fight over where they want to go. Only the best way to get there would be decided by research.

  18. superburger said,

    July 21, 2007 at 5:54 pm

    “If all policies were evidence based……”

    Brave New World, etc etc.

  19. fropome said,

    July 21, 2007 at 7:24 pm

    Oldfart: Actually, as I understand it, just about all experimentation in social research is done in the USA… they are much bigger fans of it than elsewhere – certainly than the UK. I mentioned the Scared Straight programs above, but you could also for example look up the Minneapolis domestic violence experiment, and its follow-ups elsewhere in the states.

  20. Dr* T said,

    July 21, 2007 at 10:19 pm

    Ben ben Ben, don’t tease me so….

    I’m afraid Stats is not my strong point (as if anything is), but would you mind expanding on the last-digit telephone number snippet from #16 – why is that such a schoolboy bogus method of randomisation?

  21. Robert Carnegie said,

    July 21, 2007 at 11:04 pm

    On reflection – I’m ex-directory myself…

  22. wewillfixit said,

    July 21, 2007 at 11:25 pm

    I think we are starting to confuse random allocation (to treatment/control group) with random sampling of the population…

  23. wewillfixit said,

    July 21, 2007 at 11:50 pm

    Can also be associated with lack of allocation concealment. True randomisation is always better than these quasi-random allocation procedures.
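
As a minimal sketch of that distinction (assuming allocation by even/odd last telephone digit, with invented phone numbers): a quasi-random rule is a deterministic function of something already known about each participant, so the allocation can be predicted (and gamed) before enrolment, whereas a concealed, truly random sequence cannot.

```python
# Illustrative sketch: quasi-random allocation (by last telephone digit)
# versus true random allocation. All phone numbers here are invented.
import random

random.seed(0)

phone_numbers = [f"0207946{random.randint(0, 9999):04d}" for _ in range(10)]

def quasi_allocate(number):
    """'Schoolboy' method: even last digit -> intervention, odd -> control.
    Deterministic, so anyone who sees the number knows the allocation in
    advance: there is no allocation concealment."""
    return "intervention" if int(number[-1]) % 2 == 0 else "control"

def true_allocate():
    """True randomisation: the allocation cannot be predicted from anything
    known about the participant before enrolment."""
    return random.choice(["intervention", "control"])

for n in phone_numbers:
    print(n, "quasi:", quasi_allocate(n), "| random:", true_allocate())
```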

  24. timsenior said,

    July 23, 2007 at 2:18 am

    “Then measure whatever outcomes you think are important ”

    And surely there’s the rub, at least in part.

    In medicine, we tend to agree what outcomes are important – death is pretty important – and we’ve developed some ways of measuring things like quality of life and disability-adjusted life years. We mostly know to criticise drug companies for choosing outcomes they know they can reach (which of you is interested in your LDL-C at 6 weeks after starting your statin, when you could have one that actually stops you dying as early as you might?)

    The Daily Mail would presumably like an outcome of “whatever would punish those crims the most”, inviting the prospect of a fun questionnaire survey (“How would you rate the enjoyment of your time in jail? 1. Hugely enjoyable – would love to come back, to 5. Hated it – let me out of here”). Most of us reading this, it seems, would like an outcome of “not going back to prison”, “getting a job” or something.

    I’m not saying that getting at these outcomes is impossible, but that it probably needs a lot of discussion and thinking about.

  25. wilsontown said,

    July 23, 2007 at 1:22 pm

    This was a very interesting piece, especially in the light of the following Grauniad story:

    www.guardian.co.uk/science/story/0,,2130777,00.html

    Now, I have to admit it’s not exactly clear to me what’s happening to the Science and technology committee, but there’s surely not enough evidence in policy making as it is.

  26. jfdbob said,

    July 23, 2007 at 5:42 pm

    There is a salient point made earlier that it can be very difficult to double-blind social policy interventions – the subjects tend to know whether or not they are in the intervention or control arm.

    However there is still a need to do better research. In particular we need to get better at setting out and measuring outcomes from social policies.

    The real ‘problem’ is not just of ‘da meeja’, but of democracy and the need for public policy decisions to be legitimate and accountable. Criminal justice is a good case, as mentioned above, where there are conflicting objectives, other than maximising the Quality-Adjusted Life Years of a clinical trial.

    Public policy theorists sometimes try to frame these conflicts using public value theory.

    To turn the tables on medical research, are there lessons that you could learn from social research? For instance, do you take economic and social costs into account when considering the risk-benefit ratio for new interventions? Do you think about what patients and the public value in medicine?

    Anyway, here is the Savings Gateway pilot mentioned at ippr that used random digit dialling to recruit participants. I haven’t read it but the authors Ipsos Mori and Institute for Fiscal Studies have good reputations in the public policy field.

    www.hm-treasury.gov.uk/media/7/0/savings_gateway_evaluation_report.pdf

  27. evidenceanaorak said,

    July 25, 2007 at 12:20 pm

    I like your stuff Ben, but how evidence based is it to contend that Dr Bird’s would be the first UK-based social policy trial? Among those I can think of in the last few years are RCTs of daycare, smoke alarms, sex education. Trials may not be the only fruit, and need to be better planned in both clinical work and social policy, but there are some. People interested in the growth of research evidence-informed policy in education, social welfare and criminal justice might be interested to look up the Campbell Collaboration – sister collaboration to evidence-based medicine’s Cochrane.

  28. evidencekagoul said,

    July 25, 2007 at 3:14 pm

    Hmmm. I’m not totally convinced by Ben’s claim (or “hunch” might be more accurate) that social policy trials are “very small” and “generally” poorly performed – this is based on what, Ben? I don’t have to look at many Cochrane reviews to work out that “medical” trials are also, er, small, and poorly performed. (That’s apart from the ones that are large and poorly performed). Conversely, I don’t have to look very far to find large, well-and-poorly performed trials of social policies. For example, I have just taken a folder off the shelf behind me labelled “Social experiments” into which I stuff interesting papers describing “social” RCTs, in the hope of reading them before they moulder to dust. One (old) paper in that folder is by Bob Boruch which describes 300 RCTs carried out in schools, hospitals, prisons and other settings (Boruch et al. Evaluation Quarterly Nov 1978 pp654-697).
    I’ve another more recent report which suggests that there are >10,000 “social” RCTs. Undoubtedly social policy experiments are much much less common than healthcare RCTs. But it’s a bit of a leap to suggest from a quick trawl of the Campbell website that social policy RCTs are all very small and crap. I’m overlooking here the implication (or is it a syllogism?) of Ben’s last comment that “all trials are scientific…therefore all things that are scientific are trials”.

  29. evidencekagoul said,

    July 26, 2007 at 5:37 pm

    Evidence anorak lists three (I think all of these have been reported in the BMJ but may be mistaken). I agree there is undoubtedly a dearth of social RCTs in the UK though, and the ratio of social to healthcare trials in general is remarkably low. e.g., there are > half a million entries on the Cochrane central database, and about 10,000 on the Campbell Collaboration’s equivalent. But it’s a bit of a stretch from that, to the assumption that social policy trials are generally very small and poorly-performed – particularly as there are a small number of researchers in the UK actually doing such difficult & innovative RCTs (I’m not one of them though) – e.g. SHARE, RIPPLE, ERA, day care, smoke alarms

  30. Ben Goldacre said,

    July 26, 2007 at 5:39 pm

    what three did he give? i hope it doesn’t look aggressive to restate, but i’d really be interested to see 3 recent large, well-conducted RCTs on social policy from the UK.

  31. rosieman said,

    July 27, 2007 at 11:27 am

    Interesting article. There was a special issue of The Annals of the American Academy of Political and Social Science in 2003 on just this topic. For three random UK trials, try this:

    Using Random Allocation to Evaluate Social Interventions: Three Recent U.K. Examples

    I’ve sent you a PDF.

  32. evidencekagoul said,

    July 27, 2007 at 1:04 pm

    Great, that’s the 3 RCTs sorted. Now, all we need is the actual data to back up the following hyperbolic statement: “…there are very few RCTs, they are generally poorly performed and very small.”…

  33. rosieman said,

    July 27, 2007 at 6:34 pm

    Yep,

    ann.sagepub.com/content/vol589/issue1/

    I’ve gotta admit I study politics and work as a research assistant for an international economist, and I’ve never come across an RCT in my day-to-day studies / work. There’s also a general attitude amongst academics that they’re “impossible” or nearly so in political science, and woe betide the academic or student who drops the word “experiment” into a paper. Political scientists don’t experiment – they compare different policy options.

    As for “evidence based policy making”, this usually seems to consist of commissioning a thinktank or academic to do a quick and dirty secondary literature review over the course of a couple of weeks, then releasing it with a misleading press statement if it conforms vaguely to the decision you’ve already taken. If it doesn’t, it might surface in some form in an institute’s working paper collection.

    Ah Politics!

  34. prometheus said,

    July 28, 2007 at 8:28 pm

    Apologies if any of this has been covered already, I just didn’t have the energy to read all previous comments. The first response to Ben’s query re: doing RCTs of social policy would have to be ‘what do you mean by social policy?’ Any further responses would depend very much on how social policy was defined. However, the fact is that there have been quite a number of RCTs of social policies in their broadest sense – I am currently conducting a systematic review of such policies. They were very popular in the US in the 60s and 70s, particularly in the field of housing.

    However, their use in this field is limited by ethical and practical concerns (how do you justify allocating an income maximisation intervention to only some of those who need it? How do you double-blind, or in many cases even single-blind, a social policy? It can be difficult to agree what the outcomes of interest are, and then to develop means of measuring these effectively, etc etc). In many cases, researchers have agreed to adopt a next-best approach, by using before-and-after designs of a single cohort (i.e. just the intervention group).

    Anyway, don’t have access to any relevant links just now, but I know that the Campbell Collaboration have been involved in RCTs in a number of areas of social policy (crime and education for example).