The least surrogate outcome

April 5th, 2008 by Ben Goldacre in bad science, drurrrgs, evidence based policy, politics, statistics

Ben Goldacre
The Guardian,
Saturday April 5 2008

There’s this vague idea – which has been going around for the past few centuries – that statistics is quite difficult. But in reality the maths is often the least of your problems: the tricky bit comes way before the number crunching, when you are deciding what to measure, how to measure it, and what those measurements mean.

The new drugs strategy has been published, with outcomes that will be measured to see if it works or not. However you cut the cake, we should be clear: measuring drug-related death is difficult. You could look at death certificates to see what’s listed, but they’re often filled out by junior doctors, and aren’t very informative or reliable. You need to decide where to draw the causal cut-off. Does HIV count as a drug-related death if you got it from a needle full of heroin? From sex work to fund that needle?

How about if it kills you 10 years after you become abstinent, or you die from chronic, grumbling hepatitis C from a needle, or chronic, seeping pus-ridden abscesses bulging deep in your groin from years of injecting into your femoral veins?

And that’s before we get to crack-frenzy violence and drug driving. What if there was no toxicology done? What if there was, but they didn’t test for the drug the person took? What if the coroner finds some drugs in the blood, but doesn’t think they were related to the death? Are they consistent in making that call?

The new government drugs strategy solves this tricky problem by simply not measuring drug-related deaths as an outcome any more. It was a key indicator in the last strategy document 10 years ago, but you won’t see death mentioned once in Drugs: Protecting Families and Communities Action Plan 2008-2011.

You won’t see death mentioned in Public Service Agreement Delivery Agreement 25, which includes measured outcomes such as the number of users in treatment and the rate of drug-related offending. A lot of drug users die. Death, even if you don’t like drug users, is important.

But beyond the vicissitudes of how you collect these figures, there is the interpretation and analysis; and the greatest irony is that the government may have dumped drug-related deaths two weeks ago simply because it failed to realise that the figures might be quite good.

Overall, drug-related deaths show no great improvement over the years. But what if older people over, say, 35 – users from the great injecting epidemic of the 1980s – were dying at a greater rate, while young people, the target of great effort, are dying at a slower rate? That’s what some people reckoned, and that’s what it looks like from an analysis of the figures by the biostatistics unit in Cambridge, who presented their findings as a poster, exactly two weeks ago, at the same time, by some bizarre coincidence, that the government announced its deathless drug strategy.

If I’m not mistaken, the government’s desire to cough over the unflattering death stats may represent an entirely new category of bad science: being too dumb to know when you’ve done well.




I’m not aware of any good work on being too incompetent to assay your own competence. The phenomenon of being too incompetent to assay your own incompetence is discussed in some detail in one of my favourite academic papers of all time:

Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments
Journal of Personality and Social Psychology
1999, Vol. 77, No. 6, 1121–1134
Justin Kruger and David Dunning
Cornell University


“People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.”

It can be found here:

If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!

12 Responses

  1. pzoot said,

    April 5, 2008 at 6:39 am

    Re: being incompetent to assay your own competence: the Kruger and Dunning paper does discuss the finding that the most able also tended to underestimate their ability.

  2. Kess said,

    April 5, 2008 at 8:47 am

    So most normal people tend to underestimate their abilities, but the truly incompetent overestimate theirs.

    That probably explains why the most clueless managers always seem to get promoted at my workplace!

  3. frontierpsychiatrist said,

    April 5, 2008 at 10:32 am

    Before we all tell ourselves that we’d never act in anything other than a rational way, it’s worth mentioning the fundamental attribution error:

    (nb: beloved of the membership exam of the Royal College of Psychiatrists)

    Basically we’ve all a tendency to think that our shortcomings are to do with bad luck (i.e. situational), but everyone else had theirs coming to them (i.e. dispositional).

  4. stever said,

    April 5, 2008 at 10:40 am

    They may lack the sophistication to understand the death stats but the Government do not lack competence in spinning the stats to their advantage. (see for more discussion)

    In last year’s drug strategy consultation document they claimed: “Drug-related deaths have fallen from 1,538 in 1999 to 1,506 in 2005” [1].


    – The Government claims drug-related deaths have fallen very slightly, the consultation document describes a fall of 2% between 1999 and 2005, a statistically insignificant fall of 32.

    – The source for this claim is unclear, however, as the reference given is for a report which looks at drug deaths between 2000 and 2004 [2].

    – It is also noteworthy that the Government has chosen to quote the level of drug-related deaths in 1999 as a benchmark rather than 1998 which has a higher figure but is the benchmark year used for other statistics in the document. The annual drug death figures are very volatile from year to year (indicating problems with the measures, and with using them as a measure of success), allowing them to be easily cherry picked to show success where none is evident.

    – According to the latest Office for National Statistics (ONS) figures [3], drug-related deaths have actually risen by just over 10% from 1,457 in 1998 to 1,608 in 2005.
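    To make the “statistically insignificant” point concrete: a rough sanity check (my own sketch, not anything from the strategy document) is to treat each annual death count as approximately Poisson, in which case the difference of two independent counts has variance roughly equal to their sum, giving a simple z-statistic:

    ```python
    import math

    def poisson_z(n1, n2):
        """Two-sample z-test for Poisson counts: under the null hypothesis of
        equal underlying rates, n1 - n2 has variance approximately n1 + n2."""
        return (n1 - n2) / math.sqrt(n1 + n2)

    def two_sided_p(z):
        """Two-sided p-value from the standard normal distribution."""
        phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
        return 2.0 * (1.0 - phi)

    # The government's comparison: 1,538 deaths (1999) vs 1,506 (2005)
    z_gov = poisson_z(1538, 1506)
    # The ONS comparison: 1,457 deaths (1998) vs 1,608 (2005)
    z_ons = poisson_z(1457, 1608)

    print(f"1999 vs 2005: z = {z_gov:.2f}, p = {two_sided_p(z_gov):.2f}")
    print(f"1998 vs 2005: z = {z_ons:.2f}, p = {two_sided_p(z_ons):.3f}")
    ```

    On this (admittedly crude) model the 32-death fall is well within random year-to-year noise, while the 151-death rise in the ONS series is not — consistent with the volatility and cherry-picking points above.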

    N.B. the true level of drug-related deaths is likely to be higher than the ONS suggests, as its figures only include deaths which are directly linked to drug use and occur shortly after use (and which are recorded on death certificates). Its figures do not include deaths caused by longer-term use, such as respiratory diseases, nor deaths from indirect causes such as infectious diseases, car accidents and violence. [4]

    The United Kingdom has the 2nd highest rate of drug-related deaths in Europe [5].

    1. Drug Strategy consultation document: “Our communities: your say”, 25th July 2007, Key Facts (Annex A)

    2. Health Statistics Quarterly Spring 2006, Office for National Statistics

    3. Health Statistics Quarterly Spring 2007, Table 3, page 86, Office for National Statistics

    4. For more discussion and data see the Transform Fact Research Guide to drug deaths here:

    5. European Monitoring Centre for Drugs and Drug Addiction, 2006 Annual Report.

  5. gimpyblog said,

    April 5, 2008 at 12:03 pm

    stever, what do the per capita drug death figures look like? Surely this would be a better way of determining a rise or fall in the number of deaths due to drugs. With recent immigration, the per capita figure might give a lower value than simply recording the number of deaths.

  6. muscleman said,

    April 5, 2008 at 12:09 pm

    Pzoot, the reason I think the most competent underestimate their abilities is that to get very competent you need a culture of striving all the time to improve. Which requires that you have a vision of perfection that you are trying to reach, and of course you never do. Those who are not competent measure themselves against simple competence, which is a different thing.

  7. RS said,

    April 5, 2008 at 1:34 pm

    gimpy – difficult to claim a real fall in drug deaths just by increasing the denominator by adding in a likely highly atypical new immigrant population.

  8. Natorum said,

    April 7, 2008 at 9:44 am

    I have a copy of the wiki page for Dunning-Kruger pinned to my desk at work. I’m sure the irony is lost on management.

  9. Jamie Horder said,

    April 7, 2008 at 10:12 am

    shmuck: In cases like this the understanding is that journal papers contain sufficient information that readers can get an accurate idea of the tests used. If you want to get all the details they are taken to be available on request from the authors (or, increasingly nowadays, they are made available as “Additional Material” online). It would take up a lot of space if this kind of thing was included in the main text of every article.

  10. heavens said,

    April 9, 2008 at 1:09 am

    I think that participants in these studies may have had trouble adjusting their perception of the group to match the actual group.

    If we measured IQ for the people I’m around regularly, most of my friends — my default “peer group” — would probably score more than one standard deviation above the national median. So when I told myself in school that I was perfectly normal, I meant that I was “perfectly normal, compared to the top 3% of students in the nation,” not “perfectly normal compared to the average student in the entire country.”

    In the absence of substantial information about the other people in a study, I would likely tend to underestimate my performance: I happen to do well with logic puzzles and grammar, but surely there will be several people in the room who will do better, because I’m “perfectly normal.”

    I assume that the same logic works for the bottom quartile: they are “perfectly normal” — compared to their internalized peer group, which is almost certainly other people who get lousy marks.

  11. heavens said,

    April 9, 2008 at 2:15 am

    I also want to know: Do the less competent people in these studies tend to think that “success” (as measured, say, by grades in school) is a matter of luck?

    If you’re really bad at predicting when you’ve done well, then “good scores seem to be random,” or at least outside my control, would eventually seem like a reasonable interpretation of your data.

  12. sideshowjim said,

    April 10, 2008 at 5:44 pm

    I guess that Dunning-Kruger thing explains most of the contestants on the apprentice…