Saturday April 5 2008
There’s this vague idea – which has been going around for the past few centuries – that statistics is quite difficult. But in reality the maths is often the least of your problems: the tricky bit comes way before the number crunching, when you are deciding what to measure, how to measure it, and what those measurements mean.
The new drugs strategy has been published, with outcomes that will be measured to see if it works or not. However you cut the cake, we should be clear: measuring drug-related death is difficult. You could look at death certificates to see what’s listed, but they’re often filled out by junior doctors, and aren’t very informative or reliable. You need to decide where to draw the causal cut-off. Does HIV count as a drug-related death if you got it from a needle full of heroin? From sex work to fund that needle?
How about if it kills you 10 years after you become abstinent, or you die from chronic, grumbling hepatitis C from a needle, or chronic, seeping pus-ridden abscesses bulging deep in your groin from years of injecting into your femoral veins?
And that’s before we get to crack-frenzy violence and drug driving. What if there was no toxicology done? What if there was, but they didn’t test for the drug the person took? What if the coroner finds some drugs in the blood, but doesn’t think they were related to the death? Are they consistent in making that call?
The new government drugs strategy solves this tricky problem by simply not measuring drug-related deaths as an outcome any more. It was a key indicator in the last strategy document 10 years ago, but you won’t see death mentioned once in Drugs: Protecting Families and Communities Action Plan 2008-2011.
You won’t see death mentioned in Public Service Agreement Delivery Agreement 25, which includes measured outcomes such as the number of users in treatment and the rate of drug-related offending. A lot of drug users die. Death, even if you don’t like drug users, is important.
But beyond the vicissitudes of how you collect these figures, there is the interpretation and analysis; and the greatest irony is that the government may have dumped drug-related deaths two weeks ago simply because it failed to realise that the figures might be quite good.
Overall, drug-related deaths show no great improvement over the years. But what if older people over, say, 35 – users from the great injecting epidemic of the 1980s – were dying at a greater rate, while young people, the target of great effort, are dying at a slower rate? That’s what some people reckoned, and that’s what it looks like from an analysis of the figures by the biostatistics unit in Cambridge, who presented their findings as a poster, exactly two weeks ago, at the same time, by some bizarre coincidence, that the government announced its deathless drug strategy.
If I’m not mistaken, the government’s desire to cough over the unflattering death stats may represent an entirely new category of bad science: being too dumb to know when you’ve done well.
I’m not aware of any good work on being too incompetent to assay your own competence. The related phenomenon of being too incompetent to assay your own incompetence, however, is discussed in some detail in one of my favourite academic papers of all time:
Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments
Journal of Personality and Social Psychology
1999, Vol. 77, No. 6, 1121-1134
Justin Kruger and David Dunning
“People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.”
It can be found here: