Ben Goldacre, The Guardian, Saturday 20 August 2011

What do all these numbers mean? “‘Worrying’ jobless rise needs urgent action – Labour” was the BBC headline. They explained the problem in their own words: “The number of people out of work rose by 38,000 to 2.49 million in the three months to June, official figures show.”

Now, there are dozens of different ways to quantify the jobs market, and I’m not going to summarise them all here. The claimant count and the labour force survey are commonly used, and number of hours worked is informative too: you can fight among yourselves for which is best, and get distracted by party politics to your heart’s content. But in claiming that this figure for the number of people out of work has risen, the BBC is simply wrong.

Here’s why. The “Labour Market” figures come through the Office for National Statistics, and they’ve published the latest numbers in a PDF document. On page 13, top table, 4th row, you will find the figures the BBC are citing. Unemployment aged 16 and above is at 2,494,000, and has risen by 38,000 in a quarter (32,000 in a year). But you will also see some other figures, after the symbol “±”, in a column marked “sampling variability of change”.

Those figures are called 95% confidence intervals, and they are one of the most useful inventions of modern life.

We can’t do a full census of everyone in the population every time we want some data, because that would be too expensive and time-consuming for monthly data collection. Instead, we take what we hope is a representative sample.

This can fail in two interesting ways. Firstly, you’ll be familiar with the idea that a sample can be *systematically* unrepresentative: if you want to know about the health of the population as a whole, but you survey people in a GP waiting room, then you’re an idiot.

But a sample can also be unrepresentative simply by chance, through something called sampling error. This is not caused by idiocy. Imagine a large bubblegum vending machine, containing thousands of blue and yellow bubblegum balls. You know that exactly 40% of those balls are yellow. When you take a sample of 100 balls, you might get 40 yellow ones, but in fact, as you intuitively know already, sometimes you get 32, sometimes 48, or 37, or 43, or whatever. This is sampling error.
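That intuition is easy to check with a quick simulation — a sketch (toy code, nothing official about it):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Draw five samples of 100 balls from a machine where exactly 40% are yellow.
# The count of yellow balls varies from sample to sample purely by chance.
for _ in range(5):
    yellow = sum(random.random() < 0.40 for _ in range(100))
    print(yellow)  # hovers around 40, but rarely lands exactly on it
```

Run it a few times with different seeds and you will see counts in the low 30s and high 40s, exactly the drift described above.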

Now, normally, you’re at the other end of the telescope. You take your sample of 100 balls, but you don’t know the true proportion of yellow balls in the jar – you’re trying to estimate that – so you calculate a 95% confidence interval around whatever proportion of yellow you get in your sample of 100 balls, using a formula (in this case, 1.96 times the square root of ((0.6×0.4) ÷ 100)).
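That formula is a one-liner in any language — a sketch in Python, using the numbers from the text:

```python
import math

p = 0.40   # observed proportion of yellow balls in the sample
n = 100    # sample size

se = math.sqrt(p * (1 - p) / n)  # standard error of a sample proportion
margin = 1.96 * se               # half-width of the 95% confidence interval

print(f"95% CI: {p - margin:.3f} to {p + margin:.3f}")  # 0.304 to 0.496
```

So a sample of 100 pins the true proportion down only to somewhere between roughly 30% and 50% — a usefully humbling amount of slack.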

What does this mean? Strictly (it still makes my head hurt) this means that if you repeatedly took samples of 100, then on 95% of those attempts, the true proportion in the bubblegum jar would lie somewhere between the upper and lower limits of the 95% confidence intervals of your samples. That’s all we can say.

So, if we look at these employment figures, you can see that the changes reported are clearly not statistically significant: the estimated change over the past quarter is 38,000 but the 95% confidence interval is ±87,000, running from -49,000 to 125,000. That wide range clearly includes zero, no change at all. The annual change is 32,000, but again, that’s ±111,000.
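The check is trivial arithmetic — a sketch using the quoted figures:

```python
# ONS-quoted changes with their sampling variability (±95% CI half-widths).
changes = {
    "quarterly": (38_000, 87_000),
    "annual":    (32_000, 111_000),
}

for label, (estimate, margin) in changes.items():
    lo, hi = estimate - margin, estimate + margin
    significant = not (lo <= 0 <= hi)  # significant only if the CI excludes zero
    print(f"{label}: {lo:+,} to {hi:+,} -> significant: {significant}")
    # quarterly: -49,000 to +125,000 -> significant: False
```

Both intervals straddle zero comfortably, so neither change is distinguishable from no change at all.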

I don’t know what’s happening to the economy: it’s probably not great. But these specific numbers tell us nothing, and there is an equally important problem arising from that, one that is frankly more enduring, and more damaging to meaningful political engagement.

We are barraged, every day, with a vast quantity of numerical data, presented with absolute certainty, and fetishistic precision. In reality, many of these numbers amount to nothing more than statistical noise, the gentle static fuzz of random variation and sampling error, making figures drift up and down, following no pattern at all, like the changing roll of a dice. This, I confidently predict, will never change.

## John said,

August 22, 2011 at 1:16 pm

I read the comments on your article in the Guardian. Doesn’t it make you wonder why you bother?

## JasonDick said,

August 22, 2011 at 1:17 pm

Well, what’s really worrying is that jobs are stagnant even in the face of high unemployment rates, as seen here:

research.stlouisfed.org/fred2/series/GBRURAMS?cid=32280

If these numbers are representative, the unemployment rate simply has never recovered from the recession, and that is a horrible, horrible situation for millions of people in the UK.

But yeah, you’re absolutely right that these sources are completely wrong to harp on this minuscule increase in unemployment that is just statistical noise, because the next blip upward in employment that is within the statistical noise will probably be hailed as “green shoots” or whatever the equivalent UK term is. The real travesty is the stagnation that the government in the UK is doing nothing about (we have the same thing going on in the US, basically).

## Mark Attwood said,

August 22, 2011 at 2:27 pm

The problem with this is, as you are well aware, that the truth doesn’t make a story. Unfortunately the pure mathematics of it effectively renders the exercise useless.

Hence, what could we do instead? Although this is a wider issue, to quote you, ‘information saves lives.’ This is just as true here, perhaps with less immediate impact than it has in medicine. There is no reason – beyond incompetence, short-sightedness and political expediency from all sides (if the data is of questionable worth then it’s easier to argue about) – why this needs to be sample data. In the era of powerful computing and cheap data storage that we live in, it is easily possible for this to be real data, with the only lack of confidence being errors in data input – if government systems were specified and designed in a properly integrated manner (a manner most people unfortunately presume is already the norm).

The question then would be whether the discussion would be any more enlightened, which is perhaps doubtful, but at least it would close another loophole that can be used to polarise and fail to act in equal measure.

## pauldwaite said,

August 22, 2011 at 2:45 pm

@Mark:

“In the era of powerful computing and cheap data storage that we live it is easily possible for this to be real data with the only lack of confidence being errors in data input”

I wouldn’t say it’s easily possible. The cost of CPU power and storage isn’t (as far as I know — anyone from the ONS or with any actual experience please chime in) the limiting factor in having monthly employment statistics that measure everyone in the country. You still need people to report and record employment statuses, then collate the data and deal with duplications, and that’s only once you’ve decided what you’re actually measuring. (I remember one month a couple of years ago, both employment and unemployment fell.)

Accurate reporting, on the other hand (in this case, “official figures showed no change in unemployment”), costs no more than inaccurate reporting. It might not make as much money as inaccurate reporting, but then staying at home of a Monday night isn’t as lucrative as looting.

## philipbowman said,

August 22, 2011 at 3:30 pm

What’s really amused me over the last week have been the “News Articles” on the state of the Stock Markets:

“Shares are Up!”

“Shares are Down!”

“Shares are up again – no, they’re not…”

## FlipC said,

August 22, 2011 at 4:43 pm

With a rough mortality rate of 0.79% for the UK, that means that of the original 2.54m, around 19,000 died before these figures were taken. That means unemployment has increased by 57,000. Except that includes not only those who have lost their jobs, but those who have just turned 16.

Say 700,000 of those born 16 years ago managed to make it this far. Of those, a maximum of 57,000 are now unemployed – approximately 8% of 16-year-olds.

So yeah unemployment’s up, but is it outstripping our birth rate?

## Chris Neville-Smith said,

August 22, 2011 at 8:07 pm

Oh, are you back? We thought you’d run away from this site and taken refuge in guardian.co.uk. Did I say something wrong?

## twaza said,

August 22, 2011 at 8:13 pm

Ben

This is the best explanation of the point of 95% CIs that I have read.

Unfortunately it won’t help journalists draft an attention-grabbing headline or create a storyline they think will be followed.

Journalists need to be taught, not just that the truth is more important than spin, but that the truth is actually more interesting than fiction.

## Richard J Wilkinson said,

August 22, 2011 at 8:23 pm

Stats? I can do that, gissa job eh? – Yosser Hughes

## Chris Neville-Smith said,

August 22, 2011 at 8:33 pm

On a far more important note than silly little concerns about the economy or unemployment or misrepresentation of public information:

How come the comments section has a heading of “7 responses” when there are 8?

## Pensfold said,

August 22, 2011 at 8:37 pm

I suspect that by taking a series of surveys month by month, there is a reduction in the statistical error of the series and even of the latest reading, provided it is within a similar range to the series.

However, I don’t know the maths that could be applied to determine this.

## Jammydodger said,

August 22, 2011 at 10:11 pm

It’s a little bit more complicated than that!

(sorry couldn’t resist..)

Couple of points:

1. What is the chance that, given this confidence interval, unemployment has actually fallen rather than increased?

With a few assumptions this is easy to calculate.

The 95% CI limit corresponds to roughly 2 standard deviations (s.d.) from the mean. Therefore we can take an s.d. of 40,500 and put this around our +38,000 mean estimate.

Then we can use the known (but a little esoteric) equations that describe the normal distribution to calculate the probability that the true value of the change is below 0. Note that none of this represents proof: it is just that an alpha of 0.05, corresponding to 95% probability, is accepted by custom and practice among most scientists as sufficiently robust evidence.

This statement – i.e. that based on this single sample set, there is an 82.4% chance unemployment is increasing and a 17.5% chance it is falling – is, in a vacuum, a pretty useless statement.

But the data does not exist in a vacuum. There are other pieces of data that put alongside may lead to a potentially useful economic diagnosis. e.g. Reducing household income, difficulty in obtaining credit for businesses, reductions in government spending.

2. All of the above assumes a lot (data is normally distributed, samples are random, independent etc.). But these assumptions are mostly pretty reasonable.

However, one detail that has stuck in my mind is that the quoted C.I. from which I derived my s.d. estimate was derived from the overall unemployed figure, rather than from the change in employment status of individuals. This is an important detail.

If we sampled completely different people in Q1 vs Q2, we have 2 independent estimates to contend with. The above applies.

But if we sampled the exact same people twice, and asked them in both quarters whether or not they were employed, then we can have a great deal more confidence in the 38000 increase value.

This is because we would be measuring a real change in a fixed sample set and extrapolating to a population, rather than taking the 2 independent estimates of an unknown value and subtracting one from the other to get a change. In this event the confidence intervals for the actual numbers (un)employed would be as described, but the confidence interval over whether the numbers had increased or decreased would not be the same thing. We would know absolutely whether our sample had increased or decreased, and by how much – we’d have a measured effect. The question would then be whether the effect seen within our subsample (a) was statistically significant and (b) whether the sample that we had physically measured was representative of the change in the population as a whole.

Different confidence intervals entirely would apply to this scenario from those applying to the numbers unemployed.

So – in those circumstances – we could still know that unemployment had increased, and that it had increased by about 38,000, yet be unsure of the actual numbers unemployed to within 100,000.

I have no idea which scenario applies in this circumstance.
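The gap between the two scenarios shows up clearly in a toy simulation — all the rates below are invented for illustration and have nothing to do with the actual LFS design:

```python
import random
import statistics

random.seed(42)
N = 1_000        # people surveyed per quarter (toy number)
P_Q1 = 0.08      # chance a person is unemployed in Q1
P_LOSE = 0.02    # chance an employed person becomes unemployed by Q2
P_FIND = 0.25    # chance an unemployed person finds work by Q2

def person_q1():
    return random.random() < P_Q1

def person_q2(was_unemployed):
    if was_unemployed:
        return random.random() > P_FIND  # stays unemployed unless they find work
    return random.random() < P_LOSE

def independent_change():
    # Two separate samples: estimate each quarter independently, then subtract.
    q1 = sum(person_q1() for _ in range(N))
    q2 = sum(person_q2(person_q1()) for _ in range(N))
    return q2 - q1

def paired_change():
    # The same people surveyed twice: measure each person's actual change.
    change = 0
    for _ in range(N):
        u1 = person_q1()
        change += person_q2(u1) - u1
    return change

ind = [independent_change() for _ in range(1000)]
par = [paired_change() for _ in range(1000)]
print("s.d. of independent estimate:", round(statistics.stdev(ind), 1))
print("s.d. of paired estimate:     ", round(statistics.stdev(par), 1))
# The paired s.d. comes out at roughly half the independent one here.
```

Subtracting two independent estimates adds their variances together, whereas the paired design only picks up the variance of the (much rarer) status changes — hence the tighter estimate of the change.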

## Jammydodger said,

August 22, 2011 at 10:17 pm

Bah… copy paste errors: Unable to edit the above

It should say: there is a 17.4% chance unemployment fell (given the estimated rise of 38,000), using the normal cumulative density function equations.

Apologies for lack of clarity.

## Jammydodger said,

August 22, 2011 at 10:26 pm

What WordPress publishes seems to bear no resemblance to what I am typing!!!! Try again:

Based on sd 40500, 17.4 percent chance unemployment fell.

82.6 percent chance unemployment rose.

50 percent chance it rose by more than 38000.
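For what it’s worth, those figures check out under the stated assumptions — a normal sampling distribution with mean 38,000 and s.d. 40,500 (the s.d. being the commenter’s own estimate, not an official figure):

```python
import math

def normal_cdf(x, mu, sigma):
    # Cumulative distribution function of a normal distribution,
    # written via the error function so no external library is needed.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mean, sd = 38_000, 40_500  # assumed sampling distribution of the change

p_fell = normal_cdf(0, mean, sd)  # P(true change < 0)
print(f"chance unemployment fell: {p_fell:.1%}")       # about 17.4%
print(f"chance unemployment rose: {1 - p_fell:.1%}")   # about 82.6%
```

And since the assumed distribution is symmetric about its mean, the chance the true rise exceeds 38,000 is exactly 50 percent, as stated.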

## cellocgw said,

August 23, 2011 at 9:10 pm

Just out of interest, does your gov’t calculate unemployment the way us west-of-the-ponders do, i.e. based on job seekers/benefits filers? We don’t count anyone who is not actively looking and actively recording his search. The US true unemployment rate is always badly underestimated as a result.

## Chris Neville-Smith said,

August 23, 2011 at 10:21 pm

If I understand things correctly, the figure you are talking about is reported as the “claimant count”, which the Government reports in addition to the unemployment figure. I’m not exactly sure what the government counts as “unemployed”, but the unemployment figure is, I believe, always significantly higher than the claimant count.

## klo433 said,

December 2, 2012 at 2:32 pm

Hello there! I am writing a statistics paper on the article you referenced. Would you mind supplying me with the following information:

n=?

S.d.=?

Test statistic=?

Critical values=?

P value =?

Alpha=? (I assumed its 0.05)

Thank you!

## klo433 said,

December 2, 2012 at 2:56 pm

Sorry, 1 more ^^^^

Population mean=?

## Peter Bell said,

December 23, 2014 at 6:50 pm

Can you tell me why statistical arguments are inadmissible in county courts?