Ben Goldacre, The Guardian, Saturday 31 July 2010
It’s the near misses that really make you want to shoot your own face off. This week the Centre for Policy Studies has published a pamphlet on education which has been covered by the Mirror, the Mail, the BBC, the Telegraph, the Express, the Guardian, and more. Boris Johnson endorses it.
The report examines why one third of children have reading difficulties at the age of 11, and concludes it is because of a lack of discipline, and the absence of a teaching system called “synthetic phonics”. The report contains lots of anecdotes, but barely mentions the evidence.
In 2006 the government published a systematic review and meta-analysis of all the trials ever to look at phonics, which you can read in full online. Skip the rest of this boring paragraph. There were 14 trials in total looking at reading accuracy as their outcome, and collectively they found some evidence that phonics are a little better. Then there were 4 trials looking at comprehension, which found only weak evidence of benefit. Finally there were 3 trials on spelling, which collectively found no benefit for phonics. All of these trials were tiny, and when I say tiny, I mean they had between 12 and 121 children, mostly at the lower end of that range. Only one trial was from the UK.
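(For anyone unfamiliar with the mechanics: a meta-analysis pools the effect estimates from many small trials, weighting each one by its precision, so that collectively they say more than any single tiny study can. Below is a minimal, illustrative sketch in Python of fixed-effect inverse-variance pooling; the effect sizes and sample sizes are invented for illustration and are not the figures from the 2006 review.)

```python
# Minimal sketch of fixed-effect inverse-variance meta-analysis.
# The (effect size, sample size) pairs below are invented, purely to
# illustrate the arithmetic -- they are NOT taken from the 2006 review.
import math

trials = [(0.30, 40), (0.15, 24), (0.45, 12), (0.10, 121), (0.25, 60)]

def pool(trials):
    """Pool standardised mean differences, weighting each trial by 1/variance."""
    weights = []
    weighted = []
    for d, n in trials:
        # Approximate variance of a standardised mean difference,
        # assuming two equal arms of n/2 children each.
        var = 4.0 / n + d * d / (2.0 * n)
        weights.append(1.0 / var)
        weighted.append(d / var)
    pooled = sum(weighted) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

effect, (low, high) = pool(trials)
print(f"pooled effect {effect:.2f}, 95% CI {low:.2f} to {high:.2f}")
```

The only point of the toy numbers is that a handful of trials with between 12 and 121 children each still leaves a wide confidence interval around the pooled estimate, which is part of why the review's conclusions are so tentative.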
Many teachers feel the evidence is not compelling, and don’t like phonics. To be fair, there really isn’t enough evidence to say phonics definitely works. The pamphlet recognises this. So how do we move forward? Should we run a large, well-conducted randomised trial?
No. The think tank have it all worked out, and so does Boris. Their innovative solution is taken seriously by every newspaper in the country. “It is time to end this culture war,” says Boris in the Telegraph: “to try to settle once and for all, in the minds of the teachers, whether synthetic phonics is the complete answer or not… It is surely time for the Government to organise a competition, a shoot-out between the two methods, to see which is the most effective for children of all abilities.” Both expand on this idea. Read for yourself. They don’t mean a trial. They really do want a competition.
By now you don’t need me to tell you how dumb this suggestion is, but in case anyone in power is reading: there is no room for debate here, a “competition” between schools who’ve chosen one or other method is definitely and unambiguously flawed by design. We run randomised trials, where the schools are randomly assigned to one method of teaching or another, for one very simple reason: we want to make sure that the two groups of schools – the ones doing the phonics, and the ones using the other methods – are as similar as possible for all other factors.
If we don’t randomise, “using phonics” might not be the only difference between the two groups of schools. Maybe the schools using the strict phonics systems tend also to be run – and attended – by hardworking disciplined nerds like me: if this is the case, then those schools might do better on literacy tests because of the nerdiness, rather than because of the phonics.
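To make that concrete, here is a toy simulation (my own illustration, not anything from the report or the review) which assumes, purely for the sake of argument, a small true phonics effect and a larger "discipline" effect. When schools self-select their method, the naive comparison inflates the apparent benefit of phonics; when schools are randomly assigned, it does not.

```python
# Toy simulation of the confounding problem: disciplined schools are more
# likely to choose phonics, and discipline independently raises scores.
# All effect sizes here are invented for illustration.
import random

random.seed(0)

def literacy_score(uses_phonics, discipline):
    # Assume a small true phonics effect (+2 points) and a larger
    # discipline effect (+10 points), plus some noise.
    return 50 + 2 * uses_phonics + 10 * discipline + random.gauss(0, 5)

def apparent_phonics_effect(randomised, n_schools=10000):
    phonics, others = [], []
    for _ in range(n_schools):
        discipline = random.random()  # 0 = chaotic, 1 = very disciplined
        if randomised:
            uses_phonics = random.random() < 0.5         # coin-flip assignment
        else:
            uses_phonics = random.random() < discipline  # disciplined schools opt in
        (phonics if uses_phonics else others).append(
            literacy_score(uses_phonics, discipline))
    return sum(phonics) / len(phonics) - sum(others) / len(others)

print("self-selected 'competition':", round(apparent_phonics_effect(False), 1))
print("randomised trial:           ", round(apparent_phonics_effect(True), 1))
```

With these made-up numbers the self-selected comparison exaggerates the phonics effect substantially, and the whole of the excess comes from the "nerdiness", not from the phonics.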
Why have large, robust, randomised trials not already been done? Because people like Boris don’t demand them; because teachers often believe – as doctors once did – that their expertise and intuition make such tests irrelevant and undesirable; and lastly, because many academics in the field of education inexplicably resist them.
This is a relatively new tragedy. In education, as in medicine, there is potential to do enormous good, but also incalculable enduring harm through failure: and recognising that, some of the earliest examples of randomised trials are from education. Many of these predate the 1948 MRC trial of streptomycin which is widely (and incorrectly) regarded as the first proper randomised trial. In 1928, Remmers took the worst 200 students of one freshman year and randomised them to receive either remedial teaching, or teaching as usual, and measured the difference in outcomes at the end of their course. In 1931 Walters did a randomised trial to see if counselling improves student performance. In 1933 Remmers was at it again, running a randomised trial to see if having exams at the end of the first term on a course improved a pupil’s outcome in final exams. Education researchers helped to pioneer randomised trials, a lifetime ago, but then abandoned them.
We expend a vast amount of money and effort on assessing children, without much evidence that this does them any good at all; but we make no attempt to cheaply and systematically assess the teaching profession’s various education methods, despite knowing for an absolute fact that this would bring incalculable benefits for every generation to follow. Instead we have Boris and some think tank wittering on about a “competition”: and everyone takes them seriously.
nezumi said,
August 2, 2010 at 12:42 pm
Just noticed I made an apostrophe mistake. Should take my own advice and proofread before posting 😛
maizie said,
August 2, 2010 at 2:19 pm
I’ve ‘borrowed’ this from someone else’s post on this topic on a completely different forum:
“The Rose Report 2006, which advocated systematic synthetic phonics and getting rid of multi-cueing, WAS based on good evidence – and this evidence was then checked by the all-party Science & Technology committee, who gave it a clean bill of health: www.publications.parliament.uk/pa/cm200506/cmselect/cmsctech/900/900we24.htm
For an analysis (Prof. McGuinness is trained in statistical analysis) of the Torgersen, Brooks and Hall review:
dyslexics.org.uk/comment.pdf”
The Torgerson et al review is the one erroneously described by Ben as “…a systematic review and meta-analysis of all the trials ever to look at phonics”. It actually only looked at a tiny number of all the trials ever done…
Marcus Hill said,
August 2, 2010 at 2:39 pm
@oldandrew: I don’t have time to dig out a lot of references, but I’ll give you the ones I have to hand. If you hit Google Scholar up for recent papers by S Hallam and J Ireson on grouping, you’ll find a number especially on its effect on pupil motivation in primary schools.
Rubin, Beth C. (2006) ‘Tracking and Detracking: Debates, Evidence, and Best Practices for a Heterogeneous World’, Theory Into Practice, 45(1), 4–14 is a decent (US-based) overview of “detracking” (i.e. mixed ability teaching) research (not Maths specific).
There’s a section on alternatives to ability grouping in maths in the literature synthesis by Slavin, Lake and Groff (2008) at mac.wiki.ciu20.org/file/view/mhs_math_Sep_8_2008.pdf – actually, the review is relevant to this discussion as a whole (as I realised on skimming it after over a year since I last looked at it!) as it has a collection of studies in maths education with some fairly stringent quality criteria for inclusion. One of these is that the studies had to be controlled and either randomised or matched, and there is a bit of analysis at the end of the paper showing that matching seems to give results as good as randomising (end of p40), so “randomised” seems to be a lot less important than “controlled” in education research.
oldandrew said,
August 2, 2010 at 3:44 pm
Marcus Hill,
None of your references above fit the description you gave. Just to remind you, you suggested, in a comment where you appeared to favour quantitative research that “the research evidence tends to indicate that mixed ability teaching in Maths is significantly better for the lower ability pupils and is no worse than teaching in sets for higher ability pupils”.
Do you actually know of quantitative research which shows this?
raven said,
August 2, 2010 at 3:55 pm
@ Marcus – does the evidence you mention show mixed ability teaching for maths is more effective at all ages, or just primary?
I don’t know if our experience is typical for the UK – from Yr1 onwards, work in groups of 5 or 6 in Maths & English is in ability groups, and the work is ‘differentiated’ to match ability.
I have wondered if perhaps using that system throughout primary is actually limiting progress of some of those in the low ability groups.
David Perkins said,
August 2, 2010 at 7:07 pm
@oldandrew It is true that it can’t be properly blinded and that’s a major factor, including feedback effects from parents and teachers during the study, but there are also issues of perceived agency. People who consent to being part of a trial are more likely to accept the consequences of that decision than someone who doesn’t feel they were given a choice. Also, people in medical trials are often not connected beyond them, but students and parents within classes, schools, even districts and beyond tend to be highly connected, so their organizing potential is high and their potential sympathetic audience (parents and people who plan to one day be parents) is enormous.
Happy Camper said,
August 2, 2010 at 9:55 pm
Project ‘Follow Through’ was the biggest, longest, and most expensive study of instructional methods in education. It lasted several decades, included thousands of students across many schools, and cost 1 billion dollars. Of all 20 or so methods looked at, there was one clear winner: Direct Instruction. This has not been taken up in the US for political reasons, and no-one here in the UK is in any hurry to either.
The evidence is in. It is time to change over to Direct Instruction, whatever your intuition may tell you.
oldandrew said,
August 2, 2010 at 10:12 pm
“This has not been taken up in the US for political reasons, and no-one here in the UK is in any hurry to either.”
Actually it was government policy (at least in core subjects) under David Blunkett (i.e. 1997-2001). It was abandoned, bit by bit, without debate or explanation, during the long period of drift after Blunkett. It may be policy again soon. Both Michael Gove and Nick Gibb seem to believe in it, although unfortunately they seem more concerned with putting time and money into setting up new types of school. There is also the related issue of how much particular methods should be forced on teachers. Blunkett and his successors were widely seen as overstepping the mark in this respect, and the infra-structure set up to enforce direct instruction methods was also used to remove them.
oldandrew said,
August 2, 2010 at 10:19 pm
Apologies for the rogue hyphen in the word “infrastructure” there.
Marcus Hill said,
August 3, 2010 at 9:47 am
@oldandrew: Try Boaler, Wiliam and Brown (2000) “Students’ Experiences of Ability Grouping – disaffection, polarisation and the construction of failure” British Educational Research Journal, Vol 26 No 5, and also some of the studies cited therein. I’m pretty sure there was a book by Boaler based on the whole study as well, published around 2002 after the study in the paper I cited was completed, but I don’t have the reference to hand. Also, I chose the words “tends to indicate” carefully, as the evidence is far from overwhelming.
@raven: Mixed ability seems to be better for the lower ability groups at all ages, but with older children the negative effect on higher ability pupils may (and this is an impression, not evidence based!) increase. It’s difficult to disentangle the effects of grouping on achievement in purely academic terms from the social effects and the expectation of failure in low ability groups that becomes a self fulfilling prophecy. There is also a tendency to put the worst teachers (often not specialists in schools with a shortage of mathematicians) or strings of supply teachers with the lower sets. It is the norm to have within class ability groups in primary schools for Maths and English, though in larger schools there is a trend to set the children for Maths and sometimes English. You can find a picture of the setting practices of schools in England at eprints.ioe.ac.uk/370/1/Hallam2003Ability69.pdf
oldandrew said,
August 3, 2010 at 1:15 pm
“Boaler, Wiliam and Brown (2000)”
This is qualitative research that doesn’t measure academic achievement.
My question was about quantitative research. Is there any that fits your claim about maths teaching?
oldandrew said,
August 3, 2010 at 1:44 pm
Can I take back what I said about Direct Instruction earlier? I thought Happy Camper meant instruction which was direct, I didn’t realise it referred to a specific educational programme entitled “Direct Instruction”.
raven said,
August 3, 2010 at 3:25 pm
Marcus – thanks for the link on ability groups in primary. Somebody needs to tell the CPS that!
Happy Camper – I read some stuff about Project Follow Through last night. There’s a good article here if anyone is interested.
darkwing.uoregon.edu/%7Eadiep/ft/watkins.htm
The most pertinent thing to this discussion is why the US education establishment ignored the evidence from the study – pertinent because there’s similar factors at work in the UK ‘reading wars’.
MedsVsTherapy said,
August 3, 2010 at 3:25 pm
Interesting discussion. It is kind of ridiculous, though, to argue about which method best teaches children to read. Nearly all of us have learned to read, as do most people, billions, barring significant dire social circumstances.
We commenters here are all on the high end of ability. So, the scientists should study how we learned to read, and copy that. Well, if you interview our parents, or survey our various early-childhood learning contexts, or survey our kindergartens, you will find a general core of features, and then a great range of variability.
So, like walking, talking, or potty-training, for nearly all people, nearly all roads lead to Rome.
Evaluating the group of us high-achievers also will certainly mislead, since there might be many of us who received unhelpful methods, yet prevailed due to the amazing forces of natural intellectual development. Some of us were spanked, and some not; what if a huge trial quirks one way or the other with regard to spanking? Would we ban, or institute, spanking? No: the “effect” is certain to be modest, and we know that significant portions of people learn to read either way.
What does this tell us? For the most part, kids can simply be handed over to the sausage factory and run through, and sausage will come out on the other end. No need to worry about the ugly details along the way.
Here is the critical issue: how do we help the modest portion who do not naturally learn to read without much specialized attention and curricula?
For schoolkids in general, it is ridiculous to trial a commercial method versus the vast expanse of dog-eared, stained, spine-broken basic primers. Unless you hold the license and stand to benefit. What benefit might we expect? Kids on average are a half-grade early in reading? A half grade level? Are you kidding me?
Can’t we forgo the cost (license, retraining, and so on), and just be a bit patient with that second half of a grade level?
Here is the issue worth studying: 1. identifying those with noted difficulty learning to read, 2. categorizing the types of problems that constitute the majority of those with difficulty (dyslexia, low IQ, low motivation, poor vision, parents fighting too much at home, attention deficit, and so on), 3. addressing each of these challenges, possibly with case study trials, comparator-group trials, and so on. Someone mentioned Bradford Hill: sure, for each type of learning difficulty, work out various means for developing theories about how the reading process fails to happen as it does for most, and marshal different types of evidence to test various theories, try some interventions to test theories, and maybe at some point carry out a controlled trial on some detail, with a theoretical model as a basis.
Each of those: 1, 2, 3 includes a full research agenda. The only one we might be good at presently is detecting poor/slow/behind-norm readers.
As a psychologist with some training in assessing IQ and learning problems, plus a lot of tutoring experience, I believe I am good at detecting the psychosocial problems – low motivation, anxiety from family conflict, anti-education attitudes in the family, hunger, lack of discipline in the home, and so on.
But I am not as good when it comes to neurocognitive problems such as dyslexia or learning disorders. Each of us experts may be strong in our areas, but delineating reading difficulties into types, reliably, for all kids within a school system, is quite a tall order.
This is no easy matter, and one size does not fit all; “phonics” de rigueur will not solve the problem of all with reading problems – some but not all – and I am pretty sure that the evidence of the billions of readers who never took phonics is proof enough that, for the population generally, phonics will deliver at best a slender advantage.
paddyfool said,
August 3, 2010 at 4:25 pm
@oldandrew,
Could you link to some of this overwhelming evidence for phonics?
@Everyone who knows more about education than me,
How hard would it really be to design two encouragement and support packages for literacy etc., one that specifically endorses and supports phonics and a control package that doesn’t mention phonics but is otherwise similar? Naturally, the teachers already have favoured methods anyway; what matters is whether the support structure for schools backing any particular method makes any difference whatsoever. (Quite possibly it doesn’t).
oldandrew said,
August 3, 2010 at 6:26 pm
“Could you link to some of this overwhelming evidence for phonics?”
I have no idea what research is or isn’t freely available on the internet and I don’t intend to search for it. However, one thing I do know is available is this, which describes a meta-analysis:
www.nichd.nih.gov/publications/nrp/upload/ch2-II.pdf
maizie said,
August 4, 2010 at 10:37 am
@MedVTherapy, who said;
“Here is the issue worth studying: 1. identifying those with noted difficulty learning to read, 2. categorizing the types of problems that constitute the majority of those with difficulty (dyslexia, low IQ, low motivation, poor vision, parents fighting too much at home, attention deficit, and so on), 3. addressing each of these challenges, possibly with case study trials, comparator-group trials, and so on. Someone mentioned Bradford Hill: sure, for each type of learning difficulty, work out various means for developing theories about how the reading process fails to happen as it does for most, and marshal different types of evidence to test various theories, try some interventions to test theories, and maybe at some point carry out a controlled trial on some detail, with a theoretical model as a basis.”
Durh, what a good idea, how extraordinary that no-one has thought of this before…. No-one, that is, apart from an army of cognitive psychologists, neuroscientists, educationalists etc. who have spent the past 4 decades, at least, investigating every conceivable aspect of reading and reading difficulties.
I find it extraordinary that a vast body of work on the subject of reading is completely bypassed or marginalised by any old ‘man in the street’ who can read and who thus knows that the debate is completely pointless…
While billions of people might be able to read, in major English speaking countries such as the UK, USA, Australia & New Zealand it is understood that up to 25% of the population are semi-literate or illiterate. This represents millions of people who are in many ways disabled by not having this essential skill.
This is one organisation’s list of research papers on reading. It is by NO means exhaustive, but is a start if anyone has the slightest interest in the research into reading (as opposed to offering their own uninformed opinion):
www.sedl.org/reading/framework/research.html
raven said,
August 4, 2010 at 11:59 am
I should stress again that most primary schools use phonics in their teaching of reading.
The argument is over how much phonics is needed, whether it is well enough taught, whether using whole word/sight word methods causes problems for some readers and whether phonics is the answer to that, and whether one particular system of phonics is better.
(There are also wider issues about how early we pick up on failing readers, and how targeted and individual is the help they get, do we monitor their progress enough, even how qualified are the people giving the help – but I’m trying not to sidetrack into those !)
maizie said,
August 4, 2010 at 12:37 pm
I like the way that Torgerson et al called for an RCT “to confirm the findings of this review”! Nothing like knowing the answer before you’ve asked the question…
raven, did you read the McGuinness critique of Torgerson et al?
dyslexics.org.uk/comment.pdf
Alan Kellogg said,
August 4, 2010 at 1:05 pm
Worked for me. But then the following conditions applied.
1. I have mild dyslexia.
2. I have Aspergers
3. Mom was trained in phonics and applied her knowledge rigorously.
Result: a small 5-year-old boy learned to read using phonics where the whole word method had failed him.
Far as I can see, the study mentioned above is another example of confirmation bias.
One subject it does not cover is method bias: a person’s bias towards or against a method of teaching. A teacher biased against phonics will tend to teach it poorly. When the teacher is not properly trained in applying the method, results can be especially poor. I have to wonder, of the teachers involved in the study, how many were properly trained in phonics, and how many were anti-phonics to begin with?
A study like this is never about the children alone; their teachers must be considered as well.
Marcus Hill said,
August 4, 2010 at 1:36 pm
““Boaler, Wiliam and Brown (2000)”
This is qualitative research that doesn’t measure academic achievement.
My question was about quantitative research. Is there any that fits your claim about maths teaching?”
You missed out the vital phrase I put in following that – “and some of the studies cited therein”. As I mentioned at the outset, I have a few tangentially related references kicking around and, like you, have no intention of spending time looking for papers I read years ago to present as evidence. The issue of setting in maths is tangential to the topic in any case; I only used it as an example of politicians presenting as a panacea something which is counter to or, at best, unsupported by evidence.
oldandrew said,
August 4, 2010 at 4:53 pm
“You missed out the vital phrase I put in following that – “and some of the studies cited therein”.”
This is because the last time you made a vague claim like that I spent far too much time searching for journal articles only to find ones that said something else entirely. Just to ask again, do you have the details of any quantitative studies that show what you claimed?
“As I mentioned at the outset, I have a few tangentially related references kicking around and, like you, have no intention of spending time looking for papers I read years ago to present as evidence.”
Like me?
If this is a reference to paddyfool’s request for evidence from me about phonics, I declined to search on the internet for links. If he’d asked more generally about how I knew what I was claiming then I could have provided more detail. (Hattie’s recent book surveying meta-analyses covers this question and is the source I have most immediately to hand.)
This is quite different to what you have done. You talked about the importance of quantitative rather than qualitative evidence, then came up with an example of politicians ignoring evidence, only to then repeatedly fail to identify the evidence they are ignoring.
Needless to say, I have severe doubts about whether the evidence you mentioned exists (because I have looked into this issue in the past). If I am mistaken I would like to know, and read the evidence I have missed. If you are mistaken I would be grateful if you would admit it instead of sending me on a wild goose chase.
So I’ll ask again, do you know of any quantitative studies that show what you claimed?
maizie said,
August 4, 2010 at 5:04 pm
@Alan Kellog
You say:”Far as I can see, the study mentioned above is another example of confirmation bias.”
There have been several studies mentioned on here. Which particular one are you talking about? Can you name it please?
Alan Kellogg said,
August 4, 2010 at 6:34 pm
The very first one, mentioned in the original post.
raven said,
August 4, 2010 at 7:53 pm
Hi maizie – in reply to #70.
I read that ‘confirm our findings’ as saying their findings are pretty tentative & need confirming. Plus as Ben said in the article the studies they looked at are small and only one was in the UK.
Yes I have read that opinion piece. She makes a fair point about studies on normal class teaching versus ones on interventions for poor readers – in that perhaps we ought to look at those separately, as it seems reasonable that different things might work in those two situations. And that the meta-analysis could do with a tighter definition of what ‘systematic’ phonics means.
I’m off to look up West Dumbartonshire – I read what they put in place wasn’t just phonics, but a whole system to support literacy & that sounds interesting.
maizie said,
August 4, 2010 at 8:20 pm
@Alan Kellog
I still don’t know which ‘study’ you are talking about. Are you talking about Miriam Gross’s ‘report’, which isn’t a ‘study’ at all in the scientific sense, merely an article about the teaching of reading, or are you talking about the so called ‘meta-analysis’, which isn’t a ‘study’ either?
What on earth does ‘confirmation bias’ (in its statistical sense) have to do with either of them?
nancy brownlee said,
August 4, 2010 at 9:47 pm
Hey, umm, MedVSTherapy- here in the States, the vast majority of people under 40 can’t read. No, really. They can puzzle out a sentence, given a little time, but they can’t really read. They can’t read for information, can’t read a news story and tell you what it said- can’t look up a number in the phone book or a word in the dictionary without help. They’ve been through the see’n’say sausage factories, and they can’t read. I worked for a community college for 7 recent years, and in processing thousands of financial aid applications, I had very few that didn’t need decoding by the applicants. I had 6 on which the kid had misspelled his (or her) name. They can’t read. They all got in to college…
Alan Kellogg said,
August 5, 2010 at 2:49 am
What part of “The report examines why one third of children have reading difficulties at the age of 11, and concludes it is because of a lack of discipline, and the absence of a teaching system called “synthetic phonics”. The report contains lots of anecdotes, but barely mentions the evidence. ” don’t you understand? Or are you just being mulish?
Alan Kellogg said,
August 5, 2010 at 2:51 am
Confirmation Bias: Looking for what confirms your beliefs in preference to contrary data.
maizie said,
August 5, 2010 at 10:20 am
@Alan Kellog
Oh, sorry. I thought you were using the term more ‘technically’, as a critique of a scientific study.
As a matter of interest, what bl**dy ‘contradictory data’ is there in favour of not teaching synthetic phonics (quantitative, not qualitative, please)?
I can’t say whether or not Miriam Gross had any preferences one way or the other before she started researching her report. She’s a journalist, not an educational theorist. I doubt you can, either, unless you happen to know her personally or professionally and know that she is a dyed-in-the-wool synthetic phonics adherent. Accusations of ‘confirmation bias’ would only be valid if you knew for certain that she was deliberately looking for an endorsement of phonics teaching.
There are parts of her ‘report’, such as the comments on streaming etc., which are completely irrelevant to the main premise; which is that schools should be better implementing the official guidance on the initial teaching of reading.
You could, if you like, try to tie ‘phonics’ to ‘right wing’, but in doing so you would just be repeating a mindless and damaging myth. The education of children should not be ‘political’, and it certainly is not for those who are actually delivering it. Academics, theorists and loonies can attach whatever political connotations they like to various pedagogies, but those of us who actually work with children are only ‘political’ in that we want all children to achieve to their potential and to ‘do’ in life, rather than be done to. I know SP practitioners of all political colours.
Marcus Hill said,
August 5, 2010 at 11:38 am
@oldandrew: I really didn’t want to get drawn into this, and have already spent way too much time on justifying a passing comment, but…
Two of the references in Boaler et.al. (2000):
Boaler, J (1997) “Setting, Social Class and Survival of the Quickest”, British Educational Research Journal, Vol. 23, No. 5 (Dec., 1997), pp. 575-595 has a quantitative analysis comparing the results of two similar schools, one of which teaches in sets and one which does not, showing significant advantages in terms of improvements in attainment for mixed ability teaching.
Boaler, J (1997) “When even the winners are losers: evaluating the experiences of `top set’ students” Journal of Curriculum Studies, vol. 29, no. 2, 165–182 includes a little data showing that long term recall of learning by students in top sets is worse than that of students of similar ability in mixed groups.
I also happened to remember another paper due to its distinctive title:
Linchevski, L and Kutscher, B (1998) “Tell Me with Whom You’re Learning, and I’ll Tell You How Much You’ve Learned: Mixed-Ability versus Same-Ability Grouping in Mathematics” Journal for Research in Mathematics Education, Vol. 29, No. 5 (Nov., 1998), pp. 533-554 contains a study in which pupils in an Israeli school were randomly allocated to groups being taught in sets and in mixed ability classes. The results after two years were analysed and showed the lower ability pupils performed significantly better in the mixed ability classes and that the higher ability pupils performed slightly worse, but not to a degree of statistical significance.
Since you claim to have looked into it, I’d be genuinely interested to see any studies which demonstrate anything contrary to the impression I’ve formed of what the research tends to suggest – that is, that ability grouping significantly disadvantages those in lower ability groups and slightly but less markedly advantages those in higher groups in terms of their academic progress; and that it has marked and generally (though not universally) negative impacts in social and affect terms.
Incidentally, looking back on my original comment, I never claimed the research evidence in relation to setting was largely quantitative – you conflated my earlier comments on a preference for quantitative research with my final comments on politicians ignoring research in general. This was not my intention, and I apologise if I wasn’t sufficiently clear on this. As with most education research, this subfield is composed of a surfeit of qualitative or small scale quantitative studies with only an occasional larger quantitative study thrown in.
oldandrew said,
August 5, 2010 at 2:34 pm
“Boaler, J (1997) “Setting, Social Class and Survival of the Quickest””
This is largely qualitative, and what hard data there is doesn’t fit your description.
“Boaler, J (1997) “When even the winners are losers: evaluating the experiences of `top set’ students””
Again, this is qualitative research.
“Linchevski, L and Kutscher, B (1998) “Tell Me with Whom You’re Learning, and I’ll Tell You How Much You’ve Learned: Mixed-Ability versus Same-Ability Grouping in Mathematics””
Well I suppose this is quantitative, but done to such an appallingly low standard as to be utterly worthless.
Can I take it then that your entire statement on what the evidence shows refers only to qualitative research, and this one, dire, quantitative study?
“As with most education research, this subfield is composed of a surfeit of qualitative or small scale quantitative studies with only an occasional larger quantitative study thrown in.”
Do you know of any larger, quantitative studies in this subfield?
Alan Kellogg said,
August 5, 2010 at 4:05 pm
Maizie,
I refuse to answer because you talk mean to me.
Alan Kellogg said,
August 5, 2010 at 4:06 pm
Maizie also jumps to conclusions with no supporting evidence, but that’s another story.
msjhaffey said,
August 5, 2010 at 4:51 pm
Ben
You write (quoting Boris) ‘ “It is surely time for the Government to organise a competition, a shoot-out between the two methods, to see which is the most effective for children of all abilities.” Both expand on this idea.’
But no, he doesn’t expand on it. You’re reading w-a-y too much into his words “shootout” and “competition”. I am quite sure that if you spoke to him and explained the advantage of a rigorous scientific test, he’d endorse it. What he wants is for children to learn to read better and while he might huff and puff at the rigour, I have no doubt he’d support it.
Ben Goldacre said,
August 5, 2010 at 5:03 pm
@msjhaffey
You’re wrong. It is very clear indeed, looking at the full quotes from Boris, and the report whose suggestion he is explicitly endorsing, that both are talking about a competition, and not a meaningful trial.
Boris in the Telegraph:
www.telegraph.co.uk/education/7897687/Illiteracy-is-bad-for-us-so-why-dont-we-do-something-about-it.html
Are they right? It is time to end this culture war, and to try to settle once and for all, in the minds of the teachers, whether synthetic phonics is the complete answer or not. We have in Nick Gibb, the admirable new schools minister, one of the world’s great militants for synthetic phonics. Indeed, you can have a meeting with Nick on almost any subject, and I can guarantee he will have mentioned it within five minutes. I am almost 100 per cent sure he is right.
And yet I have also met London kids on Reading Recovery programmes who are obviously benefiting hugely from a mixture of phonics and word recognition. It is surely time for the Government to organise a competition, a shoot-out between the two methods, to see which is the most effective for children of all abilities.
And don’t tell me children are averse to competition. Look at me and my sister.
Boris in the intro to the Report:
publications.education.gov.uk/eOrderingDownload/RR711_.pdf
This is a controversy that has been raging for so long, and with such theological intensity, that it is surely time to resolve it once and for all. If, as the Centre for Policy Studies suggests, an annual competition can be devised to discover which schools are best at teaching children how to read – with adequate controls – then I would certainly give the venture my full support.
The report itself:
publications.education.gov.uk/eOrderingDownload/RR711_.pdf
One step towards achieving this might be to initiate an annual contest among London primary schools – a kind of Booker Prize for literacy, perhaps sponsored by one of the large corporations which have been so vehement in complaining about the poor skills of school leavers. The competing schools would be independently assessed culminating in three winners and 10 runners-up. Every child and all the relevant teachers in the winning schools would then be given an award at a large prize-giving party. The winning schools would get a substantial cash award to be spent entirely at the head teachers discretion. The teaching methods of the successful schools – as well as the conduct and enthusiasm of children – would be analysed so that teachers and parents alike can see which approach works best.
#############################
Nobody has been misrepresented. These people are very clearly proposing a competition, not a trial, as I explained, and their suggestion is stupid, for the reasons I explained. I see from your link that you’re Sean Haffey, a conservative councillor. I hope you do your best to ensure that there is a good evidence base for policy where possible. If Boris would like to talk about the best way to work out if something works and revise his suggestion then obviously I’d be happy to make the time. In the meantime this was a disappointingly ignorant suggestion about how to develop evidence for policy from a senior politician and a respected think tank.
msjhaffey said,
August 6, 2010 at 9:12 am
Dear Ben,
You’re one of the good guys, as I’m sure most people who follow your blog realise.
However, we also need to realise that people think and work in different ways. A good politician needs to know a little about a lot. A scientist is typically different, knowing a lot about a little. You have the honesty to quote Boris, including a key phrase “– with adequate controls –”. Boris clearly states his belief that phonics is better, but he’s prepared to be convinced and that phrase is as close as most non-scientists will come to understanding scientific method.
I genuinely suggest you challenge him to institute or campaign for a fair test. You have the profile to do so. I may be wrong and he might duck the issue, but that’s not how I understand what he wrote.
On a separate issue, you also wrote “I see from your link that you’re Sean Haffey, a conservative councillor. I hope you do your best to ensure that there is a good evidence base for policy where possible”. I agree completely: too often policy is driven by ideology and not facts. In my limited experience some politicians across the political spectrum get this wrong, and across the political spectrum others get it right. I would love to see the Local Government Association spend some money on educating councillors about how to make policy based on fact. An easy start would be to send every councillor your book. That’s not fawning: I just can’t think of a quicker and easier way (I’m open to suggestions).
Let’s not assume that scientists are immune from mistakes either. The Sense about Science annual lecture a few weeks ago showed how often scientists get things wrong, as you know (you were just a few yards to my left). As I blogged the next day, we need to ensure the scientific method is less vulnerable to manipulation.
I expect on occasion, if you continue to have the grace to accept my comments, that I may disagree with you. After all, healthy debate is what advances knowledge.
Geoff332 said,
August 6, 2010 at 1:19 pm
There are a lot of reasons why randomised trials don’t make it into the social sciences to the extent that they do in other sciences. Some of these reasons are good, some are not. I can cherry pick a few (for the record, my own background is Organisation Theory – which tends to draw on multiple disciplines).
One reason is the complexity involved in isolating the treatment’s effect from everything else. Randomising is meant to do this by converting everything else to noise. In the social sciences, that noise can be loud enough to drown out the effect you are looking for. I suspect one important factor in this case is the teacher’s commitment to each approach, which will have a major impact on that approach’s efficacy. If, as you suggest, most teachers doubt that the phonetic approach will work, then the results will be strongly biased against that and randomisation won’t remove that bias.
A second issue is reflexivity – the act of studying behaviour tends to modify it (and often quite radically). This is tied into all sorts of very human things like self-awareness.
Both of these issues are compounded by the fact that different people respond very differently to different treatments. Some students might learn better from phonetics, while others learn better from more traditional approaches (I have no idea on the evidence around this – but it’s a reasonable hypothesis). However, if the differences are of different magnitude, then the overall impact can be confusing. If phonetics is better at getting more people to read at a basic level, but more traditional approaches lay foundations that improve the long-term ability to read and comprehend at a higher level, then which approach is actually better? In short: neither approach may actually be better in any universal sense.
Finally, it’s often very, very difficult to quantify outcomes. I suggested above a difference between basic literacy and highly skilled readers/writers: the former is relatively easy to measure; the latter is both much harder and occurs over a longer time-scale. The drive to quantify tends to reduce any complex phenomena to those that are easy to measure. In some cases, this is probably a very good thing. In others, it is a very bad thing. I suspect that the measurable impact of Shakespeare or Mozart in their lifetimes was negligible. Their long-term impact is somewhat larger.
That’s a quick (and very selective) survey of some of the reasons why strict scientific methodology isn’t applied in social sciences. Of course, many of these issues apply in all sciences, including medicine (particularly if you get into public health or placebo type effects).
Of course I agree with you that a competition is a very bad idea and a randomised trial would be vastly superior. But I’m not sure it would actually answer the question of which approach is better.
dschofi said,
August 6, 2010 at 4:44 pm
As someone entering teaching as a second career, I was amazed at the amount of guff talked and the lack of an evidence-based approach to education. For an area as important as education it is unbelievable to have teacher-training courses spouting nonsense about learning styles (e.g. VAK learners) and reading-list texts containing absolute rubbish (e.g. one book I flicked through referenced a certain trial in Durham demonstrating the positive effects of Omega-3 oil…). The fact that cognitive scientists have made significant progress in the understanding of thinking, memory and learning in the past 20 years does not appear to impact a significant chunk of the educational establishment. Sadly, education has a long way to go before it reaches the evidence-based approach of medicine and in the meantime we’ll have papers such as that written by Miriam Gross.
But the comments here show that there is hope and contain some interesting debate (aside from the tu quoque p1ssing contest). And there is material out there using evidence as a basis for teaching teachers about how we learn (e.g. Daniel Willingham’s excellent book ‘Why don’t students like school?’ – not on any of my reading lists but easily the best book I’ve read on the subject). As teachers we have to strive to ensure that this evidence-based approach is championed and ideas such as running a competition between two schools on reading methods are seen by all to be as ridiculous as running a competition between two hospitals on a certain medical treatment.
mrnam said,
August 8, 2010 at 10:29 am
Education and education research would certainly benefit hugely from a more ‘scientific’ approach. Teachers of my vintage grow weary of the latest fad being presented as the only way to success – it is (unfortunately) in the interests of the younger more ambitious teachers to ride the latest hobby horse for all it’s worth, even when they themselves may not have a complete belief in it (definitely a case of the ’emperor’s new clothes’). The move to treat schools and their workforces more like competitive commercial enterprises that thrive on individual competition between workers undermines the collaborative approach essential to a successful school, and suppresses critical analysis of new initiatives.
There is a tendency to start from an intuitive feeling about what works well, and then work back from there to find the evidence to back it up – this to me seems entirely the wrong approach. Some of the proposed new approaches may well be effective and a common sense way of structuring teaching methods. One example is the way we have had (from about 5 – 6 years ago) an approach called ‘Assessment for Learning’ pushed on us (from a booklet “Working Inside the Black Box” – Paul Black at Institute of Education). To me the actual research behind this seemed very limited and unscientific – however there are certainly SOME parts of it that I would say do seem effective – but in no way ALL of it. At the same time as this was being pushed we were also being told that ‘Brain Gym’ was a great idea. Of course, despite strong misgivings among many staff at the time about some of the content, no-one really challenged it directly. Resistance I suppose tended to be more ‘passive’ – people just didn’t bother with the bits they didn’t believe in, or only trotted out the expected methodology during observed lessons and OFSTED inspections!
Further, given the extremely ‘dubious’ teaching methods (by modern standards) that I (and I am sure many others of you reading this forum) have been subjected to in our youth, it is a wonder that we managed to learn anything at all when at school, but somehow we did!
kim said,
August 16, 2010 at 11:30 am
@Ian: “Every time you make some meaningless change everybody’s enthusiasm is renewed for a while because it’s different. ”
It’s called the Hawthorne effect, and you’re right, it’s why doing research in education is particularly fraught with problems. Every so often you’ll read about some kids who were given Latin lessons or yoga lessons or meditation lessons and there’s a dramatic improvement in their performance or behaviour. Of course there is. If everybody’s suddenly interested in you and making a fuss about you, you’re inevitably going to do better for a while.
It’s very difficult in education to know what’s a fad and what is based on sound research. The big thing in recent years has been learning styles and the idea that students are visual, auditory or kinaesthetic learners. That will probably pass. In the 80s I was a governor at a primary school and was told by the head that they didn’t teach times tables any more because the current thinking was that understanding mathematical concepts was more important than rote learning.
oldandrew said,
August 16, 2010 at 4:16 pm
“The big thing in recent years has been learning styles and the idea that students are visual, auditory or kinaesthetic learners. That will probably pass.”
It’s been getting challenged for quite a while now:
teachingbattleground.wordpress.com/2008/08/27/a-helpful-video-on-learning-styles/
Deasun said,
September 9, 2011 at 3:42 pm
Looks like Boris has convinced the emenent scientist and sociologist Call Me Dave about the value of synthetic phonics. And you voted for them? God help England if this the besy you have!
www.guardian.co.uk/education/2011/sep/09/free-school-opponents-defending-failure-says-cameron
The prime minister said the government would drive up standards by:
• Ending “wrong-headed methods” that have failed pupils and making sure every teacher has the resources to deliver synthetic phonics teaching. “That’s the method that’s proven to work – and that’s how we can eliminate illiteracy in our country,” he said.
Deasun said,
September 9, 2011 at 3:43 pm
Mind you, it looks like my spelling could do with a work-out. Apologies…