Looking Deeper Into My Fishy New Friends

September 16th, 2006 by Ben Goldacre in bad science, equazen, fish oil, media, nutritionists, references, regulating research, statistics | 79 Comments »

Ben Goldacre
Saturday September 16, 2006
The Guardian

Regular readers will have established by now that most journalists are so scientifically inept, and so eager to run with “pill solves complex social problem” stories, that companies like Equazen selling their Eye-Q fish oil tablets for children with blanket media coverage can come out very nicely indeed.

So here’s the background you might have missed. Firstly, it costs 80p a day for you to feed your child these Eye-Q omega-3 fish oil tablets that Equazen have provided for Durham Council, to give to their GCSE students, in last week’s fish oil “trial” which received such phenomenal widespread media adulation. Meanwhile, Durham Council spend 65p a day on the ingredients for the same children’s school meals. If I were going to get to work on improving children’s diets, Durham, let me tell you, I would not start with pills, nor would I start with the long promotional press releases that you have been sending out with Equazen’s press office contact details on the bottom.

Dave Ford from Durham Council, the man behind these fish oil “trials”, thinks that the evidence on omega-3 for intelligence and behaviour is in, and that this evidence shows it is of benefit. Well now, let’s go through the five published papers, shall we? Skip to the end for a punchline if you find evidence boring.

Not a single one of these trials was in “normal” mainstream children: three were positive (to a greater or lesser extent, as you will see), two were negative. Voigt et al in 2001 did a double-blind, randomised, placebo-controlled trial on omega-3 fish oil in 63 kids with ADHD: they found no significant differences between the fish oil and control groups (full citations and précis for all studies on badscience.net). Richardson et al in 2002 did a trial on 41 children with learning difficulties, and found improvements in 3 out of the 14 things measured (Conners ADHD score, inattention, and psychosomatic symptoms). Stevens et al in 2003 did a pilot study on 50 children with inattention, hyperactivity, and other disruptive behaviors (one third dropped out during the trial) and found improvements in the fish oil group for 2 out of the 16 things measured (parent-rated conduct problems and teacher-rated attention symptoms).

We’re nearly there. This is important. Hirayama et al had a trial with 40 subjects with ADHD, and in fact, not only was there no improvement for the fish oil group, the placebo group showed a significant improvement in visual short term memory and continuous performance. And lastly Richardson et al, looking at 117 subjects with developmental coordination disorder, found no significant differences between placebo and fish oil groups for motor skills, but improvements in reading and spelling. This last one, incidentally, was the “Oxford-Durham” trial, performed by Oxford academics, published in a peer reviewed academic journal, and it has nothing whatsoever to do with these unpublished methodologically inept “Durham trials” now being performed by Equazen’s friends in Durham County Council.

Why does Dave Ford of Durham Council think the case is proven? Why does Equazen think the case is proven? Why does the entirety of the media think the case is proven?

Because Equazen keep going on about all these trials they’ve done, and they keep getting reported all over the newspapers and telly. “All of our research, both published and unpublished, shows that the Eye Q formula can really help enhance achievement in the classroom” says Adam Kelliher, director of Equazen. All of it? I tried to get all of these studies (there are 20 now, apparently) so I could read them. “Nullius in verba” is the Royal Society’s motto: “on the word of no-one”. This isn’t childishness. I’m not accusing people of lying. I simply want to read the research, in full, see their methodology, and results, and statistics, in full, critically appraise it, like you do, when you read a piece of academic research. That’s what science is all about. It’s about not taking things on faith or authority.

But I couldn’t read them. These studies are not published, and Equazen told me I would have to sign a confidentiality agreement to see them: a confidentiality agreement, to review the research evidence for these incredibly widely reported claims, in the media and by Durham Council employees, about a very interesting and controversial area of nutrition and behaviour, and about experiments conducted – forgive me if I’m getting sentimental here – on our schoolchildren. Well I suppose I could have signed it just to find out for my own curiosity. You wouldn’t even know if I had.

Academic References

These are the five published trials looking at what happens in children (with various diagnoses) when you give them fish oil supplements. I do not wish to undermine these studies in any sense, but it is worth noting, along with your other readings around them, that in most only a small number of the many variables measured were changed by fish oil, and that the p-values in the variables that were found to be changed were only just below 0.05, that is, they did just reach statistical significance. If you disagree with any of these brief summaries or have anything to add to them then do please let me know. In general, you will see if you get the original papers that they were methodologically meticulous and reported to a high standard. Top Jadad scores all round.
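To see how easily a handful of “significant” results can turn up when many outcomes are measured, here is a minimal simulation sketch (the numbers are invented for illustration, not data from any of the trials below): if a supplement does nothing at all, and fourteen independent measures are each tested at p < 0.05, at least one will squeak under the line roughly half the time.

```python
import random

random.seed(1)

def false_positives(n_measures=14, alpha=0.05):
    # Under the null hypothesis a p-value is uniform on [0, 1], so each
    # measure has probability alpha of coming up "significant" by chance.
    return sum(1 for _ in range(n_measures) if random.random() < alpha)

runs = 20000
at_least_one = sum(1 for _ in range(runs) if false_positives() > 0) / runs
print(round(at_least_one, 2))  # close to 1 - 0.95**14, i.e. about 0.51
```

That is for fully independent measures; real outcome scales are correlated, so the true figure in any given trial will differ, but the basic point stands.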

Voigt, R.G. et al., A randomized, double-blind, placebo-controlled trial of docosahexaenoic acid supplementation in children with attention-deficit/hyperactivity disorder. Journal of Pediatrics, 2001. 139(2): p. 189-96.

Kids with ADHD, found no significant differences in objective or subjective ADHD measures between treatment and control group. 63 subjects, 14.3% dropped out.

Richardson, A.J. et al., A randomized double-blind, placebo-controlled study of the effects of supplementation with highly unsaturated fatty acids on ADHD-related symptoms in children with specific learning disabilities. Progress in Neuro-Psychopharmacology & Biological Psychiatry, 2002. 26(2): p. 233-239.

Kids with LD, improvements in Conners ADHD score, inattention, and psychosomatic symptoms (P = 0.05, 0.03, 0.05 respectively) (3 out of the 14 things measured). 41 subjects, 22% dropped out.

Stevens, L.Z. et al., EFA supplementation in children with inattention, hyperactivity, and other disruptive behaviors. Lipids, 2003. 38(10): p. 1007-21.

Kids with ADHD or other disruptive behaviours, only a pilot study, improvements in fish oil group for parent-rated conduct problems (P=0.05) and teacher-rated attention symptoms (P=0.03) (2 out of the 16 things measured). 50 subjects, 34% dropped out.

Hirayama, S. et al., Effect of docosahexaenoic acid-containing food administration on symptoms of attention-deficit/hyperactivity disorder – a placebo-controlled double-blind study. European Journal of Clinical Nutrition, 2004. 58(3): p. 467-73.

Kids with ADHD, no difference between placebo and fish oil group (oh, except the placebo group, rather than the fish oil group, showed a significant improvement in visual short term memory and continuous performance). 40 subjects.

Richardson, A.J. and Montgomery, P., The Oxford-Durham study: a randomized, controlled trial of dietary supplementation with fatty acids in children with developmental coordination disorder. Pediatrics, 2005. 115(5): p. 1360-6.

Kids with Developmental Coordination Disorder, no significant differences between placebo and fish oil groups for motor skills, but improvements for the fish oil group in reading and spelling (P=0.04 and <0.01), the CTRS-L global scale (P<0.05), and some subscales (P<0.05). 117 subjects, 6% dropped out.

Incidentally, for those of you who haven’t been back to the blog in the past few days, if you have the stomach for very long posts, you might enjoy this slightly odd episode from the previous comments:

www.badscience.net/?p=297#comment-7714

followed by amusing banter, and then stirring rebuttal here:

www.badscience.net/?p=297#comment-7770



79 Responses



  1. prescience said,

    September 16, 2006 at 9:00 am

    In your paragraph, beginning “Dave Ford from Durham Council, the man behind these fish oil “trials”, thinks …” you should have written the word “thinks” in quotation marks.

  2. crichmond said,

    September 16, 2006 at 9:09 am

    Three years ago I published a critical obituary of David Horrobin, the evening primrose oil billionaire, in the British Medical Journal. There followed an orchestrated campaign against me. One of the most active complainers was Adam Kelliher, who described himself as Horrobin’s distressed son-in-law. He said he was an ex-journalist with no competing interests and failed (or forgot) to mention that he was the managing director of Equazen.

    What if there are adverse effects of fish oil (or indeed snake oil) supplements? I recall having seen a claim (it was evidence that fell well short of proof) that it increases the incidence of cancers.

    It certainly increases the incidence of unpleasant fish burps.

  3. doctormonkey said,

    September 16, 2006 at 10:00 am

    At the risk of repeating a previous comment on a previous article:

    What about heavy metal poisoning and other safety issues with fish oils?

    Where is the safety data?

    Are these capsules medicine, supplements or food? Who regulates them, their safety and the claims made about them?

    Is there a case for the advertising standards agency, given your (Ben’s) findings about the literature and the claims made?

  4. Chris Coldham said,

    September 16, 2006 at 10:22 am

    You would expect 20% of variables tested to show a significant ((P

  5. ffutures said,

    September 16, 2006 at 10:26 am

    An obvious experiment which I expect they haven’t done – comparison of the expensive fish oil with e.g. cod liver oil, at a teeny fraction of the price.

    But I agree spending the money on good food would be a much wiser investment.

  6. wewillfixit said,

    September 16, 2006 at 11:25 am

    “You would expect 20% of variables tested to show a significant ((P ”

    At p=0.05, you would expect 5% or one in twenty to be significant, not one in five (assuming there was no real effect).

  7. Despard said,

    September 16, 2006 at 12:35 pm

    The usual practice to reduce the number false positives is to correct for multiple comparisons, which generally involves reducing your alpha level. The Bonferroni correction (the most conservative) divides alpha by the number of comparisons you’re making, meaning that type 1 errors are much rarer.
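Despard’s Bonferroni correction is simple enough to sketch in a few lines (the p-values below are invented for illustration, not taken from any of the papers above):

```python
def bonferroni_significant(p_values, alpha=0.05):
    # Test each of the k comparisons at alpha / k, so that the chance of
    # *any* false positive across the whole family stays at or below alpha.
    adjusted_alpha = alpha / len(p_values)
    return [p < adjusted_alpha for p in p_values]

# Fourteen measures, three of which squeaked under 0.05 uncorrected:
p_values = [0.05, 0.03, 0.05] + [0.4] * 11
print(any(bonferroni_significant(p_values)))  # False
```

None of the three survives, because the corrected threshold (0.05/14, roughly 0.0036) sits far below even the smallest uncorrected p of 0.03 – which is exactly why the Bonferroni correction is described as conservative.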

  8. RS said,

    September 16, 2006 at 1:26 pm

    In these sorts of studies you tend not to correct for multiple comparisons (it’s an exploratory study it is – honest guvner) but multiple testing is obviously a limitation. I think the Durham-Oxford study used some sort of aggregate score and found that changed, so it is less subject to that criticism.

  9. flip said,

    September 16, 2006 at 3:24 pm

    While I am in full agreement re: the subject of the story, some (not all) of the commenters might want to review their statistics / meaning of p values. Also, dare I call out Dr. Goldacre, but saying – “[…] only just below 0.05, that is, they did just reach statistical significance” also suggests a bit of misunderstanding of the fundamental logic of statistical hypothesis testing. Things either reach significance or they don’t, they don’t ‘just’ reach it or ‘just’ miss it. It’s a common and understandable misapplication of NHST, and the logic underlying it, as advanced by Popper, isn’t always profoundly clear. Still, rather than comparing p-values folks should be concentrating on effect sizes. You may be able to find a statistically significant effect of wearing red clothing on your performance at golf, but if doing it only provides an improvement of 0.0002 strokes / 18 holes you might want to invest more time at the driving range. Similarly, if 65p of fish oil/day increases my performance by, say, 0.0004%, perhaps 3 more seconds of homework a day might be more cost/time effective.
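flip’s distinction between significance and effect size can be sketched numerically with Cohen’s d, the usual standardised measure (the figures below are invented, not data from any trial discussed here):

```python
import math

def cohens_d(group_a, group_b):
    # Difference in means, scaled by the pooled standard deviation.
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# A small mean difference against a noisy background is a small effect,
# however the p-value comes out in a large enough sample:
fish_oil = [12, 9, 11, 10, 8]
placebo = [11, 10, 9, 12, 7]
print(round(cohens_d(fish_oil, placebo), 2))  # 0.11
```

By Cohen’s own rough benchmarks 0.2 counts as “small”, so a d of 0.11 is the sort of effect that can be statistically significant yet practically negligible – flip’s golf example in miniature.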

  10. Dr Aust said,

    September 16, 2006 at 5:53 pm

    Obviously effect sizes are important, flip.

    Don’t agree about p values though.

    p = 0.05 is an ARBITRARY cut-off, set by us, no matter how common the use of this particular significance level. In effect, we all agreed to use “less than one chance in 20 that this is a random event” as a working cut-off for “so this effect is probably real”.

    Consequently, p just less than 0.05 in effect means “less than one chance in 21 that this was a random result” while p just greater than 0.05 means “about one chance in 19 that this was a random result”…either side of the cut-off. Agreed, one is formally “statistically significant at the 5% level”, and one is not, but they are actually rather close in terms of the likelihood that they show a real effect.

    By comparison, p just less than 0.01 would mean “less than one in a hundred chance that this result was coincidence”, much more convincing.

    Anyway, while p values of 0.048 and 0.008 are both “statistically significant” , and p = 0.052 isn’t… most working scientists would view p = 0.008 as saying “moderately convincing that this is a real effect of treatment” and the other two as being a bit equivocal, regardless of one being just statistically significant and the other not. After all, if I changed my arbitrary cut-off to “one in eighteen chance of it being a random result” rather than “1 in 20″ (p = 0

  11. Dr Aust said,

    September 16, 2006 at 5:59 pm

    Oops – hit the button too soon. last should read:

    “…if I changed my arbitrary cut-off to “one in eighteen chance of it being a random result” rather than “1 in 20″ (p = 0.05) then both p = 0.048 and p = 0.052 would be regarded as significant.”

    This is an important point in scientific research generally, since it is usually much easier to publish POSITIVE data (“We found an effect”) than NEGATIVE data (“we didn’t find an effect”). But when the differences in the stats between “positive” and “negative” are so small…
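Dr Aust’s “one chance in N” framing (a loose reading of p-values, as flip’s comments point out, but an intuitive one) is easy to reproduce for two p-values either side of the cut-off:

```python
# Two p-values straddling the conventional 0.05 threshold correspond
# to almost identical odds under this informal reading:
for p in (0.048, 0.052):
    print(p, "is roughly 1 chance in", round(1 / p))
# 0.048 is roughly 1 chance in 21
# 0.052 is roughly 1 chance in 19
```

A hair’s breadth apart as evidence, yet one is publishable as “positive” and the other risks the file drawer.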

  12. RS said,

    September 16, 2006 at 6:02 pm

    I have to agree with Dr Aust, thinking of p-values simply as categorical (significant/not significant) is a rule of thumb, the actual p-value itself carries a lot more meaning. But I see where flip is coming from in terms of null hypothesis rejection or acceptance, it’s just that actual scientific practice is more sophisticated than a simple Popperian approach might suggest (cf. meta-analysis).

  13. RS said,

    September 16, 2006 at 6:05 pm

    Dr Aust, conversely, if you have an interesting negative finding, there can sometimes be too little consideration of power, confidence intervals, and sample size when evaluating just how negative that study really is.

  14. flip said,

    September 16, 2006 at 6:51 pm

    Of course there is information in the p-value. But my point is precisely in the arbitrariness of alpha, therefore, claiming ‘nearly significant’ or other hedges doesn’t quite buy you anything. Seeing a p-value outside the context of the effect size, power/beta really isn’t that informative.
    Have a look at Cohen’s (1994) excellent The Earth is Round (p for some interesting discussions around NHST and its faults.

  15. flip said,

    September 16, 2006 at 7:00 pm

    whoops… html interpretation error above. The Earth is Round (‘p less than .05′) is the title.

  16. pv said,

    September 16, 2006 at 8:05 pm

    As I wrote in response to a previous article, too many school authorities are soft targets for what is essentially snake oil marketing. For example, Brain Gym and Omega 3 fish oil capsules. It’s depressing to think that the level of integrity (and intelligence) of those entrusted with the nation’s future is so woefully inadequate.
    Clearly Dave Ford from Durham Council, on this matter, is as scientifically literate as my dead hamster. Clearly he’s a gullible chap, a sucker for voodoo. Or is he? What does he get out of this escapade?

  17. Dr* T said,

    September 16, 2006 at 10:26 pm

    As this is a public body (i.e. schools) the information of whether ‘Dodgy’ Dave Ford of Durham Council is receiving any bonuses/kickbacks (not that I’m suggesting he is of course, I think he’s just unintelligent rather than a scoundrel) should be available to anyone making a request under the Freedom of Information Act.

    Benjamin?

  18. Ben Goldacre said,

    September 17, 2006 at 12:42 am

    Dr*T: my intuition (which is eerily accurate after almost four years of doing Bad Science) is that this was not about money for the characters at durham council.

    more than that, and forgive the sanctimonious moment, but to me this is an interesting example of how the discussions here can really help a boy to firm up what he thinks about something. i’m not for one moment accusing you of having sleazy interests, but to me, whats interesting here is the science. once it’s established that the science has been misrepresented, or poorly handled, then the separate question, “which of the many equally disappointing explanations lies behind that, in this particular circumstance?” is somehow not quite as interesting to me. i mean i guess i probably ought to call and ask anyway, but i just don’t know if avenues like that bear the right kind of fruit.

    for anyone who’s interested, my feeling is that this was a systemic problem, not an individual one, and certainly not one of simple financial corruption. equazen want to make money selling pills, and i dont hold that against them, i welcome it, someone’s got to be a businessman. if they style themselves as noble campaigners, hey, its part of the schtick and the pantomime of sales, we’re all adult enough to cope with that. the media want to sell papers with attractive ideas, and to feel like they are experts and noble campaigners. durham council want to be seen to be doing good, many of them want to actually do good, and perhaps some of them enjoy being important experts in the media. thats fine. all along the way, each person overstates one bit, understates another, and in some respects the individual we choose to blame probably reflects which individual we think we ought to be entitled to demand higher standards from, not which one is actually the most to blame. i prefer to document the weirdness than divine good from evil.

    am i going soft?

  19. bad chemist said,

    September 17, 2006 at 10:08 am

    Dr* T,

    There have been rumours of corruption in Durham Council for years. The local cobbler ran a campaign from his shop until he closed and has also published a book on the subject.

    See:
    www.bushywood.com/durham_city_council.htm

    amazon link

    The links on the first site don’t work as the original “cobblers2thecouncil” website no longer exists

  20. bad chemist said,

    September 17, 2006 at 10:11 am

    Sorry for the double post, must have been a typo in my html the amazon link is here:

    amazon

  21. bad chemist said,

    September 17, 2006 at 10:12 am

    3rd post, shit.

    Giving up on using html for this link, it doesn’t like me

    tinyurl.com/fzjh2

  22. Dr Aust said,

    September 17, 2006 at 12:33 pm

    Yes, there is a touch of “incremental misstatement” about it, Ben.

    The trouble then becomes – how can it be combatted? If no-one ought to be doing a better job, we are stuck with this sort of rubbish.

    Personally I tend to think of the company as “sleazier” overall – their agenda is to sell, overriding all else, though they never ‘fess up to this – but in the terms you set out above I EXPECT this of them, so they actually make me least mad.

    The Durham Edu-crats have understandable motives, see comments in the last thread at

    www.badscience.net/?p=297

    .. but I can’t help feeling that SOME educational psychologist there should have had the brains / balls (even both?) to take a stand about what a bunch of fish-shi*te it all is. The Ed Psych-ers do CLAIM to be scientists, after all. In my University the psychologists certainly get incredibly annoyed when anyone suggests psychology is not a hard science.

    Finally, the journalists…. you say in the article

    “most journalists are so scientifically inept, and so eager to run with “pill solves complex social problem” stories….”

    Well, agreed, most of them are, but if the NATIONAL BROADSHEET PAPERS, and the NATIONAL BROADCAST MEDIA, can’t be bothered to have at least one proper science correspondent looking into all this…. Or just as bad, if they HAVE people smart enough to do it, but can’t be arsed… well, Lord Reith would be spinning in his grave. What happened to trying to inform?

    In fact I suspect a combination of the two. A while back the Guardian ran quite a good story on the direct-to-consumer-marketing-via-media of Herceptin by Sarah Boseley, so they CAN do it if they want. But everything she wrote was based on info that had been available months earlier. Conclusion – easier to write lazy crap parroting press releases and quoting spokespersons than to do even some basic “digging” – which was what I always thought journalism was supposed to be about.

    Unfortunately these days the media seem to regard “investigative journalism” as too time-consuming and expensive to devote to anything other than (i) celebrities’ underwear or (ii) politicians’ hissy fits. Other than that they seem content to re-quote ad nauseam other sloppy reporting from other media outlets.

    You may gather from the above ranting that the Ed Psych people and the journos annoy me more than the thinly-disguised snake-oil peddlers at Equazen – mainly because I reckon the two former groups should be doing better.

    To paraphrase a well-known saying:

    “The only thing necessary for the triumph of pseudoscience is for people who should know better to turn a blind eye.”

  23. RS said,

    September 17, 2006 at 12:45 pm

    Here’s the Cohen article flip mentions:

    cse.unl.edu/~lorin/courses/cse990-fall06/papers/cohen-94.pdf#search=%22cohen%20earth%20is%20round%20p%22

    Personally I think part of the argument depends on a bit of bayesian sleight of hand.

  24. RS said,

    September 17, 2006 at 12:46 pm

    Oops, that’s got the search terms in too:

    try cse.unl.edu/~lorin/courses/cse990-fall06/papers/cohen-94.pdf

  25. SpamEraser said,

    September 17, 2006 at 1:05 pm

    Bad Chemist,

    Although the original www.cobblers2thecouncil.com website seems to have vanished, it can still be found if you know where to look. Just go to www.archive.org and use their search engine. The archive.org website contains 30 snapshots of cobblerstothecouncil.com, taken between March 2002 and March 2005. It is never safe to believe that anything which is posted on the web can ever be truly removed…

  26. pie said,

    September 17, 2006 at 1:17 pm

    From the Gruniad version of this post:
    “Equazen told me I’d have to sign a confidentiality agreement to review the research evidence for these incredibly widely reported claims, and – forgive me for getting sentimental- on our schoolchildren.”

    EH? I spent at least a minute re-parsing that wondering why the feck you can’t learn to write proper.. before coming here and finding out that the newspaper article is in fact a mangled version of the above ‘proper’ (readable) version.
    The paper version often reads clipped and confusing..

  27. jackpt said,

    September 17, 2006 at 4:30 pm

    I do have a problem with the schtick of companies like Equazen, and to a certain extent companies like Kelloggs, or Nestle, because I feel their advertising campaigns blur the lines between what would traditionally be referred to as public information broadcasting and advertising. There should be a clear distinction because people cannot tell the difference. That’s not to say that there is specific bad intent, but someone within a marketing department has chosen to blur the lines. The problem is that companies that are less dubious will benefit when the public can’t tell the difference. I think we’re heading for something akin to what has already happened in the US, where dubious health product infomercials blend seamlessly into news. Fox News is by far the worst network, in many other areas too, but the other networks aren’t far behind.

  28. Ben Goldacre said,

    September 17, 2006 at 4:34 pm

    yeah, i think the real problem is that this line is hugely blurred when the promotional material is presented on tv and in the papers as a news story, rather than an advert. this is where pill peddlers have a big advantage over cornflakes manufacturers, because the media are still doolally into the idea that commercial pill peddlers are somehow righteous noncommercial campaigners doing it for the children. and won’t somebody PLEASE think of the children? etc.

  29. jackpt said,

    September 17, 2006 at 4:48 pm

    I think the problem is also that advertisements have become the primary way in which ‘the man’ delivers information. So you get a Tax Credits advert sandwiched between an advert for pro-biotics and Sunny-D. Or the latest Department of Health advert sandwiched between Seven Seas Omega-3 and the Lynx advert with all of the fit women.

  30. RS said,

    September 17, 2006 at 5:18 pm

    I don’t think it is just marketing schtick that the media swallow uncritically – just look at the dodgy spin put on scientific studies by university press offices. A lot of this comes back to one of the main threads in Ben’s columns, people in the media who don’t know what they’re talking about when it comes to science.

  31. cath having fun said,

    September 17, 2006 at 6:14 pm

    … but it’s also worth noting that one of the largest growth areas of retail food markets is the functional shot, i.e. add Product X to your daily diet and – irrespective of dietary quality – gain Health Benefit Y. From probiotics, stanol/sterol shots, vegetable and fruit ‘shots’ to the commercially non-viable (but rather tasty) cocoa/chocolate (RICH IN POLYPHENOLS!!) ‘shot’, a ‘health halo’ attached to a product will generate greater revenue, especially if targeted at the baby boomers recognising that immortality doesn’t exist (except in any bizarre musings of patrick holford or Dr Paul Clayton www.healthdefence.com/his_concept.html).

    So the attractive benefit of fish oil supplements lies in the perceived need to do absolutely NOTHING to change underlying diet. Which, if we give the more current ‘researchers’ in this field the benefit of the doubt, could mean – if it were ever proved so – that you could enjoy your full mental faculties trapped in a compromised body – be it from a haemorrhagic stroke, or ca oesophagus from ‘Barrett’s’ caused by years of incompetent cardiac sphincter pressure thanks to megadoses of omega-3.

    Conversely, using oily fish to displace a (saturated fat rich) meat dish once or twice a week, in addition to a slow carb (low GI), fruit and veg rich Mediterraneanised diet (with some alcohol) so far appears to be the optimum dietary approach to health. Supplements don’t come close (and if you’re tempted to challenge this, I’ll provide a TTTAI #105 for you)

    so, Ben, are you right to temper scientific frustration with benevolence toward Durham’s employees directly involved in this debacle? I think NOT. Being ‘unconsciously incompetent’ is not a defence of actions – especially if you are made aware of the views of the more ‘competent’ scientific community regarding your ‘trial’. But what is worrying is that despite all the measured (and hysterical) musings around this issue, I have the sinking feeling that the now ‘consciously incompetent’ will continue…. unless central education bods take a stance.
    Any takers, Mr Alan Johnson?

  32. Robert Carnegie said,

    September 17, 2006 at 8:29 pm

    Question on paper number five. “found no significant differences between placebo and fish oil groups for motor skills, but improvements in reading and spelling.” Was that improvements for the fish oil group, or for the placebo, like that other one?

    Also… if these experiments are designed to produce false positives one time in twenty, and they measure twenty things, doesn’t that mean they expect one false positive measure each time? Or is it -that- one that’s one in twenty? And on this measure, how do they add up? I’m excusing myself from digging in to find how many variables were tested in the ones where Dr Ben didn’t specify, I guess because I don’t have enough omega-3 to stimulate my brain. ;-)
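Robert’s arithmetic can be checked in two lines (standard textbook formulas, nothing specific to these trials): with twenty independent null measures each tested at p < 0.05, the *expected number* of false positives is indeed one per trial, while the chance of seeing *at least one* is rather higher than a coin flip.

```python
alpha, k = 0.05, 20
expected_false_positives = alpha * k           # on average, one per trial
p_at_least_one = 1 - (1 - alpha) ** k          # chance of any false positive
print(expected_false_positives)                # 1.0
print(round(p_at_least_one, 2))                # 0.64
```

So “one false positive each time” is right on average, but the two questions (how many, and whether any at all) have different answers – and correlated outcome measures would shift both figures in a real trial.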

  33. Ben Goldacre said,

    September 17, 2006 at 8:53 pm

    for the fish oil group, natch. have now changed in case thats not obvious.

    the multiple comparisons issue is always very interesting and complex, different of these papers deal with it in different ways, do read them for details.

  34. Dr Aust said,

    September 17, 2006 at 9:22 pm

    “I don’t think it is just marketing schtick that the media swallow uncritically – just look at the dodgy spin put on scientific studies by university press offices”

    Think this is looking at it backwards. Univ press officers in science Depts are, indeed, typically not science-educated. But why?

    Univ Press Offices exist to generate positive coverage of the Univ in local and national press. That is the bottom line. On the whole, this means “any coverage other than hostile”. So the point of the press office is to generate releases that will get “picked up” by the journos. The press office’s “performance target” will usually be “number of mentions”.

    Unsurprisingly, therefore the press officers are typically ex-journalists hired because they know what the journos look for.

    To give you a real example, one very very senior Univ PR person told me “The first line has to hook them (the national science journos). The whole message has to be in there (the first line), because that’s all they read”

    No prizes for guessing how the press releases are written, then.

    And so… the ripples of dumbing-down spread outwards from journalism to anything for which the journos are the target. A self-perpetuating tide of sound-bite idiocy.

  35. RS said,

    September 17, 2006 at 9:34 pm

    Dr Aust, I think it would be naive to absolve the researchers themselves of responsibility for what goes into these press releases. Scientists can be pretty dodgy self-publicists themselves (even if most manage to avoid Captain Cyborg levels).

  36. Pro-reason said,

    September 18, 2006 at 3:31 am

    OK, these “trials” are very dodgy. But let’s not accidentally imply that Omega-3 is a load of crap altogether. Here’s another Guardian article (monbiot.com/archives/2006/06/21/not-enough-fish-in-the-sea/) with a more positive spin.

  37. coracle said,

    September 18, 2006 at 9:08 am

    Pro-reason,

    I think Monbiot is on pretty dodgy ground on this. The trials he writes about in that article are mostly the same ones BG has written about here, yet he comes to different conclusions. GM is happier to leap to the conclusion that omega-3s are essential to brain development; scientists acknowledge that the evidence is mixed at best and so the jury is still out.

    Most of the studies are based on those with established learning difficulties and it does not follow that not supplementing will result in ‘a great cognitive leap backwards’ for mankind.

  38. apothecary said,

    September 18, 2006 at 9:54 am

    Just to return for a moment to posts around 9 & 10 – this concept of P=0.049 = significant, P=0.051 = NS is a problem in medicine, too. The idea of a probability as a continuum just doesn’t get through, and the wider statistical ignorance of many medics, pharmacists, etc just compounds the problem. For a brilliant example of what can happen, the DICE therapy for stroke paper in the BMJ from years ago is a classic.

    That was hypothetical, to make the point to students. But it happens for real:

    Example: in the PRO-ACTIVE study of pioglitazone vs placebo in people with diabetes, the primary endpoint (composite of just about all bad vascular things happening) did not reach conventional levels of statistical significance. A secondary endpoint (a narrower selection of vascular events) showed a benefit with P=0.027.

    This study is being touted as convincing evidence of benefit for pioglitazone. But of course, if the primary outcome is NS, one shouldn’t really look at very similar and related secondary outcomes, especially with a P value similar to the chance of throwing a double six in Monopoly.

    Example 2 – The CLASS study purported to show that celecoxib reduces the risk of GI bleeds in patients not taking aspirin, even though the full data (ie aspirin and non-aspirin takers) showed NS differences. But, according to the FDA statistical reviewer’s report (God bless the FOI in the USA), this subgroup analysis was one of 34 post-hoc analyses carried out in an exploratory manner. Presumably they would have rejected ones which showed a benefit in certain star-sign groups (see the ISIS Lancet paper, passim).

    Sorry. It’s a sore point (and I haven’t even started on relative risk reduction vs absolute benefit etc). At the risk of an unfortunate mix of metaphors, I shall rest the sore point by getting off my hobby horse now. And refrain from long posts in the future.
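    Apothecary’s point about the 34 post-hoc analyses in CLASS can be put in numbers: run enough independent tests on pure noise and a “significant” result becomes the expected outcome. A minimal sketch (the independence of the tests is a simplifying assumption; real post-hoc analyses are usually correlated, which changes the exact figure but not the moral):

```python
# Family-wise error rate: the chance that at least one of k independent
# null-hypothesis tests comes out "significant" at level alpha by luck alone.
alpha = 0.05
for k in (1, 5, 34):  # 34 = number of post-hoc analyses in the CLASS example
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one p < {alpha}) = {fwer:.2f}")
```

    With 34 tests the chance of at least one spurious “significant” finding is over 80%, which is why an unplanned subgroup result with P=0.027 carries so little weight.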

  39. Dr Aust said,

    September 18, 2006 at 11:47 am

    RS –

    Agreed, I wouldn’t absolve the researchers of responsibility – some resist the temptation to “talk up” their work better than others. People working in stem cells and gene therapy have been notable recent offenders.

    However, from all my and my friends’ experience, the PR guy/interviewer/journo always makes it very tempting for you to over-sell by effectively offering you the ball to hit time after time: “So the idea is that this will help cure osteoporosis, right?”.

  40. coracle said,

    September 18, 2006 at 12:35 pm

    Minor quibble, Stevens LZ, 2003 is actually Stevens L, 2003. The Z comes from the next author Zhang W.

  41. Dr Aust said,

    September 18, 2006 at 1:53 pm

    Bringing together apothecary’s post 38 and my 39, I remember a long conference speech from one of the people touting gene therapy approaches to Cystic Fibrosis a few years back.

    He admitted the gene therapy trials hadn’t shown, and were unlikely to show, any beneficial effect in mortality, life expectancy or even quality of life (primary outcomes). His solution to this: they needed to work out a whole set of different endpoints / outcomes for which the gene therapy trials WOULD show some treatment effect.

    He was, being charitable, trying to make the point that to improve something you need a sensitive measure of how well it is actually working. But for the cynical it could also be said to offer a clear example of the danger of wishful thinking (with the potential to turn into self-deception) in science:

    “We KNOW it’ll work! We just have to keep fiddling until we come up with something we can measure that SHOWS it works!”

    One of Richard Feynman’s lines, quoted by Mojo on the BadScience forums, seems apposite here:

    “Science is a way of trying not to fool yourself… …The first principle is that you must not fool yourself, and you are the easiest person to fool.”

    Durham Ed Psych-ers take note.

  42. KJ said,

    September 18, 2006 at 3:21 pm

    This seemed an opportunity to ride one of my favourite hobby-horses and to correct some earlier mis-apprehensions w.r.t. significance testing.

    There are very few circumstances in which we would believe a zero effect (not small, not negligible, absolutely zero). The p-values associated with classical hypothesis testing relate to such zero differences. A significant p-value doesn’t say the effect is “real” in any meaningful sense, it just means the sample is big enough to demonstrate that the effect is non-zero, which we knew anyway! The presence or absence of significant p-values tells more about sample size than effect size.

    They will show a (statistically) significant effect if they try hard enough, that doesn’t mean it’s big enough to matter!
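    KJ’s point can be sketched with a back-of-the-envelope two-sample z-test: hold a standardized effect fixed at 0.05 sd (far too small to matter clinically) and watch the p-value collapse as the sample grows. The known-variance assumption and the chosen effect size are illustrative only:

```python
import math

def two_sample_z_p(effect, sd, n):
    """Two-sided p-value for a difference in means `effect` between two
    groups of size n, assuming a known common standard deviation `sd`."""
    z = effect / (sd * math.sqrt(2.0 / n))
    # standard normal survival function via erf
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# The same trivial effect drifts from "nowhere near significant" to
# "highly significant" purely because n increases:
for n in (50, 500, 5000, 50000):
    print(f"n = {n:5d} per group: p = {two_sample_z_p(0.05, 1.0, n):.4f}")
```

    Nothing about the effect changed between the first and last line of output; only the sample size did.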

  43. Ben Goldacre said,

    September 18, 2006 at 3:24 pm

    can i just say, a propos of no-one in particular, that the standard of wrongness about complex statistical issues is higher here than the wrongness i encounter almost anywhere else? that’s a compliment btw.

  44. superburger said,

    September 18, 2006 at 3:40 pm

    “that the standard of wrongness about complex statistical issues is higher here than the wrongness i encounter almost anywhere else”

    You got any statistics to show that?

  45. Ben Goldacre said,

    September 18, 2006 at 3:42 pm

    it was a total compliment. philosophy of statistics is a mindbender up there with anything quantum physics can offer, i hereby give away gratis the idea of doing a simon singh type book on it, on the grounds that it would make my head hurt if i tried (and i’d get bits wrong).

  46. superburger said,

    September 18, 2006 at 3:49 pm

    There was a horizon way back when on MMR and autism.

    www.bbc.co.uk/sn/tvradio/programmes/horizon/mmr_trans.shtml

    The best bit (in my opinion) was this from Rosemary Kesick

    “The only evidence which I’ve seen cited is epidemiological. Now to me that’s statistics. Now statistics either turn you on or they don’t. I don’t, I don’t have time for statistics when I see a sick child in front of me.”

    Which says all that needs to be said about MMR/Autism/badscience in general.

  47. Ben Goldacre said,

    September 18, 2006 at 3:58 pm

    yeah, its just another expression of “science confuses me therefore it must confuse everyone therefore they must be making it up as they go along therefore i can too”.

  48. superburger said,

    September 18, 2006 at 4:07 pm

    and the way that groups with an agenda can select the bits of science that they do like and / or understand but happily ignore all the other boring stuff that doesn’t quite agree with their opinion.

  49. Ben Goldacre said,

    September 18, 2006 at 4:14 pm

    heh yup, cherry picking. “science is wrong except when a small bit of it agrees with my hunch/prejudice”

  50. Andrew Clegg said,

    September 18, 2006 at 4:24 pm

    RS: which bit of the Cohen article do you disagree with? The equation at the top of p.6?

    Andrew.

  51. Dr Aust said,

    September 18, 2006 at 4:31 pm

    “philosophy of statistics is a mindbender up there with anything quantum physics can offer”

    ..but of course many professions require you to have some kind of basic knowledge of stats.

    ..not just scientists.

    Sample dialogue from 1st yr medical school tutorial group:

    Tutor: “What is a normal pulse rate?”

    Students: “About 70 per minute”

    Sadistic tutor: “And how many people in your medical school yr gp would have a resting pulse rate of 70?… or between 65 and 75? ..how would you find out? Could you make an educated guess without measuring them all?”

    etc. etc.
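    The sadistic tutor’s question can in fact be answered without measuring anyone, if you’re willing to assume resting pulse is roughly normally distributed. The mean of 70 and spread of 10 bpm below are illustrative assumptions, not data:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, sd = 70.0, 10.0   # assumed mean and spread of resting pulse (illustrative)
lo, hi = 65.0, 75.0   # the tutor's "about 70" band

# Fraction of the year group expected to fall inside the band:
frac = phi((hi - mu) / sd) - phi((lo - mu) / sd)
print(f"Expected fraction between {lo:g} and {hi:g} bpm: {frac:.0%}")
```

    Under these assumptions only about 38% of the year group would sit between 65 and 75 bpm – i.e. most of the class would not have a “normal” pulse, which is exactly the tutor’s point about distributions versus standard values.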

  52. Ben Goldacre said,

    September 18, 2006 at 4:47 pm

    that is pretty sadistic. i usually start with “this olympic swimming pool is full of red marbles, but a few green ones here and there. how are you going to..”

  53. RS said,

    September 18, 2006 at 4:54 pm

    Andrew, just the example of known prior probability, I can’t see many real-life circumstances where it is at all relevant. The whole point is we don’t know what the prior probability is (in fact, I’m not even sure it is meaningful to talk about prior probabilities like that). So I think he’s using a bit of Bayesian sleight of hand to reinforce his argument (which is otherwise cogent).

  54. Dr Aust said,

    September 18, 2006 at 5:06 pm

    It is a bit hard-core, I guess. Other ways to get them to start addressing the idea of distributions in the context of a “standard value” are to get them all to measure their own pulse rate, or to ask them whether, if “70 is normal”, that means the bloke sitting next to them with a pulse of 50 is a freak, or is ill.

    BTW, if you think that’s sadistic… I remember once being sat in an Hons viva (for end of 1st yr MBChB) with a very eminent medical social scientist. Said luminary fixed the student with a piercing gaze and then asked “Do I have the average number of arms?”

  55. RS said,

    September 18, 2006 at 5:13 pm

    The question is what kind of average he’s using.
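    Which is the whole trick: nobody has more than two arms and a few people have fewer, so the mean sits just below 2 and nearly everyone is “above average”. A toy sketch (the population figures are invented for illustration):

```python
from statistics import mean, median, mode

# Toy population (invented numbers): everyone has two arms except a
# handful who have fewer, through accident or birth. None have more.
arms = [2] * 9998 + [1, 0]

print(mean(arms))    # a shade under 2: the eminent examiner beats the mean
print(median(arms))  # by the median he is exactly average
print(mode(arms))    # and by the mode too
```

    So the honest viva answer is “more than the mean, but exactly the median and the mode”.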

  56. KJ said,

    September 18, 2006 at 5:16 pm

    Having dropped a pebble, I’ll (gently!) chide Ben for his implication that statistics is beyond him.

    You don’t need much (any?) mathematics for the essential ideas, but IMHO those ideas are essential for anyone seriously wanting to understand data. They’re simple but deep(!) and well worth the effort. I view lay people’s refusal to try to grasp ideas important to the domain of discourse as almost as reprehensible as such an attitude in scientists. (Did that sound pompous enough?)

    The Cohen article mentioned earlier in the thread is good, I hadn’t seen it before. Personally I echo Jerry Dallal’s Tufts Home Page where (here) he says

    Perhaps the finest series of short articles on the use of statistics is the occasional series of Statistics Notes started in 1994 by the British Medical Journal. It should be required reading in any introductory statistics course. The full text of all but the first ten articles is available on the World Wide Web. The articles are listed here chronologically.

    Absence of evidence is not evidence of absence is something every investigator should know, but too few do. Along with Interaction 2: compare effect sizes not P values, these articles describe two of the most common fatal mistakes in manuscripts submitted to research journals. The faulty reasoning leading to these errors is so seductive that papers containing these errors sometimes slip through the reviewing process and misinterpretations of data are published as fact.

    Those two articles should be understandable by any scientist and IMHO should be required reading!

  57. RS said,

    September 18, 2006 at 5:19 pm

    KJ, I don’t think I agree that “The presence or absence of significant p-values tells more about sample size than effect size” because in very many areas of biomedicine the sample sizes are restricted to a fairly narrow range, which brings the p-value back to effect size and variance.

  58. Dr Aust said,

    September 18, 2006 at 5:27 pm

    Exactly right, RS. Really a Q to see if the victim knew any basic stats / could think on his/her feet. Bit of a mind-bender with no advance warning, though. I think the “subject” said something sensible after some prompting.

  59. Ben Goldacre said,

    September 18, 2006 at 5:31 pm

    kj: straight stats is not that much of a problem, it’s when people start getting into “what does it all mean though, really…” that things get tricky, especially once you get away from the easy stuff like a student’s t-test.

  60. Filias Cupio said,

    September 19, 2006 at 6:01 am

    Rosemary Kesick, quoted in post 46:

    “I don’t have time for statistics when I see a sick child in front of me.”

    My response:

    You should be thankful that somebody has had time for statistics, or else you’d be treating that sick child by bleeding her.

  61. Dr Aust said,

    September 19, 2006 at 11:00 am

    RS wrote:

    “…in very many areas of biomedicine the sample sizes are restricted to a fairly narrow range, which brings the p-value back to effect size and variance.”

    Agreed – in most laboratory studies n will be between 4 and 10 – so one rarely uses anything other than simple parametric testing w student’s t – or analysis of variance for comparing same parameter between multiple groups. And in some branches of mol biol no stats at all – if you are trying to clone “the gene for x” it either works or it doesn’t.

    The complexity of the statistics in proper trials comes as a definite surprise to people encountering them for the first time…. as usually do the very small effects.

  62. RS said,

    September 19, 2006 at 11:21 am

    4 and 10! I’ll have you know I can reach as high as fifteen!

  63. KJ said,

    September 19, 2006 at 1:02 pm

    Hi RS. You said
    —————
    ‘I don’t think I agree that “The presence or absence of significant p-values tells more about sample size than effect size” because in very many areas of biomedicine the sample sizes are restricted to a fairly narrow range, which brings the p-value back to effect size and variance.’
    ————
    I agree that IFF I know sample size then the p-value tells something about the relative magnitudes of effect size and variance, but surely that doesn’t gainsay the phrase you quoted?

    In the context of this thread I see “in most laboratory studies n will be between 4 and 10”, “63 kids”, “41 children”, “50 children”, “40 subjects”, “117 subjects”. Then we have ‘the trial that ate itself’ with a planned 5000! Admittedly that last trial seems to have more fundamental failings, but aotbe would have much more power than the others. Assuming that the others were appropriately sized to detect effects of clinical significance (I can’t judge that) a (reasonably designed!) 5000 size trial would detect clinically insignificant effects with very high statistical significance (low p-values). This is what I meant when I said

    “They will show a (statistically) significant effect if they try hard enough, that doesn’t mean it’s big enough to matter! ”

    (By the way, albeit not in a biomedical area, I often work with sample sizes of thousands.)
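    KJ’s “try hard enough” claim is just the power calculation run in reverse: a sketch with an approximate two-sample z-test shows a clinically trivial standardized effect of 0.1 sd going from near-invisible at the small n’s in this thread to near-certain detection at trial-sized n’s. The normal approximation and the chosen effect size are assumptions for illustration:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(effect, n):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05,
    per-group size n, standardized effect size `effect` (far tail ignored)."""
    return 1.0 - phi(1.96 - effect * math.sqrt(n / 2.0))

# A standardized effect of 0.1 - clinically trivial - versus sample size:
for n in (20, 50, 2500):
    print(f"n = {n:4d} per group: power = {power(0.1, n):.2f}")
```

    At n = 50 per group the trivial effect is detected barely more often than chance; at n = 2500 per group it would come up “significant” almost every time, which is exactly why statistical significance in a 5000-subject trial says nothing about whether the effect is big enough to matter.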

  64. stan sharred said,

    September 19, 2006 at 1:03 pm

    I work as a teacher in a hospital setting.

    Recently received free samples of snake oil- sorry fish oil.

    Do these people really think that we (teachers) are going to start dispensing these things?

    They were despatched to pharmacy for distruction.

  65. stan sharred said,

    September 19, 2006 at 1:03 pm

    sorry, destruction

  66. RS said,

    September 19, 2006 at 1:22 pm

    Sample sizes of thousands, bah, that’s not science, that’s physics, psychology, or (eugh) sociology.

    I agree that when you have these sorts of massive disparities in sample sizes, and in medical studies in general, concentration on p-values is misleading. I was just defending the use of p-values in science in general, and I was thinking particularly of less applied fields, because the argument that with a big enough sample size a significant difference will eventually be found is purely hypothetical and doesn’t necessarily apply in practice; in fact, with the generally small sample sizes found in (non-applied) biomedical research, significant p-values are fairly meaningful.

  67. RS said,

    September 19, 2006 at 1:26 pm

    KJ, so what I am disagreeing with is “A significant p-value doesn’t say the effect is “real” in any meaningful sense, it just means the sample is big enough to demonstrate that the effect is non-zero, which we knew anyway!” But I think we all know what everyone else is talking about here, it’s just what way we choose to spin it to make a particular point.

  68. RS said,

    September 19, 2006 at 3:18 pm

    Wow, that’s another post I’ve seen appear higher up the list after a big delay (e.g. #64, #56) – do they get lost in the system or is there a set of keywords that initiate manual moderation by Ben?

  69. Ben Goldacre said,

    September 19, 2006 at 3:30 pm

    “I work as a teacher in a hospital setting. Recently received free samples of snake oil- sorry fish oil.”

    dude – what brand were they, and what promotional material came with them?

  70. Ben Goldacre said,

    September 19, 2006 at 3:32 pm

    “Wow, that’s another post I’ve seen appear higher up the list after a big delay (e.g. #64, #56) – do they get lost in the system or is there a set of keywords that initiate manual moderation by Ben?” some do yeah, from the days before registration. i should get round to switching it off but there are a few spammers around and they can do a lot of damage very quickly. god knows what the triggers you’re hitting are.

  71. pv said,

    September 19, 2006 at 6:18 pm

    Evidence-biased said, amongst other things in the other thread (www.badscience.net/?p=297 )

    “September 17, 2006 at 3:04 pm

    As leading author on the Oxford-Durham study (Pediatrics 115 (5) 1360-1366), I’ve tried to stand back from this debate – but for the benefit of #98 and #105, I must make clear that:

    1) Durham LEA have all the data collected for our trial, so if they wanted to publish any further papers, they’re very free to do so. (By contrast, some questionnaire data was never provided to the Oxford researchers; and the final data on motor skills – a primary outcome measure – only came across in summer 2004). To suggest that my well-known dislike of Equazen’s promotional activities makes it ‘unlikely that other papers will be released’ is thus misleading in the extreme. I do wonder why Durham LEA haven’t got on with this themselves – but presumably they’re too busy. Or maybe it’s because they know we’d insist on checking their sums.

    2) Unfortunately, media attention dogged our study from early 2002 when data collection first began. The Oxford researchers repeatedly asked Durham LEA to desist from such activities until this trial was completed, analysed and published – but to no apparent avail. This constant premature leakage of information to the media, and its apparent exploitation for commercial purposes, was the reason why our previously good relations with Durham LEA broke down….”

    The last couple of sentences speak volumes as to the (lack of) credibility and integrity of the Durham LEA. If I were a parent with a child at a Durham LEA school I would be demanding an explanation at the very least.

  72. cath having fun said,

    September 19, 2006 at 8:58 pm

    Durham seem to be obsessed by spin at the expense of science – never one to let facts get in the way of a hyped story – but why this approach, when a more impartial, objective and scrutinised trial would generate more kudos (especially if fish oil were proved useful)?

    Hang on – isn’t Sedgefield part of County Durham?

    …and isn’t the Rt Hon Tony Blair PM, MP for Sedgefield?

    I make a ridiculous hypothesis for the Daily Mail Health Section that it must be something in the drinking water that prevents a rational and objective response to the questions asked…

  73. stever said,

    September 19, 2006 at 11:49 pm

    fuck. ive battled through all that, and Im now too shattered to contribute.

    *bed*

  74. Coobeastie said,

    September 20, 2006 at 8:37 am

    Ben – you have another fan :)

    www.fabresearch.org/view_item.aspx?item_id=1019

  75. RS said,

    September 20, 2006 at 1:13 pm

    Isn’t that the poster Evidence-biased?

  76. stan sharred said,

    September 21, 2006 at 3:26 pm

    Ben,

    Sorry, can’t remember which company it was, since I received them quite a few months ago.

    The accompanying literature was of the “Evidence/research shows” and how beneficial they would be.

    If I receive any more, I’ll let you know…

  77. jdc325 said,

    September 28, 2006 at 9:18 am

    Come on now, you can’t compare fish oil to snake oil – everyone knows that snake oil (aka Echinacea) is a placebo, whereas fish oil has significant evidence behind it… for certain conditions NOT including “may help to maintain concentration” or any similar schoolchildren-related claims. Omega-3 fatty acids are almost certainly beneficial in depression (references available if you wish to review) but the only study I have seen into omega-3 fatty acids and concentration / school performance that I felt I could trust was the Oxford-Durham trial.

    Don’t worry – as they say, “it will all come out in the wash”. The Common Position (EC) No. 3/2006 will lead to an EU regulation (instantly enforced, unlike directives that must be enshrined in member state law first) that will require nutrition and health claims made on foods (including ALL supplements) to be backed up by generally accepted scientific evidence – or the claims to be removed. Let’s see how many claims are removed – I would ‘guesstimate’ that the majority will have to go, as the standards are quite stringent and claims such as “may help to maintain concentration” etc will be completely barred.

    I don’t believe that anyone should dismiss use of fish oil / a diet rich in fish as being of no benefit due to their own prejudices, as that is science just as bad as the ‘science’ promulgated by certain unethical souls (or businesspeople as they prefer to be known).

    Regards.
