Nullius in verba. In verba? Nullius!

June 30th, 2010 by Ben Goldacre in authority, bad science, guardian, media, open methods, show your working | 36 Comments »

Hi there, just back from Glastonbury; here’s my column from last Saturday. The Guardian didn’t take it: they said it was too soon to be critical of a Guardian journalist after the column on fish oil, and that the issue was too technical. I’m not prone to melodrama, so I don’t see this as a big thing, but I was a bit baffled by the insistence on experiencing this column as critical, when it’s not written that way, and I don’t think it reads that way either.

I hope it raises an interesting point about whether it’s appropriate to present a highly technical analysis and dataset that seeks to influence national health policy in lay media alone, rather than in an academic journal, and without any detail on the scientific methods being available. I sent a copy to Peter Holt, the academic surgeon who did the AAA analysis for the Guardian, and he says: “I think that it is actually quite a good article and makes a valid point.”

Your mileage may vary etc.

Nullius in verba. In verba? Nullius!

Ben Goldacre, Not In The Guardian, Saturday 26 June 2010

Here is some pedantry: I worry about data being published in newspapers rather than academic journals, even when I agree with its conclusions. Much like Bruce Forsyth, the Royal Society has a catchphrase: nullius in verba, or “on the word of nobody”. Science isn’t about assertions about what is right, handed down from authority figures. It’s about clear descriptions of studies and the results that came from them, followed by an explanation of why they support or refute a given idea.

Last week the Guardian ran a major series of articles on mortality rates after planned abdominal aortic aneurysm (AAA) repair in different hospitals. Like many previously published academic studies on the same question, it found that hospitals which perform the operation less frequently have poorer outcomes. I think this is a valid finding.

The Guardian pieces aimed to provide new information, in that they did not use the Hospital Episode Statistics, which have been used for much previous work on the topic (and on the NHS Choices website to rate hospitals for the public). Instead they approached each hospital with a Freedom of Information Act request, asking the surgeons themselves for figures on how many operations they did, and how many people died.

Many straightforward academic papers are built out of this kind of investigative journalism work, from early epidemiological research into occupational hazards through to the famous recent study hunting down all the missing trials of SSRI antidepressants that companies had hidden away. It’s not clear whether this FOI data will be more reliable than the Hospital Episode Statistics numbers – “discuss the strengths and weaknesses of the HES dataset” is a standard public health exam question – and reliability will probably vary from hospital to hospital. One unit, for example, reported a single death after 95 emergency AAA operations in its FOI response, when on average about one in three people in the UK die during this procedure, which suggests to me that there may be problems in the data. But there’s no doubt this was a useful thing to do, and there’s no doubt that hospitals should be helpful and share this information.
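
To illustrate the arithmetic behind that suspicion, here’s a minimal sketch – my own construction, not anything from the Guardian’s analysis – assuming the one-in-three mortality figure holds:

```python
# Hypothetical sanity check: how plausible is at most 1 death in 95
# emergency AAA operations if the true mortality rate is about one in three?
from scipy.stats import binom

n_ops = 95        # operations reported in the FOI response
p_death = 1 / 3   # approximate UK mortality for emergency AAA repair

p_at_most_one = binom.cdf(1, n_ops, p_death)
print(f"P(at most 1 death in {n_ops} ops) = {p_at_most_one:.1e}")
# prints roughly 1e-15: either a truly remarkable unit, or a data problem
```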

So what’s the problem? It’s not the trivial errors in the piece, although they were there. The article says there are ten hospitals with over 10% mortality, but in the data there are only seven. It says 23 hospitals do over 50 operations a year, but looking at the data there are only 21.

But here’s what I think is interesting. This analysis was published in the Guardian, not an academic journal. Alongside the articles, the Guardian published their data, and as a longstanding campaigner for open access to data, I think this is exemplary. I downloaded it, as the Guardian webpage invited, did a quick scatter plot and a few other things: I couldn’t see the pattern of greater mortality in hospitals that did the procedure infrequently. It wasn’t barn-door obvious. Others had the same problem. I received a trickle of emails from readers who also couldn’t find the claimed patterns (including a professor of stats, if that matters to you). John Appleby, chief economist on health policy at the King’s Fund, posted on the Guardian’s Comment is Free explaining that he couldn’t find the pattern either.
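
The quick look amounted to something like the sketch below – the file and column names here are hypothetical stand-ins, not the Guardian’s actual spreadsheet:

```python
# A first-pass look at the open data: mortality rate against caseload.
# "aaa_data.csv", "operations" and "deaths" are hypothetical names standing
# in for whatever the published spreadsheet actually uses.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("aaa_data.csv")
df["mortality"] = df["deaths"] / df["operations"]

plt.scatter(df["operations"], df["mortality"])
plt.xlabel("Elective AAA operations per year")
plt.ylabel("Mortality rate")
plt.title("Is mortality higher in low-volume hospitals?")
plt.show()
```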

The journalists were also unable to tell me how to find the pattern. They referred me instead to Peter Holt, an academic surgeon who’d analysed the data for them. Eventually I was able to piece together a rough picture of what was done, and after a few days, more details were posted online. It was a pretty complicated analysis, with safety plots and forest plots. I think I buy it as fair.
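
For readers who haven’t met this style of analysis: one common way of comparing institutional mortality is a funnel-style plot with binomial control limits. The sketch below is a generic illustration built on simulated data – not a reconstruction of Holt’s actual method:

```python
# Generic funnel-style plot on simulated data: each unit's mortality against
# its caseload, with 95% binomial control limits around a pooled rate.
# Illustration only; the real analysis may differ substantially.
import numpy as np
from scipy.stats import binom
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
ops = rng.integers(5, 120, size=40)      # simulated annual caseloads
pooled_rate = 0.05                       # illustrative pooled mortality
deaths = rng.binomial(ops, pooled_rate)  # simulated outcomes

n = np.arange(5, 125)
lower = binom.ppf(0.025, n, pooled_rate) / n
upper = binom.ppf(0.975, n, pooled_rate) / n

plt.scatter(ops, deaths / ops, label="hospitals (simulated)")
plt.step(n, lower, where="mid", color="grey")
plt.step(n, upper, where="mid", color="grey")
plt.axhline(pooled_rate, linestyle="--")
plt.xlabel("Operations per year")
plt.ylabel("Mortality rate")
plt.legend()
plt.show()
```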

So why does it matter, if the conclusion is probably valid? Because science is not a black box. There is a reason why people generally publish results in academic journals instead of newspapers, and it has little to do with “peer review” and a lot to do with detail about methods, which tell us how you know whether something is true. It’s worrying if a new data analysis is published only in a newspaper, because the details of how the conclusions were reached are inaccessible. This is especially true if the analysis is so complicated that the journalists themselves did not understand it and could not explain it; and transparency is especially important if you’re seeking to influence policy. The information needs to be somewhere.

Open data – people posting their data freely for all to re-analyse – is the big hip new zeitgeist, and a vitally important new idea. But I was surprised to find that the thing I’ve advocated for wasn’t enough: open data is sometimes no use unless we also have open methods.



36 Responses



  1. skyesteve said,

    June 30, 2010 at 11:46 am

    Ben – whilst I’m all in favour of open data, it’s of little use to the lay public (and even many professionals!) if they don’t have the knowledge or skills to interpret it. It’s what we see all the time with risk reporting in the tabloid press, which ends up making the public feel that some one-in-10-million event is bound to happen to them (although, of course, that’s the same logic as buying a lottery ticket, and millions of those are sold each week).
    I would suggest that the majority of doctors, for example, don’t know how to critically read and appraise an article in the Lancet, BMJ or NEJM, so even publishing in an academic journal is no guarantee of fair analysis and assessment!

  2. Gordon.Comstock said,

    June 30, 2010 at 12:29 pm

    Hi Ben,

    Long time reader first time poster.

    I was also at Glastonbury and was disappointed not to find my dose of Bad Science in the Saturday Guardian. I was labouring under the misapprehension that it was just the festival edition which omitted your piece, so thank you for the clarification.

  3. Symball said,

    June 30, 2010 at 12:31 pm

    Ahh – it appears! I tried the badscienceblogs link a few days ago and wondered what had happened. Shame about the Grauniad, but I guess Laurence’s criticism obviously hit a nerve – how dare you moan about bad science in a bad science column!

  4. jehodge said,

    June 30, 2010 at 12:31 pm

    It’s true that open data is sometimes no use unless we also have open methods, but this doesn’t mean it can’t be published in a newspaper. The Guardian needs to broaden its focus and make sure that its journalists understand and share methods as well as data. This is especially important in complex cases such as this one. The column certainly makes a valid point – shame they were too spineless to publish it.

  5. jehodge said,

    June 30, 2010 at 12:33 pm

    PS: I am also a long-time reader, first-time (now second-time…) poster.

  6. Matt Keefe said,

    June 30, 2010 at 12:46 pm

    As a non-academic writer currently researching a book on disease, I can say that data appearing in newspapers has one substantial advantage over that published in journals – it’s freely available to those of us who aren’t affiliated to large institutions. Open data is certainly the way to go but, and I agree with your premise, if it is compromised by lack of an accompanying analytical method, then the places where we can find those methods – i.e., academic journals – need to be a little more open too.

  7. twaza said,

    June 30, 2010 at 1:00 pm

    The Guardian is misunderestimating the intelligence of their readers.

    “Nullius in verba” is a great maxim. Unfortunately it is not possible for an individual to check the raw data behind every claim. Individuals have to rely on other individuals to do most of the checking for them. This can only happen if there is a social system to organize the work; worker bees critically appraising the evidence for the hive.

    Science (and science-based medicine) is organized like this. Journalists and the media ought to be contributing members of the hive, not parasites living off the honey and obstructing the worker bees.

  8. gozdez said,

    June 30, 2010 at 2:31 pm

    Hi @bengoldacre

    You make some very interesting points, Ben. I’m glad our investigation inspired you to write a post.

    You are right to point out that methodologies should accompany data. That’s why we had planned all along to publish our graphs/stats/methods on the same day as our investigation, but there were technical problems. A couple of days later it was possible to get them uploaded, so we did.

    For those who haven’t seen:
    www.guardian.co.uk/society/series/holt-analysis

    I’d also like to point out that there is other research in peer-reviewed journals that policy experts can turn to – as well as speaking to the National Vascular Society and the Royal College of Physicians, who have shown us support, as has cancer specialist Professor David Kerr.

    www.guardian.co.uk/society/2010/jun/27/cancer-specialist-treatment-outcomes-unrecorded

    www.guardian.co.uk/uk/2010/jun/18/patient-death-rates-revealed

    Accurate data is not being collected properly, so improvements in patient care cannot be pursued.

    The finding that hospitals doing more of a procedure were better than those doing less was almost incidental to that – but it does bolster the public policy case for reconfiguration, which Andrew Lansley has put on hold. It is therefore entirely valid for a newspaper to run a story suggesting he might be wrong.

    The National Vascular Society, which runs the National Vascular Database, has been encouraging all surgeons to submit accurate data – but fewer than half do.

    We hope our investigation sheds light on this problem with data collection – change is long overdue, but ultimately it can only be brought about by doctors, surgeons and health policy experts. Not journalists.

  9. oakdale said,

    June 30, 2010 at 6:12 pm

    The article was indeed interesting, and I’d just like to say that I too wondered how they’d got from the data to the conclusion. Even when I saw the supplementary web pages I didn’t feel much better.
    Curiously, when a great friend of mine was offered an elective AAA op, I could swear he was quoted a higher risk than 10% – though a lot less than if it had ruptured, obviously.

    Perhaps the piece should have been more positive up front: Great job! (Shame about the method). Or something.

  10. thetallerpaul said,

    June 30, 2010 at 7:51 pm

    Ben, unlike some of the above I am a first-time reader and poster, having only found your excellent work via your book. I feel happy in my ignorance knowing there are people like you out there. I find adverts for the first ever shampoo to contain pearl extract with 58% more shineappeality enraging enough, so seeing what you see might be the death of me.

    It seems a shame to be trivial on a first post, but the admission that you were at Glastonbury, along with the column’s no-show, sounds like tardy submission as opposed to editorial decision?

    Suspicious? Possibly. Irrelevant? Certainly.

  11. Moganero said,

    June 30, 2010 at 9:03 pm

    I logged on early on Saturday and read this item – when I went back later it was gone. Wondered if you’d had to pull it because you upset someone the Grauniad didn’t want to take on! Glad to see it’s back.

  12. niko123456 said,

    July 1, 2010 at 4:37 am

    I think in practice academic journals just shift the ‘black box’ by using a citation – it’s not ‘this is how I found/discovered/analysed my data’, it’s ‘here is a shorthand link to a bunch of other data that other people have agreed is correct, which I will use as the basis of my assumption’.

    So I think the rather flippant way you passed over the peer review aspect does academic journals a disservice. You should be looking to put more emphasis on ‘peer review’ as a group of living, breathing humans who should be accountable for their opinions (authority figures?).

  13. drunkenoaf said,

    July 1, 2010 at 8:37 am

    @niko

    When academics hive off their methods section as “as described previously, Bloggs et al, 2000”, it’s not nefarious, it’s just time- and space-saving. It’s not that they’re hiding anything; often it’s because it’s the same old study again with different samples, or because a standard technique is a standard technique. The arse is that few of these papers are open access. Tough if you’re not an academic.

  14. iucounu said,

    July 1, 2010 at 2:01 pm

    I prefer Bruce Forsyth’s old catchphrase.

  15. Dr Spouse said,

    July 1, 2010 at 2:20 pm

    This reminds me of the YouGov poll on children’s early language that was widely reported a few months back – again you could get the data (if you dug a little), and in this case, though the proportions reported seemed to tally with most of the data, the methods – as far as it’s possible to tell – were really poor, and not reported well enough to tell exactly how poor they were.

    [BBC article: news.bbc.co.uk/1/hi/education/8436236.stm
    My own reanalysis if you are interested: evidence-based-parenting.blogspot.com/2010/01/one-in-six-children-have-difficulty.html]

  16. Logas2 said,

    July 1, 2010 at 3:59 pm

    @drunkenoaf

    It’s not nefarious, but it can be really irritating – especially when, even with institutional access, you can end up going through a whole chain of “as described previously, Bloggs et al, 2000” leading to “as described previously, Smith et al, 1996”, etc., only for it to turn out that the paper you actually want is too old to be available online, and the only copy of the journal you need is in some forgotten corner of a library you’ve never been to. Then when you get there it’s mysteriously gone missing…

  17. clareske said,

    July 1, 2010 at 11:01 pm

    This is a long-standing problem in the humanities. Political scientists, ethicists and linguists are all held to higher standards than their journo counterparts, who publish new ideas every day; blogs abound complaining of shonky standards in these fields too. Political commentators, for example, provide new analyses without an ounce of the rigour a political scientist would be expected to devote to the same argument.

    In both cases those working in academia are subject to peer review, with a pathway to their sources, methods, footnotes etc. Those writing in the press aren’t. The problems described in this article apply in both instances: methods and sources can’t be scrutinised, yet both wish to affect public policy while being subjected to markedly different standards. What’s the answer?

    Forbidding or completely disregarding political analysis in the papers and restricting our attention to political science journals and the people who report on them is palpably absurd — even more absurd than asking health/science policy makers to restrict their information sources to reputable peer reviewed scientific journals.

    I think all we can do (and ask policy makers to do) is acknowledge that it’s a different forum, always be alert to the different standards of scholarship, and bear these in mind when drawing any conclusions. Newspaper columns are just starting points for further thought or research. It’s a bit new for the natural sciences, since there’s not nearly as much new research published outside of academia, but political scientists are long used to it, and others like linguists (such as our friends at (the) languagelog.com) seem to have a comfortable tolerance of the amateur linguists writing in newspaper columns as a different breed of commentator – even if, with exasperation, they have to tear giant shreds through the stinking garbage some of the worst commentators produce from time to time. It’s a hard life.

  18. richardt63 said,

    July 2, 2010 at 11:20 am

    Off topic a bit, but skyesteve mentioned risk and how it is not understood by many people. True, but the lottery is a little different: every week someone normally wins the jackpot (because of the number of tickets sold); most weeks no one is reported in every national newspaper to have been killed by lightning.

  19. skyesteve said,

    July 2, 2010 at 2:14 pm

    @richardt63 – absolutely correct, but your risk of being struck and possibly killed by lightning in the UK is, perhaps, still higher than that of winning the lottery with a one-off ticket!
    TORRO (the Tornado and Storm Research Organisation) reckons between 30 and 60 people are struck by lightning each year in the UK – that’s a risk of about 1 in 1–2 million. Of these strikes about 10% will be fatal, so at the top end your risk of being killed by lightning is about 1 in 10 million. A single set of lottery numbers has a chance of about 1 in 14 million of winning.
    Of course, if you buy more numbers and more tickets, your chances of winning rise. But by the same token, if you choose to go out in a thunderstorm and swing a golf club around in a flat open field, your chances of being killed by lightning would also rise dramatically!
    My problem is that 95% of people are ignorant of the meaning of things like relative risk and absolute risk, and the press continually present things in a way that makes everything seem equally risky. It just depends what the latest whipping boy is – urban foxes, anyone?
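
    To check my own arithmetic – a quick sketch, assuming a UK population of roughly 60 million (a figure I haven’t quoted above, so treat it as an assumption):

```python
# Back-of-envelope check of the numbers above. The UK population of ~60
# million is an assumption; the strike and fatality figures are the TORRO
# estimates quoted in the comment.
uk_population = 60_000_000
strikes_per_year = 60           # top end of the TORRO estimate
fatal_fraction = 0.10
lottery_ways = 13_983_816       # ways to choose 6 numbers from 49

risk_struck = strikes_per_year / uk_population   # about 1 in 1,000,000
risk_killed = risk_struck * fatal_fraction       # about 1 in 10,000,000

print(f"struck by lightning: 1 in {1 / risk_struck:,.0f}")
print(f"killed by lightning: 1 in {1 / risk_killed:,.0f}")
print(f"lottery jackpot:     1 in {lottery_ways:,}")
```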

  20. neo_nutmeg said,

    July 2, 2010 at 10:07 pm

    This article highlights a very important issue, especially as statistical methods become more and more complex. Within some fields, recording all your analyses in a script that is run in statistical analysis software is becoming more commonplace. Ideally, I think these scripts should be published alongside academic papers (and maybe even refereed? – although one would have to be familiar with the specific software…). Programs such as R, SAS and Matlab all encourage such ‘reproducible research’.
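
    To make that concrete – an entirely hypothetical sketch, here in Python rather than R or SAS, with made-up file and column names – the whole analysis lives in one file anyone can rerun:

```python
# A minimal 'reproducible research' script: the entire analysis, from raw
# data to published result, in one rerunnable file. File and column names
# are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("outcomes.csv")  # columns: hospital, operations, deaths

# Binomial GLM: do the odds of death fall as annual caseload rises?
endog = np.column_stack([df["deaths"], df["operations"] - df["deaths"]])
exog = sm.add_constant(df["operations"])
result = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()

print(result.summary())  # the printed table is the reportable result
```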

  21. mch said,

    July 3, 2010 at 12:34 pm

    Open science does indeed require more than ‘just’ publishing data. As Ben says at the end, you need to ‘show your working’ too, but there are other things as well: the provenance of the data and any adjustments made to it – its full history, if you like. And, like properly commented code, the methods require similar supporting documentation.

  22. sarahboseley said,

    July 3, 2010 at 12:56 pm

    I wasn’t going to comment, but since I gather Ben has been discussing this in public, I thought I’d better make a couple of points.

    Firstly, it was the decision of the editor of Ben’s column not to run it in the paper. Yes – I understand he did think it was too soon for another pop at a Guardian story, but the suggestion that we might hold it out came from Ben himself. This is what Ben wrote in an email to that editor, and copied to me:

    “i can only imagine that you’d find the column boring, more than anything else, and i’m very happy to find it a home somewhere else and skip a column for a week.”

    My own view was that Ben had completely missed the point of the story, which I felt and still feel is very important – that most doctors do not collect good data about their treatment outcomes. Maybe Ben does – but most do not and therefore cannot compare their performance with their peers. Post-Bristol, the exception is the cardiac surgeons. Many people think and have said that all doctors should be doing this and sharing it with the public. If they don’t, patients cannot make an informed choice of where to be treated. The statistical analysis is secondary – although it illustrates why keeping good data is important.

    And my other concern is that Ben labeled this a “dodgy story” in a posting on the day of publication. Once he saw the graphs we published online (I wanted them up on the day but the relevant people could not figure out a way to do it straight off) he could no longer think that. But unfortunately, mud sticks. I think he should retract the remark or post a correction and have told him so. Regrettably, he hasn’t done it. Instead, he decided to argue that newspapers should leave this sort of thing to academic journals. I profoundly disagree.

  23. mch said,

    July 3, 2010 at 2:06 pm

    @sarah, I don’t know anything about the internal arguments, but as an outsider I don’t read Ben’s article as a preference for leaving these things to academic journals. Rather, that journalists – including Ben – have some way to go yet before understanding what academic journals ideally do.

    Frankly, I think very few academic journals are all that brilliant anyway; there seems to be some idealised impression of how academic science is supposed to work that is rarely matched in reality. That’s one of the reasons many of us want to see research forced into real openness, and I feel that both your work and Ben’s somewhat valid criticism of it are excellent steps in the right direction.

  24. Ben Goldacre said,

    July 3, 2010 at 3:07 pm

    Hi Sarah

    I write a lot about problems and errors in a way that I hope is informative, but I’m not an enthusiast for personal disputes with individual people: I don’t think they’re interesting to readers, or informative on the wider issues, which interest me more. So I’m sad to be writing this. Also, I think you’re a good thing, and I’m not aware of having criticised you, either above or in my posterous post.

    I’m not sure it’s entirely appropriate for you to post the content of part of an email I sent to my editor at the Guardian. That email was explaining that I wouldn’t experience any drama if people really didn’t want to print the column – as they were saying they might not – since, after a long series of conversations and emails, it was clear that publishing it was going to be controversial, and that it was being experienced as critical, which I don’t think it is. Others can read it above and draw their own conclusions.

    You were angry: threatening to write an article about what a bad man I am, copying your emails to people at the Guardian I’ve never heard of, declining to answer questions about aspects of the piece because you were away at a conference and couldn’t access the information (which was online), and so on. Meanwhile I was in a field at Glastonbury and plainly unable to battle on that front. As I said, I’ve no desire to get into a personal dispute; as I have said in almost every email to you, I think you’re a good thing, and I have no criticism of you personally.

    The discussion you refer to on posterous, between me and various others, describes how we downloaded the data from the Guardian site, as we were invited to do, and looked for the pattern in the data you provided, as we were invited to do, and couldn’t find it. It also describes how we did see the pattern, once the complex analytic strategy was eventually revealed. I think that discussion was entirely fine, reasonable and appropriate, and I encourage people to read it for themselves if they’re interested. It has not been edited or changed, I don’t think that is necessary.

    bit.ly/9g8Fmr

    Lastly, the column may well be dull. I’m often genuinely surprised that the Guardian print my stuff, to be honest, and I have been for seven years now. On the other hand, I think the piece above does raise a vaguely useful point; I’m pleased that Peter Holt thought so, and that it was reposted on the Open Knowledge Foundation website, as well as elsewhere. I do think there are interesting new problems raised when a complex new data analysis – so complex the journalists themselves could not understand or explain it – is published only in a mainstream newspaper, with the methods inaccessible, especially on a political issue where people are campaigning to change policy, and where transparency therefore matters even more than usual.

    As I said, I’ve no interest in entering into personal disputes with anyone, I don’t have any criticisms of you, and I hope that in trying to explain this I haven’t inflamed things further. Naturally, as I have said here and elsewhere, I think it is vitally important that all healthcare workers collect good information on activities and outcomes, both to improve practice and to conduct research, which will in turn improve practice further.

  25. gimpyblog said,

    July 3, 2010 at 3:11 pm

    @sarahboseley

    I’m intrigued by this:

    “he decided to argue that newspapers should leave this sort of thing to academic journals. I profoundly disagree.”

    Academic journals demand peer review and invite comment post-publication. If you think novel findings should be reported first in newspapers, then the very least you can do is expect review and comment as well, and provide the same information that journals provide: methods, references and declarations of interest. Otherwise your output will be inferior but, ironically, have wider impact – a dumbing down of the scientific method, ultimately for profit.
    If journalists want to be researchers, then they should be held to the same standards.

  26. Dr Jim said,

    July 3, 2010 at 3:32 pm

    I agree with Ben and Gimpy on this one. Whilst I have a generally positive opinion of the piece, I think that if newspapers are going to make forays into publishing research pieces such as this, they must accept the often scathing criticism that accompanies many peer-reviewed academic papers.

    I have both received (and prepared) reviews that were critical of the results, interpretations or methodologies of published scientific research; this is just part and parcel of the peer-review process. I would invite Sarah, and any other journalist interested in engaging in this type of work, to have a look at a journal such as the EMBO Journal, which now publishes the full record of the peer-review process, warts and all.

  27. Some Dude said,

    July 4, 2010 at 12:49 am

    Hello – long-time reader, first-time writer. Loved this one, Ben… I hope the Guardian have the courage to print it when the time’s right.
    You’re, as usual, right on the ball with this one. I think anyone who is prepared to print on these subjects should likewise be prepared for the consequences of their actions, particularly when they get it all backwards. It all makes me glad there are people like you out there, still reading and writing the good fight for truth in the torrid ocean of nonsense that permeates everywhere these days.

  28. Xobbo said,

    July 5, 2010 at 8:11 am

    @sarahboseley

    “And my other concern is that Ben labeled this a “dodgy story” in a posting on the day of publication. Once he saw the graphs we published online he could no longer think that.”

    But that’s exactly what it already says on the posting you’re referring to! What else is he supposed to retract?

  29. Dan Kimberg said,

    July 5, 2010 at 2:19 pm

    Scientific reporting isn’t perfect anywhere, whether it’s in top-tier scientific journals, regular newspapers, blogs, or the walls of bathroom stalls. But the hierarchy is pretty clear. There are very good and extremely important reasons for peer review, even if it’s far from perfect. I’m profoundly suspicious of anyone who claims to have an important scientific finding and chooses to sidestep this process.

    Ben, contra the assertion in your article, this has everything to do with peer review. One of the most important responsibilities of the reviewer, as you well know, is to render judgment on whether or not the report is appropriately detailed. At least in principle, a report that is missing critical details concerning the analysis will not make it into any reputable scientific journal. Far from being a new concept, “open methods” has long been one of the most important parts of scientific reporting, and of course of scientific reviewing. The fact that this would strike some as a novel insight is in itself a very good reason for not leaving this important process to newspapers.

    I want to add, in response to some of the comments, that the purpose of adequate detail in scientific reports isn’t so that everyone can evaluate a claim properly. It’s so that someone can.

  30. mch said,

    July 7, 2010 at 9:40 am

    @Dan – I’d love to know what those extremely important reasons are for limited journal peer review that make it somehow better than wider and more open public review.

    “Open methods” is indeed nothing new in a few fields, but it’s not at all universal in scientific reviewing, and by bringing studies directly to the public this shortcoming is more widely identified and broadcast. As here.

    Adequate detail so that someone can evaluate a claim is a Good Thing. Openly publishing it so that some people can do so is Better.

    If this newspaper is not very good at it on the first few tries, that can be fixed through experience and lessons, not abandoned for the cosy closed world of journals.

  31. chinaphil said,

    July 9, 2010 at 6:08 am

    I think I’m slightly with Sarah on this one. It feels a bit as though Ben’s allowing the best to be the enemy of the good.
    Newspapers aren’t ever going to be journals, and journalists aren’t ever going to be scientists or statisticians. But a newspaper that (a) goes and gets interesting new data, (b) gets it analysed in a responsible way, and (c) makes the data and the method of analysis as available as possible (in this case, by telling you who did the analysis) is doing a pretty damn good job.

    The point is not that Ben conducts independent analysis of the data and criticizes the conclusions. That’s part and parcel. The point is that the journos did this – as Ben says – “exemplary” work, and they’re still getting labeled “dodgy”. If you want to claim academic disputation rights, then carry out your disputation in an academic manner. If you want to have a go at the journalists for being muppets – as often happens – then call them dodgy. But Ben seems to have combined the two here in a way that I find a bit unappetising.

  32. quaoar said,

    July 12, 2010 at 12:51 am

    As a survivor, in the US, of planned AAA surgery, I am suspicious of your stats on “emergency AAA surgery”. The abdominal aorta is the largest oxygenated blood vessel in the body. If an abdominal aortic aneurysm ruptures to even a small extent, the subject bleeds out very rapidly. The stats in the US, so far as I can find, state that less than 5% of “emergency AAA surgeries” are survivable, because the body bleeds out so rapidly that survival is definitely not assured due to time constraints. Anyone touting a 30% survival rate for “emergency AAA surgery” is smoking something I want to smoke!

  33. Guy said,

    July 12, 2010 at 10:08 am

    Quaoar, I think there is a confusion of terms here. To you, ‘emergency’ means someone collapsing in a shop with abdominal pain and a bleeding AAA = very low survival rate.
    The figures quoted are for unplanned operations. ‘Planned’ is where I get my AAA scanned for a number of years: it starts at 5cm, then 6cm, and when it gets to 7cm they plan an op – get me in and mend it.
    The ‘emergencies’ are the ones where I start to get some symptoms from it beginning to dissect, eg back pain, and they know I’ve got an AAA. They then get me in quickly and do the operation that same day.
    This is how I understand it, and I hope the info helps.
    Guy
    ps – glad to see you survived.

  34. fifecircle said,

    July 12, 2010 at 1:38 pm

    Rather than “damning with faint praise”, I think Ben was guilty of “complimenting with harsh criticism”.

    We all like to think our work is perfect – especially when that work is the criticism of others.

    I am sure there are a number of surgeons shouting “I do my best. Do you think I am trying to kill the patients? I work in a small hospital, and do a whole variety of operations. You’re using industrial-grade statistics to prove what exactly?” Sauce for the goose is sauce for the gander.

    This was a piece of investigative journalism carried out in a scientific manner rather than a science piece.

    To me Dr Goldacre’s piece is a fairly constructive piece of criticism (both pro-and-con) of a useful and relatively unusual piece.

    The statement that the article was “dodgy” is a bit like blaming the journalist for an over-ambitious headline (and I believe sub-eds write those anyway).

    I think a bit of lively debate helps make the whole thing a bit more interesting. If science in newspapers is only based on rehashes of journal papers even I would be bored witless.

    Every day I seem to read journalists who are struggling with difficult science, or trying to dumb down stories about science which they obviously understand themselves. This was a refreshing change from both camps.

    Hopefully all this won’t put Dr Goldacre off posting when out and about. I suspect, judging by the terseness, laissez-faire tone and poor capitalisation, that it was posted from a phone in the middle of a field!

  35. quietstorm said,

    July 14, 2010 at 6:35 pm

    Sometimes I wonder whether this may simply be a vocabulary issue. I genuinely did not read anything in the article which suggested that the journalist(s) involved was (were) at fault, simply that things could be done better in these kinds of instances. I didn’t interpret the sentence “The journalists were also unable to tell me how to find the pattern” as a criticism, merely a statement of fact: I wouldn’t necessarily expect a journalist to be able to tell me how some analysis was done; I would have assumed that they had asked someone else to do it. My summary of the entire piece would be “isn’t it fantastic that we now have access to some of this data to see for ourselves, but it turns out that the analysis is often complicated”, along with “doctors/surgeons/hospitals need to keep more and better data”. But then, it is possible to read criticism into any suggestion, especially when it’s your own work in question (it’s only lately that the peer-review process has become less of an emotional rollercoaster for me!).

    I’ve always seen the online Guardian material as the best step so far towards the kind of journalism I want access to: the supplementary material for an article gives me much more information should I wish to find out about the subject for myself. The responses and comments can then refine what is added and whether something needs to be changed. Some academic journals could learn from this – the comment/reply format often takes far too long, and the “supplementary material” options in many journals are not used fully.

    As far as I’m concerned, the Guardian are streets ahead in this kind of online publishing, and I hope they will eventually take these kinds of suggestions on board!

  36. Guy Chapman said,

    July 15, 2010 at 5:02 pm

    Interesting point about the availability of source data. There is a very widely cited study on cycle helmets by Thompson, Rivara and Thompson; it is the source of the 85%/88% figures often quoted by helmet advocates (which are junk, but that’s another and much longer story).

    The authors once released their source data to a statistician – I believe Dorothy Robinson – who examined the figures and found that the protective effect was almost equally visible in reduced leg injuries (I think the figure was 79%, which is about the same as the authors’ uncorrected figure for head injuries). Since this was made public, the authors have never, as far as I can tell, released their source data again for any of their work. This is important because their work dominates the Cochrane review on the subject (which, incidentally, they wrote, with that data set included twice).

    More people should release source data. Show your working, people!