Pay to play?

February 14th, 2009 by Ben Goldacre in competing interests, MMR, regulating research

Ben Goldacre
Saturday February 14 2009
The Guardian

This column is about tainted medical research, not MMR. Now don’t get me wrong: it’s still an interesting week to be right about vaccines. On Sunday, Brian Deer at the Times claimed that the medical cases in Andrew Wakefield’s 1998 paper were altered before publication. The measles figures came out: they’re up by 2000% over the last 7 years, and rising exponentially. On Friday, the Autism Omnibus court hearing in the US – a massive two year case involving 5,000 children – ruled there is no evidence for MMR causing autism (nor for the mercury preservative thimerosal). On Thursday a paper was published describing a measles outbreak among a group of anti-vaxxers in France, with one death, and the cases are spreading. And the radio station LBC behaved so abysmally over one irresponsible broadcast – you can read the gory details at – that 140 blogs covered the story, and there is even an Early Day Motion about it in parliament (number 754, contact your MP through and ask her to sign it).

There is no reason to believe that MMR causes autism. The anti-vaccine campaigners will continue to mislead, stifle, or even smear.

But it’s important to keep your head, and not be polarised by the other side’s foolishness: because there are plenty of genuine problems in vaccine research, even if the campaigners have focused on a bad – and perhaps simplistic – example.

The British Medical Journal this week publishes a complex study which is quietly one of the most subversive pieces of research ever printed. It analyses every study ever done on the influenza vaccine – although it’s reasonable to assume that its results might hold for other subject areas – looking at whether funding source affected the quality of a study, the accuracy of its summary, and the eminence of the journal in which it was published.

Now, in my utopian universe, it wouldn’t matter where a piece of research was published, or how bad its summary was, because everybody would read everything with a joyful and attentive anality. Back in the real world, it has been estimated that each month, journals publish over 7,000 items – studies, letters, and editorials – relevant to GP care, as just one example. New research is important, but to read and critically appraise all this material would take physicians trained in epidemiology over 600 hours: there are only 720 hours in a 30 day month. So inevitably people will read only the summaries – the “take home message” – and only the bigger journals.
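The arithmetic behind that estimate can be checked in a few lines (a sketch only: the constants are the figures quoted above, and the variable names are mine):

```python
# Estimated monthly reading burden for a GP keeping up with the literature.
# Figures from the column: ~7,000 relevant items/month, ~600 hours to appraise them.
ITEMS_PER_MONTH = 7_000
APPRAISAL_HOURS = 600
HOURS_IN_MONTH = 30 * 24  # 720 hours in a 30-day month

minutes_per_item = APPRAISAL_HOURS * 60 / ITEMS_PER_MONTH
fraction_of_month = APPRAISAL_HOURS / HOURS_IN_MONTH

print(f"~{minutes_per_item:.1f} minutes of appraisal per item")
print(f"{fraction_of_month:.0%} of every hour in the month, waking or not")
```

Even at roughly five minutes per item, keeping up would consume more than four fifths of all the hours in the month, which is why summaries and big-name journals end up doing the filtering.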

So what did our new study find? We already know that industry funded studies are more likely to give a positive result for the sponsor’s drug, and in this case too, government funded studies were less likely to have conclusions favouring the vaccines. We already know that poorer quality studies are more likely to produce positive results – for drugs, for homeopathy, for anything – and 70% of the studies they reviewed were of poor quality. And it has also already been shown, in various reviews, that industry funded studies are more likely to overstate their results in their conclusions.

But Tom Jefferson and colleagues looked, for the first time, at where studies are published. Academics measure the eminence of a journal, rightly or wrongly, by its “impact factor”: an indicator of how commonly, on average, research papers in that journal go on to be referred to, by other research papers elsewhere. The average journal impact factor for the 92 government funded studies was 3.74; for the 52 studies wholly or partly funded by industry, the average impact factor was 8.78. Studies funded by the pharmaceutical industry are massively more likely to get into the bigger, more respected journals.
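The size of that gap is worth making explicit. A minimal sketch, using only the two averages quoted above (the per-study data are in the BMJ paper, not this column):

```python
# Average journal impact factors reported by Jefferson and colleagues:
# 92 government-funded studies vs 52 wholly or partly industry-funded studies.
govt_n, govt_mean_if = 92, 3.74
industry_n, industry_mean_if = 52, 8.78

ratio = industry_mean_if / govt_mean_if
print(f"Industry-funded studies landed in journals with "
      f"{ratio:.1f}x the average impact factor of government-funded ones")
```

That is more than a twofold difference in journal eminence, with no corresponding difference in quality, rigour, or sample size.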

That’s interesting: because there is no explanation for it. There was no difference in methodological rigour, or quality, between the government-funded research, and the industry-funded research. There was no difference in the size of the samples used in the studies. And there’s no difference in where people submit their articles: everybody wants to get into a big famous journal, and everybody tries their arm at it.

An unkind commentator, of course, might suggest one reason why industry trials are more successful with their submissions. Journals are businesses, run by huge international corporations, and they rely on advertising revenue from industry, but also on the phenomenal profits generated by selling glossy “reprints” of studies, and nicely presented translations, which drug reps around the world can then use. Anyone who thought this was an unkind suggestion might need to come up with an alternative explanation for the observed data.

This study is a fascinating example of the academic community turning in on itself, and using the tools of statistics and quantitative analysis to identify a nasty political and cultural problem, and give legs to a hunch. This could and should be done more, in all fields of human conduct.

But the greater tragedy is that the problem Jefferson and colleagues have revealed could easily be fixed. In an ideal world, all drugs research would be commercially separate from manufacturing and retail, and all journals would be open and free. But until then, since academics are obliged to declare all significant drug company funding on all academic articles, it might not be too much to ask that once a year, since their decisions are so hugely influential, all editors and publishers should post all their sources of income, and all the money related to the running of their journal. Because at the moment, the funny thing is, we just don’t know how they work. Remember the Early Day Motion.

Please send your bad science to

If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!

46 Responses

  1. DrRachie said,

    February 14, 2009 at 1:30 am

    I think you’ll find it’s a bit more complicated than that..

You say that pharma advertise in big journals and this may influence the editor’s decision to publish? Whilst I can’t dispute that, you seem to have skirted around the peer review process. I know that the editor is the first stop for manuscripts and makes the decision whether they are sent for peer review, but after that stage the fate is with the reviewers. I review lots of papers – I don’t get paid by pharma. So what reason would we have to accept research on this basis?

    There are lots of factors that likely contribute to this publication bias. Reputation and publication record of the authors can perpetuate more publications in better journals. And perhaps having the stamp of a big pharma on the manuscript, as opposed to some obscure grant inadvertently lends credibility in the mind of the reviewer?

I don’t disagree with what you have said but there are likely many more factors than conflict of interest of the editors/journals.

I’m glad this has been brought to our attention. Might make me stop and think when reviewing my next paper.

Happy Valentine’s Day!

  2. The Biologista said,

    February 14, 2009 at 1:58 am

    Well of course it’s a bit more complicated than that… the building of reputation you suggest may well have (part of) its roots in that editorial bias. If reviewers see certain groups more often, that familiarity alone will lend some credibility. Well, unless the studies are trash, but how often do Big Pharma (sigh) submit anything other than their best 20% of potential papers?

    Whatever way this is working, there’s clearly some, perhaps inadvertent, naughtiness afoot.

  3. The Biologista said,

    February 14, 2009 at 2:03 am

Also, with less funding comes less of a capacity to be choosy about which studies give birth to a paper and which do not. How many run-of-the-mill university or government funded labs can really afford to *not publish* their work? That luxury is rather exclusively held by paperfactory labs and industry.

  4. alandove said,

    February 14, 2009 at 2:08 am

    It is an interesting study, and I’m glad to see it’s getting some wider coverage. However, I find it hard to believe that reprint charges and ad revenue are the major drivers of this phenomenon. Most of the journal editors I’ve worked with barely comprehend the money side of the operation, and the peer reviewers (who have the real clout) are even less aware of the bottom line.

    Instead, I would point to the little-appreciated issue of writing style: no matter what the quality of the science is, opaque academic prose will always make it less appealing to reviewers, and lucid writing will always make it sexier.

    Industry-funded studies will generally be written better than government-funded studies, because industry sponsors routinely hire professional ghostwriters. When the outstanding work written up by a graduate student hits the inbox next to the pretty good work written up by a full-time professional word jockey, the latter gets the edge.

    That’s just a theory, of course, but I’d bet a beer on it being at least partly right.

  5. henrywilton said,

    February 14, 2009 at 2:16 am

    In an ideal world, all drugs research would be commercially separate from manufacturing and retail, and all journals would be open and free.

    In my field, mathematics, it’s unclear what the for-profit journals provide. Most papers are written, typeset, edited and refereed by academics, all at no cost to the publishers, who then repackage the research and sell it back to university libraries at exorbitant prices.

But mathematicians are fighting back. Elsevier’s Topology is now defunct, and has been superseded by the likes of the not-for-profit Geometry & Topology. Many mathematicians make their preprints freely available at

    I’ve heard it said that mathematicians can run their own journals because we’re all nerds. Well, more precisely, because equations look rubbish in Microsoft Word, so all mathematicians learn to use LaTeX. And then you can do your own typesetting.

    Could this apply to other fields too? What do for-profit journals provide for medicine, say, other than typesetting?

  6. Ben Goldacre said,

    February 14, 2009 at 2:33 am

    i think what’s interesting, alongside the mischief issue, is the finding that the big journals do not publish better studies, and that summaries often misrepresent papers: so people who think they are keeping up by reading the big journals, and the summaries in abstracts, are mistaken.

  7. DrRachie said,

    February 14, 2009 at 3:15 am

I agree with you Ben. This opens up a minefield. Not only can you not trust the abstract but sometimes the discussion as well (see Wakefield, 1998).

    You know better than most how interpretation bias in the discussion can misrepresent the results e.g., drug company sponsored study says our drug works better than placebo, independent study says no etc.

    As you say here, it is however impossible for someone to read all the new research and analyse it critically, including study design, methods, statistics, interpretation and summary. We simply don’t have time.

Fortunately, there are some measures in place such as the Cochranes which go some way to eliminating bias. Of course, peer review is by no means a perfect system. I frequently come across shoddy studies from respected researchers and in my opinion they are published because of who is the last name on the paper. They are criticised less because of whose lab they came from, and not judged solely on their merits.

    Just my opinion.

  8. mikefitzgerald said,

    February 14, 2009 at 7:24 am

One contributing factor is that big pharma companies are better at managing their relationship with editors. The challenge for individual scientists and smaller organisations is developing these skills.

  9. ianw said,

    February 14, 2009 at 8:10 am

    DrRachie: I think you might over-estimate the reviewers’ influence on the process. For example, I recently reviewed a paper (outside the medical field) and absolutely panned it for its dire methodology, but the editor’s decision was ‘Well… but the other review was a little more positive so I think we’ll publish anyway’. On other occasions I’ve expressed a tiny doubt in an otherwise glowing review and the editor has refused the paper. Reviewers provide only the broadest of broad-band filters, in other words. Unless you flat-out copy-and-paste the word ‘Reject’ fifty times, they have a lot of leeway to interpret your comments (or perhaps I’m still stinging from seeing that recent paper accepted!).

  10. Chris Edwards said,

    February 14, 2009 at 8:56 am

    I’d guess that the last-name effect plays a bigger role than journals’ immediate commercial considerations – the ad-get features, such as careers, are all at the back of the major journals. It’s hard to believe, as alandove notes, that journal editors are that bothered about who is advertising.

    Industry funding tends to go to the heavy hitters who are going to be those with the best track record of getting papers into the big journals and treat PloS One etc as dumping grounds – I heard one speech recently where a professor talked about how the team ‘rescued’ one paper from that fate by reframing what they were trying to do and adding an extra experiment. Conversely, the heavy hitters will try to attract industry funding because that brings more money into their lab, which means more research assistants and kit and stuff to publish. And a virtuous, or vicious, circle forms.

    Government funding, being based largely on applications, seems to be more evenly distributed across academia and less focused on impact ratings. The researchers with comparatively little industry funding may simply be too young to have attracted it, not having published enough papers in high-impact journals or aren’t very good at it.

  11. buserian said,

    February 14, 2009 at 9:24 am

    Is there perhaps the opposite effect here? To what extent is the industry better at getting people to read and reference their papers? To what extent are industry studies more interesting (e.g. about a brand new drug vs. additional information about a known drug)? Both would cause higher citation results for the journals they appeared in…

  12. drunkenoaf said,

    February 14, 2009 at 11:02 am

    The reason pharma studies end up in “better” journals than, say, an NIH study, is because of three words: medical communications agencies.

    Also, – a fantastic tool for choosing best journal that’s likely to accept your paper – that’s expensive, and probably not available to your institution, Ben, as it ain’t Pharma that you’re co-authoring on, is it?

  13. AntibodyBoy said,

    February 14, 2009 at 11:17 am

    “We already know that industry funded studies are more likely to give a positive result for the sponsors drug”

Forgive me if I’m being naive, but surely pharma only allow drugs to be used in human studies towards the end of the pipeline, after they’ve been designed to affect a sensible, well-characterised target, tweaked to make them more effective, and tested in vitro and in animal models. In other words, what makes it into a trial that is to be published has a better than 50/50 chance of giving a positive result de facto, simply by virtue of the drug being the result of a design/pre-trial process, doesn’t it?
Anything that ‘doesn’t work’ at an earlier stage would presumably be less likely to give a positive result later on, so doesn’t get taken forward. Wouldn’t we expect the result to be more likely to be positive in any given trial and publication, even without naughtiness creeping in?

  14. Baxter said,

    February 14, 2009 at 11:43 am

    I haven’t read the target article yet so not sure if this was raised in it — but “high-profile” journals with high impact factors often make publication decisions based on novelty. Studies of new drugs with novel mechanisms are more likely to be funded by the company developing them than by NIH/funding councils, who will demand preliminary data before funding projects. Thus the drug companies may be first out of the gate with new agents, getting the higher journals, then once that work is in print, other labs get government funding and start replicating/filling in the blanks — but this is less novel, so gets published in a lower-impact journal even though methodology/quality etc is just as good. Could this be an additional factor?

  15. penglish said,

    February 14, 2009 at 11:48 am

    Wakefield featured in Times today – see and

    “Dr Wakefield claims no responsibility for the fact that one in four
    children still does not receive the recommended two doses of MMR, adding:
    “The reemergence of measles is not the consequence of a hypothesis. We did
    not cause a scare. We responded to parents’ legitimate concerns. They were
    uncertain about the vaccine. We responded to that, as we should have done,
    and did, in a professional and ethical manner. Not to have done so would
    have been negligent.” “

  16. DrRachie said,

    February 14, 2009 at 12:42 pm


    You make some good points. Perhaps I am naive to think that as a reviewer, I have any sway in the process. I confess, I thought the editor had to rely on the reviewers to a large extent since a) they don’t have time to review the papers and b) often it is not their area of expertise. My opinion is it varies with editors and journals.

    I personally have received reviews where two are glowing and the other pans the manuscript. If the paper is accepted with changes, be it minor or major, I still have to address all the concerns. I have never had a manuscript go through without rebutting or addressing all the comments, both good or bad.

    Your point is slightly different I confess, where you (outright?) rejected the paper yet it was still accepted. This seems unusual to me.

    As I said in my previous post, the review process is not perfect – far from it. One of the most obvious issues is the length of time it takes to occur, meaning interesting research is denied exposure. Alternatives to this system are often proposed but to date, a viable alternative remains elusive.

NB: I am speaking from my experience of publishing and reviewing for biology/biochemistry journals, where even those near the top of the field still have an impact factor below 10. My knowledge of how the big guns do their thing is limited.


  17. davebush said,

    February 14, 2009 at 4:09 pm

    “all this material would take physicians trained in epidemiology over 600 hours” – although this might be lot for a single doctor, it doesn’t sound like much of a workload for the NHS as a whole.

    Perhaps you / we ought to be pushing for NICE to perform this and publish a monthly summary.

  18. Ben Goldacre said,

    February 14, 2009 at 5:41 pm

    davebush: yes, yes, yes, yes, and yes. the new NHS Evidence website i think wld be the correct place for this.

  19. mikewhit said,

    February 14, 2009 at 6:33 pm

    NHS Evidence / NICE is all well and good, but there is still “postcode prescribing” of e.g. methylphenidate.

  20. William Levack said,

    February 15, 2009 at 7:45 am

    RE: the finding that the big journals do not publish better studies…

    NICE guidelines and Cochrane Reviews are great, but obviously focus on summaries of the literature on specific hypotheses or issues. Isn’t what we need something like ‘‘ but for academic literature on specific topics: some independent body picking out the best (read here, highest quality) literature of the week from the big AND little journals in particular areas of study for the ease of those of us who don’t have time to plough through all the tripe?

    … how’s my mixed metaphor coming on?

  21. Flamborough said,

    February 15, 2009 at 10:40 am

Perhaps the issue may also be with a difference in publication strategy between Pharma and individual academic researchers, particularly postdocs and PhD students. The former seem more likely to have a well-established format for this, in which they have plenty of experience, involving submission of many papers to the biggest journals in an attempt to publicise results. The latter may have less experience in publication, and a more tentative attitude, submitting papers to lower-ranked journals when they would actually stand a chance of publication in those with a higher impact factor. There are some courses on scientific paper writing at universities for PhD students, but working out which journal to submit to, when to publish results, and how exactly to structure your paper is, in my experience, more a matter of experience, confidence and good supervision, none of which come easily to everyone. Perhaps these effects result in publication bias?

  22. SteveGJ said,

    February 15, 2009 at 11:15 am

    Of course it is important to ensure that commercial entities do ultimately operate in the collective interest of humanity, and studies such as this are important. However, quite apart from this particular issue of access to prestigious journals, this blog refers to the wider area of development of drugs and therapies.

    It is said that the road to hell is paved with good intentions. The following phrase, “In an ideal world, all drugs research would be commercially separate from from manufacturing and retail, and all journals would be open and free” prompts me to ask in whose ideal world? I say ideal, because that implies a complete and utter change to basic economic systems. As followers of Marxist economic theory found, “it ain’t so simple”. As a high-sounding ideal, this is inarguable, but as a practical policy, who knows what the consequences may be, both economic and therapeutic. The economic consequences matter to health too. It is surely no coincidence, that by and large, improvements in human lifespan and health have followed from economic development (at least up to a point).

    I do have a concern that the same evidence based approach to the effectiveness of science is not being carried through in the blog insofar as it covers matters of great socio-economic importance, such as the drug industry. If there is to be an alternative to a largely commercially driven drugs development market, then we’d need to know what it is. We need to know where such a model has been tried. It might just be that the profit-generating model of commercial drugs companies is, flawed as it is, the most effective way of finding new therapies – at least in part, quite apart from other economic factors.

    None of this is to suggest that the drugs industry doesn’t need regulation, that abuses don’t need to be identified and rooted out, and that ways have to be found to finance research that isn’t economically viable yet of considerable human value. We also have to find ways of making therapies available to the poorer people of the world.

    However, if the almost palpable sense that I get from this blog that the profit motive has no ethical place in drug development is a true reading, then it’s appropriate to ask what would replace it, and what is the evidence that an alternative system would work. Health care does not reside in some form of bubble outside of economic forces.

  23. simonk said,

    February 15, 2009 at 6:49 pm

    I don’t understand why peer review isn’t done anonymously, with the author(s) of the paper removed before review. I would also like the methodology to be analysed independently (by a separate body) of the results, summaries and abstracts. I would also go along with your very sensible idea of pre-registering trials. Maybe every journal at the end of a paper should be required to provide the details of a Cochrane review if a relevant one exists. This may help reduce the impact of one rogue study.
    This one is not easy to solve.

  24. matthewthomas said,

    February 15, 2009 at 6:51 pm

I agree that the difference in publication strategy might be a factor. I am a post-doc in an academic research lab, and this week spent an hour explaining to the PhD students what impact factors were. There seems to be little in my university’s PhD training scheme that describes the mechanics of publication other than the bitter tales of those of us who have been there and tried to do that.

Might another factor be involved? Pharma are in the drugs market, by definition. Academic, grant-funded researchers are in many different markets, one of which is drug research, but we are also involved in much more basic research. What is the distribution of impact factors in basic/mechanism-based research between Pharma and government research? I suspect that academic/government/charity research publishes basic research in higher impact journals than Pharma does.

    This ‘flu paper is a fascinating study, but it would be a mistake to extend it to all academic research, or to implicitly ignore research that is immediately relevant in the clinic.


  25. Ennui said,

    February 15, 2009 at 8:54 pm

    Great article. Some of the other comments are very interesting.

    Another reader pointed out that academia doesn’t always concern itself with the same kind of things as the pharmaceutical companies might.

    Academia is much more likely to concern itself with the “basics” of the biochemistry before attempting to speculate on medical interventions.

    By talking to my supervisors in academia, a lot of them seem to resent all the funding & attention which goes into medical sciences before more about the biochemistry is known.

As a geneticist I know said, it was like “trying to fix a car without knowing how the engine works.”

    I agree to a certain extent, but realise that medical treatments are sometimes urgently needed and obviously the large funding is attempting to address this.

So – I guess that maybe the pharmaceutical companies’ research addresses the “hot topic” ultra-medical questions. Whereas if some old professor writes a very modest (though prolific) article on gene regulation which also discusses vaccination potential, it might be overlooked by a big journal.

    Anyone have any thoughts on this?

  26. dadge said,

    February 16, 2009 at 12:51 pm


  27. mikewhit said,

    February 16, 2009 at 3:33 pm

    dadge – is that in vivo or in vitro ?

  28. Craig said,

    February 17, 2009 at 2:31 am

    Flamborough: that’s what came to mind for me as well.

    While I’d love to get a paper into Nature/The Lancet/etc., I’m only “allowed” to submit any given article to one journal at a time (part of the standard submission boilerplate is a commitment not to submit that article to any other journal until the first journal has made a decision on whether they’ll accept it).

    I’m not going to stall a potential publication for six months just on the zillion-to-one longshot of getting it into one of the superjournals unless I’ve found something so spectacularly groundbreaking as to even up the odds.

    Simonk: peer review is done anonymously, at least from the point of view of the reviewers.

  29. jkling said,

    February 17, 2009 at 2:44 am

    I think there’s a very simple explanation for the ‘impact factor.’

    In general, pharmaceutical companies seek out ‘thought leaders’ and leading researchers in the fields related to their products. They do it simply to lend credibility to their claims and research and (no doubt) in hopes of gaining some influence.

    Given that they’re working with leading scientists, it should come as no surprise that the resulting publications have a higher impact factor than the average paper.

  30. jonathon tomlinson said,

    February 17, 2009 at 11:43 am

    Reed Elsevier and the international arms trade.

In 2005 the Lancet published a letter criticising links between the publisher and the international arms trade. There was a brief defensive response from the publisher, but the Lancet refused to publish a critical letter I sent based on my experience of the arms trade in Afghanistan.

The issue of a high profile medical journal continuing to publish articles about global health whilst its publisher made money from the arms trade continued to burn until 2007, when it was forced to sever its links.

Corporations exist primarily to make profits. They have multiple interests and we need to be aware of what they are so we can hold them to account. Just as authors need to list their interests, so should journals and their publishers.

  31. tom-p said,

    February 17, 2009 at 1:20 pm

    mike whit – surely it’s in silico?

  32. NuclearChicken said,

    February 17, 2009 at 1:49 pm

    I have to say that I disagree with those that say journals just repackage their research. I work for a major medical journal, and we work bloody hard to turn what is often impenetrable jargon into something that actually makes sense. This goes far beyond copy editing. Often we virtually rewrite the whole paper, and this is after the author has been asked to make all manner of amendments to cover the gaps and incomplete thinking.

    When it comes to who funds the research, I barely even know and I definitely don’t care.

  33. William Levack said,

    February 17, 2009 at 10:52 pm

I’ve wondered how blind ‘blind’ peer reviewing actually is at times. When peer reviewing, my guess is often that the author is highly likely to be the person cited five-plus times in the Introduction section describing the theoretical underpinnings of the work and/or methodology. 😉


  34. clobbered said,

    February 17, 2009 at 11:04 pm

    Why do the industry papers get published in prestigious journals and the government papers not? I can think of two reasons:

    1. Page charges. Not sure what the situation is in the medical field, but in Physics the big journals have page charges (you have to pay to be published, considerably more if you want colour). Government labs frequently publish in less glamorous journals that have lower or no page charges.

    2. Emphasis. If an industry-funded academic does something, it’s “medical research”. If a government bod does it, it’s “public health”.

    If either of those is true, to come to NuclearChicken’s defense, then the root cause is self-censorship by the authors, rather than a nefarious plot on the part of the journals.

  35. NuclearChicken said,

    February 18, 2009 at 12:39 pm

    clobbered: I’m not sure what the case is for other medical journals, but we don’t charge the authors anything (even for colour 🙂 ).

  36. memotypic said,

    February 18, 2009 at 1:51 pm

    Firstly, I think many lower-rank academics are genuinely afeared of even minor league journals and their editors. Lots of times I’ve tried to help colleagues through reviews and editorial comments that seemed (to them) to be a death knell, when in fact all that was required was a careful and thoughtful rebuttal. Publishing, and science communication more generally, are education issues and should be covered properly at universities rather than being an art picked up by apprenticeship if you’re lucky enough to come through one of the ‘paper factories’ mentioned above (as I did, twice, thank god). Not a major factor wrt this issue I suppose (lots of academics are very good at publishing), but it is true more than it should be. Editors are just people too 🙂

    Why I mention it is that, as several have said, an agency, or an experienced part of a company, that is good at getting through the hoops of publishing, will normally outperform an academic. Academics are like the grand version of a DIY nut — they have to do so much on their own (relatively speaking). Writing, graphic design, promotional work; none of these things = science, yet all are crucial to the way we do it.

I do agree that the pressure on academics makes them into short-termists when it comes to publishing. I really sympathise with the almost Prisoner’s Dilemma-like situation of going for a small certainty over a big gamble (and given how massively oversubscribed most journals are in this age of the ‘smallest publishable unit’, it is an ever-bigger gamble). This is thanks to paper-based assessment like the RAE (which sucks, and no it isn’t a ‘Churchillian raft’, it just sucks period because it undermines itself completely by its very nature — it’s the bad version of the Hawthorne effect if you’re a sociologist or similar). Whereas, as has been said, industry almost doesn’t need to publish (why would they, apart from to beef up their marketing material, or to an extent, to tickle academic collaborators — certainly not to get funding). I kind of get the impression that (apart from the marketing point) it is often done simply to press the happy buttons of in-house scientists, like a civil honour or something, and so that they can go to conferences (part intelligence gathering, part perk). And so their CV doesn’t get buried under dust, in case they want to go back to the public domain (or so that they at least feel that the door isn’t closed, avoiding involuntary career panic). This less-pressured approach does allow them to go for ‘big’ papers in a way that the modern academic cannot afford. Back to the smallest publishable unit. And of course small (rapid) bites = small (easy) journals. Publication fees can also have an effect, but the best groups are far from skint tbh, and anyway _any_ department with two brain cells to rub together is going to find the funds to cover the charges for a high-impact paper that will play well in its next departmental assessment…

    There’s a slightly seedier point about academic involvement in commercial research, which I’m sure is almost always perfectly legit, but as with anything covered by a confidentiality agreement, may be a cause for concern. Certainly there have been cases where all did not work out well (including a US Grand Jury in one unpleasant case, for example). And having a bigwig on a paper stuffed with company peeps doesn’t hurt, especially if they’re a paper factory proprietor, for all the reasons to do with familiarity/visibility rehearsed in prior posts.

Anyway, a gear change to what I thought I was going to post about: I recently attended a (ever-so-slightly-facetiously-imho-titled) Web 3.0 (no less lol — shouldn’t we be worrying about 2.0 still..?) discussion that was hosted at the BL by NPG. All great stuff, fantastic meeting, very pleasing light:heat ratio. The take-home was that scientists should be doing more to promote their work — building their own e-networks (blogging [god do I hate that word lol — almost as cringey as “the ‘brane” eugh] like mad, sharing more, collaborating more, learning more). All uncontroversially positive. The whole ship sails smoother and faster, etc. This of course is slightly disingenuous as many academics guard the one tiny area in which they have a head start on the zillions of potential competitors looking for easy wins, because if they announce their ongoing research (open lab books, for example) then lots of them die. This is essentially an ecological argument: replace a finely subdivided environment with a big single homogeneous open environment and you’ll lose species like an asteroid just hit. Same for this; the fencing-related inefficiency actually keeps academia going to an extent (think evolutionary refugia, local adaptation, allopatry and anything that can be raided from evobio). Otherwise all the little guys get killed by supergroups with the resource to go better and faster (for which they are of course rewarded with yet more resource, which I actually don’t have a big problem with as they are mostly super on account of being super, but science _will_ die without the rest of us plugging away too — pseudo-corporate science is no less pernicious [in its bad incarnation] than corporate anything else). Many scientists are not cute (many are fluffy, but not in a good way). But nonetheless, like various other kinds of ugly but crucial beasties they must be protected for the overall good of this wattle-and-daub system we have more or less blindly evolved.

    Still not made my original point. It is this: My issue at the NPG discussion was that, okay, I now have lots of journals and lots of scientists that I might want to keep track of. Fine. Lots of ways to get summaries, but that simply isn’t enough. Ignoring the most glaring issue of accuracy in the reporting of the reporting of the reporting of something at best half understood by the index reporter (even ‘science reporters’ are often completely clueless — I’ve had some awful experiences with some almost wilfully dumb practitioners of the art of not listening properly or doing any preparation at all). And then there’s the process of simply propagating nuggets without unpacking them, via ‘bookmark this’ links of various kinds, that leads to who-knows-what emergent groupthink (/clusterfcuk) outcomes.

    But ignoring all that, my issue is that there are already soooo many summaries. Most of the larger journals have a highlights section; there are review journals; there are review sites; there are informal lists within communities blah blah blah; and now there are bloggers. The issue is that I need someone to summarise the summaries of the summaries. As with integration of bioinformatics (or similar) tools and databases, the first proposal is usually to build a meta-thing to bring all the other things together (“But it wasn’t a dream. It was a place…”). But of course it won’t be _everyone’s_ meta-thing (US v Europe, group on group, blah blah) because we don’t live in a global dictatorship. So there is always a market for going ‘up’ a level to do the next meta-meta-meta thing. No end in sight. Same for summaries and blogging, except the channels are noisier and the information less pure to start with (because it’s bad enough when it is the real uncompressed data — it only gets worse when people start trying to summarise it — at best, some information loss, at worst…).

So summaries, great. But provenance is a huge issue, as is a kind of Chinese whispers mixed with all sorts of confounding personal biases, language problems, editorial choice, and mixing of opinion with fact in a forum that obscures the difference. I don’t know what the answer is, but I’m pretty sure it isn’t blogging. Blogging is either informal ‘primary’ communication (i.e. an _input_ rather than an output/summary), or simply the worst kind of recycling, now with bonus bias. That’s fine, I like bias — it adds flavour. But I like to understand how to factor it out, and how to tell unsugared fact and obvious fiction from ‘faction’/infotainment. This age of dumbing and sensationalising to appeal more broadly, rather than looking for interesting angles, has its effect on blogs too — what makes people read a blog? Are there really any ‘blogs of record’? (I’m guessing no.) Surely, in the same way that we’re all marketeers on eBay, we’re now all journalists on blogs? (Except we’re neither, but our amateur attempts to ape those professions have their complex and variable effects.)

    An analogy: Google rank isn’t the only measure of a web page’s subjective worth if you see what I mean — i.e., a popular blog is not necessarily an accurate one (and how could we tell anyway when it is a given that we don’t have time to read more widely — this is like trusting that there really is an army of compulsive obsessives ensuring the accuracy of Walesypedia). I’d like to see more effort put into professional, publicly-funded news services (someone mentioned something like this above), built in part on blogs to get the zeitgeist, but also using other sources directly. Sadly, who does the work raises the same old issue. The NHS could definitely fill a need, as could a service sponsored by RCUK, for example. Or the Beeb (though they often struggle with science, resorting to [unconsciously] misrepresenting something so as to make it simple enough for them to understand enough to try to explain for no clear reason except when it is to scare people shitless). We can’t wish away all these diluting journal-shaped career lifeboats, so we need to do something, but I’m not a fan of community solutions when, as here, accuracy is so crucial (if the source is polluted, we’re almost looking at a Muller’s ratchet scenario where eventually the ‘good’ copy may be lost for ever and all descend from the mutant copy — that’s the undeletable internet — and [dare I say] memes — for you).

    That said (and straying off-topic once again), I’m wholly in favour of our floppy-haired Manc-tastic Dr Cox, who is Attenborough-like in his ability to tack close to the wind while keeping the facts in view. More like him pah-leez. Then science might be seen as closer to general interest, and one can maybe dream that being a bit more mainstream means better-resourced news coverage (rather than cheap headlines and distortion to allow science news to cross over into general interest). It may also mean less scaremongering nonsense like the MMR thing can pass (necrotising fasciitis anyone? irradiated food? frankenstein food?) because pop news editors might start to give a shit if they are regularly laughed at for being feckless lightweights failing the public on a daily basis. Kay I’m dreaming now. Byee.

  37. Joe Dunckley said,

    February 18, 2009 at 2:06 pm

    in addition to the above suggestions:

    1. government-funded researchers have more incentive / less inhibition to publish their “negative results”, against which there is much systemic bias, and which therefore usually end up in low-impact journals. This will reduce an “average” score.

    2. some people have argued (excuse my weasel words, i’m too lazy to google for a reference) that industry papers tend to overstate conclusions more than the average paper. this might get caught and corrected by the reviewers in the final version of the paper, but the hype in the early version might have been enough to persuade an editor on a high-impact journal that the paper is worth considering in the first place.

    3. industry can afford to hire professional science/medical writers to assist in the construction of an attractive looking paper. govt-funded academics and doctors only have their overworked students to dump such chores on.

    4. academic editors sit on editorial boards: they might be quite happy to submit to a lower impact journal if they feel some connection to it, such as being on the board.

    i’m sure there are still more reasons than this.

  38. HousePhD said,

    February 19, 2009 at 10:21 am

    I agree with Joe Dunckley that positive vs negative result publication bias is one factor. As for his point 3, however, I’ve seen technical editors in university departments as well.

    There is another factor at work as well, however, when it comes to the “big labs”, the opinion leaders that industry works with. These labs are often far beyond “publish or perish” because they have a high output with high impact anyway. In fact, a lot of minor stuff will likely lag behind and, if at all, be published ages after it has been done, because the priority just isn’t there and there are more important results. In cases, however, where positive results from industry products are involved, there will be pressure, albeit subtle, from the industry to get going and publish these results, since any day the results stay unpublished is a lost day for the company. The good news is only good news if it is heard by others. If it’s not good news, there will be no pressure to publish it and it will stay at the bottom of the big pile on the PI’s desk while more important stuff gets stacked on top of it.

    However, as someone from the diagnostics industry, I’d like to point out that the issue of bias within a publication is much more complex. When we had a lab evaluate a third-party device whose manufacturers wanted us to distribute it, we really had to watch out that the technicians actually reported negative results and didn’t explain them away with “Oh well, I screwed up pipetting that one…” The bias is all too often a conscious or subconscious self-censorship at the front end, based on conceptions of what one thinks the sponsor wants to hear, rather than pressure from behind by industry. In the specific case, we wanted to know if the machine performed as claimed before we attached our good name to it, and we were very thankful to learn that it didn’t before we ruined our reputation for high-quality devices by selling people a shoddy analyzer.

  39. LizJ said,

    February 19, 2009 at 2:12 pm

    Ben, the first two sentences of this article are addressed at your blog audience, not a general newspaper audience. Anyone unfamiliar with your writing would skim them and move on to something else, and that is unfortunate. Before I read your book, and subsequently started following your blog, I was one of those people who regularly skimmed the start of your column and moved on, because I found your writing style hard to follow and it generally felt like I was arriving midway through the conversation. If you care about widening your audience I’d suggest you proof-read your Guardian articles with a “new reader” hat on, or ask somebody else to do it for you. What you have to say is too important to risk losing a single potential reader.

  40. squidfood said,

    February 19, 2009 at 11:17 pm

    For the Bad Science Blog, I’m surprised at the jump from correlation to causation in your article. A simple model would produce the same result without conspiracy:

    1. Journals, like all media, are self-selecting for “exciting” results. People, with limited time, tend first to read journals with “new positive” results, except perhaps within the extremely narrow niche of their own expertise, thus journals emphasizing the positive become the most broadly popular.

    2. For reasons given in previous comments, given limited hours in the day, an academic is biased towards producing the largest number of publications, positive and negative, while a private company researcher is biased towards positive results (biases based on career incentives).

    Net result from the simple model is the one that’s observed, a match in goals and therefore articles between popular journals and private companies – no conspiracy necessary! Just a product of how we filter any information through media sources given limited time.
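
    That two-ingredient model is easy to check numerically. Below is a minimal, hypothetical simulation (every parameter is invented purely for illustration): journals accept “exciting” positive results more readily, and industry shelves most of its negative results before submission, while the journals never look at who funded a paper. A funding/prestige correlation still falls out of the model.

    ```python
    import random

    random.seed(0)

    def simulate(n=10_000):
        """Toy model: no journal favours industry, yet industry's *published*
        papers land in high-impact venues more often. All rates are made up."""
        high_impact = {"industry": 0, "public": 0}
        total = {"industry": 0, "public": 0}
        for _ in range(n):
            funder = random.choice(["industry", "public"])
            positive = random.random() < 0.5  # the study's actual outcome
            # Industry shelves ~80% of negative results; academics submit everything.
            if funder == "industry" and not positive and random.random() < 0.8:
                continue  # never submitted anywhere
            total[funder] += 1
            # Journals prefer positive results, blind to the funder.
            p_big = 0.4 if positive else 0.05
            if random.random() < p_big:
                high_impact[funder] += 1
        # Fraction of each funder's published output in high-impact journals.
        return {f: high_impact[f] / total[f] for f in total}

    rates = simulate()
    print(rates)  # industry's rate comes out well above the public rate
    ```

    So the observed correlation needs neither editorial favouritism nor conspiracy, only the two biases squidfood describes acting in the same direction.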

  41. Joe Dunckley said,

    February 20, 2009 at 1:31 pm

    oh, and I neglected this one, from Jan Velterop:

    it’s not that industry pushes its way into higher-impact journals; it’s that industry pushes its published papers around, giving them higher visibility, and thus attracting more citations to them.

    (I’m not sure I believe it explains the findings of the BMJ paper — impact factors have a four year lag, and I don’t think single articles are going to skew an IF from 3 to 8, but it’s yet another factor to consider…)

  42. DHR said,

    February 21, 2009 at 12:00 am

    Many of the editors are senior members of the medical profession who get bungs and career advice from the pharmaceutical companies.
    You find the same names tend to crop up again and again. They not only push the research papers that give a positive spin on the drug involved, they suppress studies that may have a counterpoint.

  43. clydicus said,

    March 6, 2009 at 5:54 pm

    > Studies funded by the pharmaceutical industry
    > are massively more likely to get into the bigger,
    > more respected journals.

    I have an idea about this and I am curious to hear opinions. The “biggest” journals (NEJM, Lancet, Nature, JAMA) tend to be general medicine journals. These journals will, for example, often turn down a truly spectacular neurology article because it is too narrow a topic, too specific a specialty – the article belongs in a neurology journal, not a general medicine journal.

    General medicine journals are looking for articles that are relevant to common conditions that affect very significant numbers of patients.

    It seems to me that pharma companies are looking for the same thing, if for different reasons.

    Could this explain the correlation to some extent?

  44. calvin said,

    April 6, 2009 at 3:16 pm

    My understanding is that the association between autism and MMR was made by parents of children who had been bright, engaging and developmentally normal, but who came to suffer from a catastrophic reversal of development after the administration of the MMR. The medical establishment are refuting this SPECIFIC association on the basis of a GENERAL study of autism.

    There probably are many parents of autistic children who blame MMR, whose children almost certainly were not affected by the vaccine. The fact remains that I can find no study that specifically investigates the link between regressive autism, the type of autism under suspicion, and MMR. For this reason I feel that the various studies all have a whiff of the “strawman” about them.

    The other point that troubles me is that the focus of the government and medical establishment seems to be on refuting the link between MMR and autism and not on properly investigating the cause of the significant and troubling rise in cases of autism.

    The “diagnostic” explanation of the establishment seems to be fairly unsubstantiated, it’s almost as though these authoritative bodies feel that they merely have to suck a speculative alternative explanation out of their thumbs. Why is the diagnostic theory not subjected to the same exhaustive and repeated critique as the MMR theory? Sounds like academic bias to me.

    With regard to the Danish study, it doesn’t seem to me that the diagnostic criteria changed very much within the timescale of the introduction of the MMR and the mid-nineties rise in cases of autism. How does improved diagnostics explain the rise in autism when no significant change in diagnostics has been shown to have actually been made?
