Ben Goldacre
Saturday June 10, 2006
The Guardian
When I am finally assassinated by an axe-wielding electrosensitive homeopathic anti-vaccine campaigner – and that day surely cannot be far off now – I should like to be remembered, primarily, for my childishness and immaturity. Occasionally, however, I like to write about serious issues. And I don’t just mean the increase in mumps cases from 94 people in 1996 to 43,322 in 2005. No.
One thing we cover regularly in Bad Science is the way that only certain stories get media coverage. Scares about mercury fillings get double page spreads and Panorama documentaries; the follow-up research, suggesting they are safe, is ignored. Unpublished research on the dangers of MMR gets multiple headlines; published research suggesting it is safe somehow gets missed. This all seems quite normal to us now.
Strangely, the very same thing happens in the academic scientific literature, and you catch us right in the middle of doing almost nothing about it. Publication bias is the phenomenon where positive trials are more likely to get published than negative ones, and it can happen for a huge number of reasons, sinister and otherwise.
Major academic journals aren’t falling over themselves to publish studies about new drugs that don’t work. Likewise, researchers get round to writing up ground-breaking discoveries before diligently documenting the bland, negative findings, which sometimes sit forever in that third drawer down in the filing cabinet in the corridor that nobody uses any more. But it gets worse. If you do a trial for a drug company, they might – rarely – resort to the crude tactic of simply making you sit on negative results which they don’t like, and over the past few years there have been numerous systematic reviews showing that studies funded by the pharmaceutical industry are several times more likely to show favourable results than studies funded by independent sources. Most of this discrepancy will be down to cunning study design – asking the right questions for your drug – but some will be owing to Pinochet-style disappearings of unfavourable data.
And of course, just like in the mainstream media, profitable news can be puffed and inflated. Trials are often spread across many locations, so if the results are good, companies can publish different results, from different centres, at different times, in different journals. Suddenly there are lots of positive papers about their drug. Then, sometimes, results from different centres can be combined in different permutations, so the data from a single trial could get published in two different studies, twice over: more good news!
This kind of tomfoolery is hard to spot unless you are looking for it, and if you look hard you find more surprises. An elegant paper reviewing studies of the drug Ondansetron showed not just that patients were double and treble counted; more than that, when this double counting was removed from the data, the apparent efficacy of the drug went down. Apparently the patients who did better were more likely to be double counted. Interesting.
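To make the double-counting arithmetic concrete, here is a minimal Python sketch with invented numbers (not the actual ondansetron data): five trial centres are pooled with an ordinary fixed-effect, inverse-variance weighted average, and then the most flattering centre is quietly counted a second time.

```python
def pooled_estimate(papers):
    """Fixed-effect, inverse-variance weighted average of (effect, standard error) pairs."""
    weights = [1 / se ** 2 for _, se in papers]
    effects = [eff for eff, _ in papers]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Five trial centres: (risk difference in favour of the drug, standard error).
# All numbers are hypothetical, purely for illustration.
centres = [(0.05, 0.04), (0.08, 0.05), (0.02, 0.06), (0.25, 0.05), (0.04, 0.05)]

once = pooled_estimate(centres)

# The best-looking centre is quietly published a second time in another journal,
# so a reviewer with no trial ID to match on counts those patients twice.
twice = pooled_estimate(centres + [(0.25, 0.05)])

print(f"each centre counted once:  {once:.3f}")
print(f"best centre counted twice: {twice:.3f}")
```

The pooled effect drifts upwards simply because the best-looking patients are in the average twice, which is the pattern the ondansetron review describes.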
The first paper describing these shenanigans was in 1959. That’s 15 years before I was born. And there is a very simple and widely accepted solution: a compulsory international trials register. Give every trial a number, so that double counting is too embarrassingly obvious to bother with, so that trials can’t go missing in action, so that researchers can make sure they don’t needlessly duplicate, and much more. It’s not a wildly popular idea with drug companies.
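A unique trial number makes that kind of duplication trivially detectable. Here is a minimal sketch, with made-up registration IDs and journal names, of how a reviewer armed with a register could weed out repeat publications before pooling anything:

```python
# Registration IDs and journal names below are invented for illustration only.
publications = [
    {"journal": "Journal A", "trial_id": "ISRCTN00000001", "effect": 0.25},
    {"journal": "Journal B", "trial_id": "ISRCTN00000001", "effect": 0.25},  # same trial, new wrapper
    {"journal": "Journal C", "trial_id": "ISRCTN00000002", "effect": 0.04},
]

seen = set()
distinct = []
for paper in publications:
    if paper["trial_id"] in seen:
        # The register number exposes the duplicate immediately.
        print(f"{paper['journal']} duplicates trial {paper['trial_id']} - excluded")
        continue
    seen.add(paper["trial_id"])
    distinct.append(paper)

print(f"{len(publications)} papers, {len(distinct)} distinct trials")
```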
Meanwhile the system is such a mess that almost nobody knows exactly what it is. The US has its own register, but only for US trials, and specifically not for clinical trials in the developing world (I leave you to imagine why companies might do their trials in Africa). The EU has a sort of register, but most people aren’t allowed to look at it, for some reason. The Medical Research Council has its own. Some companies have their own. Some research charities do too. The best register is probably Current Controlled Trials, and that’s a completely voluntary one set up by some academics a few years ago. I have a modest prize for the person with the longest list of different clinical trial registers.
And why is this news? Because people have been calling for a compulsory register for 20 years, and this month, after years of consulting, the World Health Organisation proudly announced a voluntary code, and a directory of other people’s directories of clinical trials. If it’s beyond the wit of humankind to make a compulsory register for all published trials, then we truly are lame.
· Please send your bad science to bad.science@guardian.co.uk
Sterling TD. Publication decisions and their possible effects on inferences drawn from tests of significance — or vice versa. J Am Stat Assoc 1959;54:30-34.
Tramer MR, Reynolds DJM, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: a case study. BMJ 1997;315:635-640.
BobP said,
June 10, 2006 at 10:21 am
Ben, a huge crystal energy generator has been aimed at your heart for several weeks now but you didn’t pick up the vibes.
Ben Goldacre said,
June 10, 2006 at 10:23 am
That’s strange, something is penetrating my unconscious, suffusing me with a rich golden light, telling me to write more silly articles.
Ben Goldacre said,
June 10, 2006 at 10:36 am
Sadly, it hasn’t.
www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15877763&query_hl=3&itool=pubmed_docsum
profnick said,
June 10, 2006 at 10:43 am
Ben,
This is a very interesting point which I guess I hadn’t really thought about before. In 25 years of research I accumulated hundreds of files full of experimental data. The vast majority were negative results and, given that they weren’t the results of drug trials, it’s not surprising to me that no journal would consider publishing my “failed” experiments, so it never crossed my mind to even try. Thus all my published papers were “interesting” (at least to someone, one hopes) results which disproved a null hypothesis. So to get back to your point, in every research field there must be far more unpublished data than published. Maybe quite a lot of interesting stuff is hidden in all that “noise”?
Ben Goldacre said,
June 10, 2006 at 10:46 am
profnick: i’m sure that’s true, and online low threshold publication routes are useful, although they still demand that the authors write up the methodology and put a fair bit of work in.
Bob O'H said,
June 10, 2006 at 10:46 am
Some of us are trying to do something about publication bias. There are now several journals dedicated to providing an outlet for negative results:
i) Journal of Negative Results in Biomedicine (www.jnrbm.com/)
ii) Journal of Negative Results in Speech and Audio Sciences (journal.speech.cs.cmu.edu/)
iii) Forum for Negative Results [computer science] (page.mi.fu-berlin.de/~prechelt/fnr/)
iv) Journal of Negative Observations in Genetic Oncology (www.path.jhu.edu/NOGO/)
v) Journal of Articles in Support of the Null Hypothesis (www.jasnh.com/)
vi) The Journal of Spurious Correlations (www.jspurc.org/main2.htm)
vii) Journal of Negative Results – EEB (www.jnr-eeb.org)
(Statement of conflicting interests: I’m involved with no. vii)
I think from the academic side, things will only change if the value of publishing negative results is raised: an obvious way to do this is to submit to these journals, and help them raise their profiles.
This won’t solve the problem of companies not publishing, of course, but a change in atmosphere towards negative results might help.
Bob
BobP said,
June 10, 2006 at 11:21 am
Ben –
The Office for National Statistics disagrees with your figure of 94 cases in 1996.
See –
www.statistics.gov.uk/STATBASE/Expodata/Spreadsheets/D3983.xls
I guess you are referring to the Health Protection Agency database here?
www.hpa.org.uk/infections/topics_az/measles/data_mmr_confirmed.htm
Ben Goldacre said,
June 10, 2006 at 11:40 am
Bob: no.
The first of your links is for doctors’ notifications where they suspect someone might have mumps; that’s a completely different bit of data.
The second link is for confirmed cases of mumps, and that is the appropriate figure to quote as “mumps cases”.
Robert Carnegie said,
June 10, 2006 at 11:52 am
On trials, simply registering trials may not be sufficient. Let’s assume for the sake of argument that all drugs companies will do their level best to make their trial results look favourable, and running multiple trials on the same drug is both an effective way to do this, and, in fact, necessary good practice. Well, certainly there are well conducted trials and poorly conducted trials; that’s inevitable, too, apparently. So if a trial is not producing the result that the paymasters want, then they can probably arrange to shift it into the “poorly conducted” category, introduce obvious mistakes, get the guy running the experiment hooked on drink or drugs…
On the other hand… obviously if it’s possible to estimate beforehand whether a trial will produce a favourable result or not, then drugs companies will prefer to fund the favourable ones, and that isn’t particularly sinister, only commercial – they are in the business of taking our money, principally, it’s what they get up in the morning for. Keeping us alive is incidental.
Anyway… given that sponsored trials evidently are only part of the problem, how to solve the rest of it? Well, every university scientist wants to be published, whether the results are positive or negative. If you are published then you get tangible benefits. I worked as a computer technician in the Faculty of Law at Glasgow University and they asked me if I could see my way to publishing anything.
So it’s the journals that you need to work on – to establish a protocol so that they review and publish the less interesting work. Perhaps when a journal publishes something exciting, they should always take on responsibility for publishing the less exciting followups? That possibly would just create another bias, but maybe a less harmful one?
Bob O'H said,
June 10, 2006 at 11:59 am
John A:
The problem with self-publication is that it is not permanent, and is also not peer reviewed. In other words, you could put any old rubbish up. Whilst peer review is far from perfect, it does provide some quality control.
On the free release of data, as a statistician I think that’s a great idea in principle: it means I could use other people’s data to find out some really boring stuff. There are going to be problems with people being possessive: I’ve been looking at data sets that have taken a couple of decades to collect, and I appreciate that it would be difficult for my collaborators to put these up on the web and see them used to do what they’ve been wanting to do but have never had the time for. It’s difficult, and I’m not sure what the solution is. Perhaps if enough people put their data up on the web, it’ll become the norm.
In summary: bloody social realities get in the way again.
Bob
Ben Goldacre said,
June 10, 2006 at 12:00 pm
Other useful interventions would be to have negative result publications:
# listed on CVs,
# considered in academic job applications, and
# counted, perhaps specially and separately, in RAEs.
John A said,
June 10, 2006 at 4:41 pm
Bob: “In other words, you could put any old rubbish up.”
True. But speaking specifically about negative results, I’m not sure how much of a problem a lack of peer review is.
More generally, one way around the permanency/peer-review problem is to have a central site where all researchers submit their work and where comments can be left. This would allow researchers to discuss issues about the research in the same place it is published. Some sort of wiki setup may be useful. In my mind this would be a good place for people to make points the reviewers may have missed – I’m sure we’ve all read papers with glaring holes that got past review…
Delster said,
June 10, 2006 at 5:18 pm
One additional advantage to publishing negative results would be a reduction in time spent re-doing that which has already been done. After all, if somebody has an idea for a research project it would be nice to be able to find out if it’s already been tried.
Obviously if a new technique / drug / approach is devised then repeating the trials could be worth it.
Teek said,
June 11, 2006 at 11:00 am
Delster: “one additional advantage to publishing negative results would be a reduction in time spent re-doing that which has already been done. After all if somebody has an idea for a research project it would be nice to be able to find out if it’s already been tried.”
Very, very good point. From my own (perhaps limited) experience, ‘negative results’ often arise because of some biological reason, i.e. compound/protein/molecule X doesn’t have a chance in hell of treating condition Y because substance Z stops it from doing so, something that maybe laboratory A has already found out but lab B spends 10k and six months finding out for itself. If lab A publishes its findings, even though they’re kind of negative (i.e. X does NOT treat Y), B wouldn’t have gone to the effort.
This brings us onto whether there are two types of negative results – Type I, where for instance a drug/treatment/process has no therapeutic effect where it was expected to have one (like my convoluted example above), and Type II, where instead of (or as well as) a therapeutic effect, the drug/treatment/whatever produces negative consequences and/or deleterious side effects. Currently journals do publish Type II negative results, because they are useful in warning others of unwanted consequences of potential drugs etc.
Type I needs more attention, as most of us here seem to be saying – perhaps in the kind of journals that specifically set out to publish negative (or rather ‘not positive’) results, like those in Bob O’H’s list.
b said,
June 11, 2006 at 4:59 pm
Once a positive result is printed, null results about the same hypothesis are no longer uninteresting. At the least, they are sensitivity tests that indicate that minor changes in test design will reverse the results, and in some cases they can catch dumb one-in-a-hundred luck with the p-values.
In a healthy system, twenty people do a study, and only one of them finds that the variable in question is significant at p<0.05. That one publishes, then the other nineteen publish articles building upon the pioneering work of the guy who got lucky, then a metastudy is written finding that the preponderance of evidence indicates against rejecting the null. Since you’re no stranger to the metastudies, I imagine you could give a few examples of a story like this yourself.
It’s a slow and potentially inefficient system, and the positive result always gets to go first, and people who don’t know how to work the “show works that cite this article” button on their lit database think that the positive result is unrefuted, but in the long run it seems to generally work OK.
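For anyone who wants to check the arithmetic in the “twenty people do a study” scenario above, here is a quick Python sketch. It uses simulated noise and a crude z-style test rather than a proper t-test, so it is only a rough illustration: with a true null and 20 independent studies each tested at p < 0.05, the chance that at least one comes out “significant” is roughly 1 − 0.95^20, about 64%.

```python
import random
import statistics

def fake_study(n=30):
    """One two-group study drawn entirely from the same distribution (the null is true)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 2.0  # roughly "significant at p < 0.05", two-sided; crude but fine for a sketch

random.seed(1)
batches = 2000
hits = sum(any(fake_study() for _ in range(20)) for _ in range(batches))
print(f"analytic:  {1 - 0.95 ** 20:.2f}")
print(f"simulated: {hits / batches:.2f}  (batches of 20 studies with at least one false positive)")
```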
FlammableFlower said,
June 11, 2006 at 9:20 pm
Negative results would be bloody helpful to researchers. As a synthetic organic chemist you spend a lot of time trying to make X from Y, and even though it looks easy on paper, it goes and sticks two fingers up at you. Occasionally you see an angry footnote or offhand comment in an article. To see a paper along the lines of “we tried this, it didn’t work, so if you want to do the same, think of another way…” is very rare. I did like one bloke’s take on this problem:
jun.lemonie.net/JUnChem2001.html
Articles such as “Great Results: How to Achieve Them With Statistics.”
I even managed to get something published….
stephenh said,
June 12, 2006 at 8:36 am
There’s an interesting article on this topic at:
Public Library of Science
Bookfeller said,
June 12, 2006 at 1:00 pm
Please don’t deride groups for failing to get their act together. To say that if we can’t set up a single register of clinical trials then “we truly are lame” is to imply that social and political problems are “easy-peasy”, so failure in these fields should be a matter for shame.
But in the real world, politics is not easy. It is actually much more difficult than rocket science. Scientists often make the mistake of thinking that because science is technically difficult, all other problems are relatively easy. With a doctorate in physics I used to make that mistake myself. It made me feel ever so clever to assume that people who couldn’t solve political problems were stupid jerks. Then I started trying to work on such problems and found they really aren’t at all easy.
I enjoy the “bad science” column, and I think some corrective to the sloppy use of science in advertising and journalism is very much needed. However, I have a sneaking worry that the impact on non-scientists may be negative. It probably makes them feel they truly are lame, and that’s not a nice feeling.
Can we support people who want to think straight without making them feel lame?
Agema said,
June 12, 2006 at 1:17 pm
I’d expect a homeopathist would need an axe. It’s not like they’d be much use as a poisoner with their dilutions.
The current system of science needs publications, and publications in good journals, to justify research grants, with no small pressure involved. Nor is it a trivial job to write a paper. To produce one that has minimal impact in the Journal of Near-Irrelevance is not a good use of the effort that paper writing can involve. There’s also a certain amount of ego: no one likes to admit they went through 20 different things to find something which didn’t work. It’s close to saying: “Sorry, I’m s**t, and here’s a record of my failure.”
It’s not like the empirical truth is not emerging. It’s more like what you see is the refined truth rather than the crude mass of data. And I agree science would be better if negative results could be readily presented; it would probably save lots of waste, and give you far more to show that you have actually spent the last 3 years working. But under the current system, don’t hold your breath.
Bender Blog » Blog Archive » Quem distorce suas pesquisas médicas? said,
June 12, 2006 at 3:03 pm
[…] The story is a repetitive one: “research into [anything at all] reveals that [substance produced by a big drug company] can cure [serious chronic disease]”, and the entire international press (including Fantástico) reports it. Weeks later another study contradicts the first and academia moves on, but the negative findings never get the same prominence. So whose fault is it, asks Ben Goldacre? Posted by Bender. Filed in: Real world, Comments […]
Raging Potato said,
June 13, 2006 at 5:59 am
With regard to constructing online databases of negative experimental data, Cornell University has established a browsable, searchable database of real experimental data. Users can search, analyse, download and submit data from neurophysiological experiments. The idea is that somebody else might be able to use your data to examine a hypothesis, saving them the expense and time of actually doing the experiment themselves, and thus conforming to two of the three Rs of ethical experimentation with animals (reduction, refinement, replacement).
It’s still at the developmental stage, and I haven’t got it to work for me, but the principle is sound; often (particularly in neurophysiology) the difficult part is actually getting the data. There are many times in which it would be relatively simple to extract useful information from experiments that examined something else.
If you’re interested, it’s at neurodatabase.org (I read about it in Nature Neuroscience).
Pip pip
Delster said,
June 13, 2006 at 11:26 am
I think what would be better than a database of negative results would be a database of all results.
i.e. Say I have the idea of checking whether controlled ragweed pollen extract exposure would actually reduce allergy symptoms. I can then go away and check the database using those as keywords.
If this is done I would find that someone has actually done this, but using grass pollen as opposed to ragweed. (I know… I was one of the guinea pigs 🙂 )
I could then study their results to see how their study was designed, and hence design mine to cover any gaps I might feel warranted attention, or pursue their results in another direction.
This way research would actually complement prior studies and build upon results rather than being another stand alone project.
Think how this would apply in the world of cancer research, where you have lord knows how many individual charities supporting large numbers of studies into various cancer types. I’d be willing to bet that 50% of the money that actually gets into research is wasted on duplicated effort.
Raging Potato said,
June 14, 2006 at 3:29 am
Delster
There is a database of experimental results; it’s called pubmed.com
The kind of information you desire is contained within the normal scientific literature; you go to PubMed, type in your keywords (ragweed AND allergy) and you get links to the relevant publications. Release of the ‘raw data’ from this kind of study would not be terribly beneficial to the scientific community, because the interpretation of such work is very straightforward; you expose the patient and measure the response (i.e. bronchial constriction / wheal and flare / subjective itchiness scores or whatever). There’s not much new data that can be gleaned from such a database (as opposed to reanalysis of old data).
On the other hand, more technical studies (i.e. recording the behaviour of parts of the brain in response to a stimulus or whatever) contain vast amounts of information, normally as a result of simultaneous recordings of various parameters (e.g. heart rate + blood pressure + nerve activity). The results from a particular study may focus on one subset of the data (i.e. what happens to nerve activity in the 30s after you do procedure A), when such a subset only represents perhaps 10% of the whole dataset. In these instances, there may be other researchers interested in the relationship between one parameter and another at basal levels (e.g. is nerve activity modulated by heart rate?). Public access to such results should be encouraged and could be greatly beneficial.
I’d be willing to bet that much less than 50% of the money that gets into research is wasted on duplicated effort; I know this because I’m a researcher and I’ve had to run the grant gauntlet. Research that tries to reproduce previous results is simply not funded. There are more important things to fund, and not enough money. The people that decide where the cash goes are experts in the field, recruited by the grant-funding bodies for said expertise. Only about 10-15% of applications are successful, and the people that write the successful grants are the people with the track record of high-impact novel findings (with the odd exception). That’s not to say that nobody tries to replicate previous findings – existing models are frequently used to develop new techniques, and sometimes contentious findings need to be reviewed by other laboratories before they’re accepted as fact. This is definitely a minority, however.
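As a toy illustration of the kind of secondary analysis described in the comment above, here is a minimal Python sketch on simulated stand-in data (not real neurophysiology recordings): two channels sampled simultaneously in a shared dataset, and a question the original paper never asked, namely whether they covary at rest.

```python
import random
import statistics

random.seed(0)

# Pretend shared dataset: two channels recorded simultaneously at rest.
# Values are simulated stand-ins, not real data.
heart_rate = [70 + random.gauss(0, 5) for _ in range(500)]
nerve_activity = [0.4 * hr + random.gauss(0, 3) for hr in heart_rate]

# statistics.correlation (Python 3.10+) computes an ordinary Pearson r.
r = statistics.correlation(heart_rate, nerve_activity)
print(f"correlation between the two channels at rest: r = {r:.2f}")
```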
Delster said,
June 14, 2006 at 10:01 am
I know that PubMed contains this kind of information on trials. However, it does not contain information on all trials conducted, only the ones that have been published, and probably not all of those.
What we’re talking about is combining both the positive-result trials and the negative-result trials into a database that would provide a better way of cross-checking for previous investigations into whatever subject.
Even if only the authors, paper or trial name and keywords were entered into this database, it would be an excellent research tool.
To give an example: say I’ve done a trial and written it up etc., but the trial had a negative result. I may well choose not to publish to the general science community but just pass my results along to those paying, or to whoever had to be reported to. What I could then do in this instance is enter the result into the hypothetical database.
If somebody then requested the results based upon their search, I could direct them to the study results or send along the abstract etc.
As for your statement that the results of a study of this kind would not be advantageous, I’d have to disagree.
The study in question was not simple exposure to pollen. What was done was that an extract of the pollen was made and administered to the patients on a daily basis over a period of 18 months. This was done through a solution held under the tongue for 60 seconds and then swallowed. Unfortunately I was only able to do six months of this trial due to severe injuries, and I don’t know what the end results were.
Personal experience seems to indicate a good reduction in symptoms for around 4-5 years from my partial treatment (anecdotal, I know). However, this study would have produced data showing either a positive result (which may indicate a good delivery method to further researchers and encourage further trials with other pollen types) or a negative one suggesting the opposite.
Either way, more knowledge = good, less = bad
Robert Carnegie said,
June 17, 2006 at 11:30 pm
Agema: I think the point is not that research may be done badly and still should be published, nor that a scientist could lose face by publishing or releasing a series of scientifically valid but commercially unrewarding results. If you’re responsible for picking the experiments that you do, and all that you add to scientific knowledge is “That doesn’t work, and that doesn’t work”, it isn’t much. But you may be employed to carry out exactly those experiments. Perhaps that in itself is not a position of prestige…
Questions so far « Ethics of Medical Journalism: Tom’s PhD diary said,
October 18, 2006 at 3:25 pm
[…] 10. Does freely available, peer-reviewed scientific literature – which should in theory remove concerns about conflicts of interest on the part of the writers – confuse journalists more used to hunting out hidden agendas? A common theme appears to be “this researcher has in the past received funding from Drug Company A; therefore we should distrust his research purporting to show the efficacy of Drug Company A’s new product”. Is this unfair or is there reason to doubt it? Goldacre: www.badscience.net/?p=251 ”…over the past few years there have been numerous systematic reviews showing that studies funded by the pharmaceutical industry are several times more likely to show favourable results than studies funded by independent sources”. Sinister? […]
Blog Atopowy » Blog Archive » Koniec homeopatii? said,
November 23, 2007 at 5:13 am
[…] a mass of positive, though untrue, results? Because there is such a thing as “publication bias”. In all fields of science, positive results have a much greater chance of being […]
Pushing Pixels » Blog Archive » re: depression said,
January 28, 2008 at 6:22 pm
[…] a cracking new analysis of the “publication bias” in the literature, a group of academics this week published a paper in the New England […]
Pulling the Curtain Back from Scientific Publishing - said,
June 6, 2013 at 1:44 am
[…] you probably won’t be bothered with reading these boring results. Yes, we joyfully embrace publication bias. But in science, knowing something isn’t, or that it is routinely, are really valuable things to […]