Ben Goldacre, Saturday 3 October 2009, The Guardian.
There are some very obvious problems that never seem to go away. Right now I can see 1,592 articles on Google News about one poor girl who died unexpectedly after receiving the cervical vaccine, and only 363 explaining that the post mortem found a massive and previously undiagnosed tumour in her chest. Meanwhile the Daily Mail this week continue their oncological ontology project with the magnificent headline: “Daily dose of housework could cut risk of breast cancer”.
But while the media wound themselves into an emotive frenzy of elaborate conspiracy theories, killer vaccines and industry cover-ups, the real death action was to be found hidden away in bland, dry data. This month the Journal of the American Medical Association quietly published one of the most damning papers to have appeared all year.
We have known for decades that academic publishing faces two serious problems. One is that trials often go missing in action: a drug company might do eight trials of a drug, say, but only two come out with a positive result. So those two positive trials will appear in an academic journal, while the remaining six, with negative results, quietly disappear. Bizarrely, regulatory bodies like the FDA get to see this negative data, but often enough doctors do not.
This is a familiar problem, and a murderous one, because overall the results of all 8 trials combined might show that the treatment is ineffective: in the absence of this full information, people are subjected unnecessarily to side effects, and deprived of other more effective treatments.
On top of that, we also know that researchers can mischievously change their stated goal, or “primary outcome”, after their trial has finished. You might do a trial on a blood pressure pill, for example, stating that you will look to see if it can reduce heart attacks, but find at the end that it doesn’t. Then you might retrospectively change the whole purpose of your study, ignore the heart attacks, pretend it was only ever about blood pressure, and glowingly report a reduction in blood pressure as if this was the primary outcome you were always interested in. Or you might measure so many different things that some of them will show up as positive changes simply by chance.
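To see how easily that happens, here is a rough simulation (purely illustrative, and nothing to do with any specific trial): a drug with no effect whatsoever, twenty outcomes measured per trial, and a check on how often at least one outcome comes out “significant” at p < 0.05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulated_trials = 1000   # simulated trials of a drug with no real effect at all
n_outcomes = 20             # different things measured in each trial
n_patients = 50             # patients per arm

trials_with_a_false_positive = 0
for _ in range(n_simulated_trials):
    smallest_p = 1.0
    for _ in range(n_outcomes):
        drug = rng.normal(0, 1, n_patients)      # no true difference from placebo
        placebo = rng.normal(0, 1, n_patients)
        _, p = stats.ttest_ind(drug, placebo)
        smallest_p = min(smallest_p, p)
    if smallest_p < 0.05:
        trials_with_a_false_positive += 1

print(f"{trials_with_a_false_positive / n_simulated_trials:.0%} of simulated trials show at least one 'positive' outcome")
# roughly 1 - 0.95**20, about 64%, purely by chance

Every one of those spurious “positives” is a candidate for being quietly promoted to primary outcome.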
Both of these problems are supposed to have already been fixed by clinical trials registers: before you start your trial, you publish the protocol, saying exactly what your primary outcome is, how many people are in your trial, when it will finish, and so on. Then people can see if your trial has gone missing in action, and we can see if you have misled us, changing your primary outcome, by looking at the protocol and the finished academic paper side by side.
This only works if it is enforced. In 2005, the International Committee of Medical Journal Editors announced they would only publish trials that had been registered. Many journals check initial protocols against finished academic papers. So Sylvain Mathieu and colleagues checked up on the system: they gathered together all of the randomised controlled trials in cardiology, rheumatology, and gastroenterology published during 2008 in the 10 biggest general medical and specialty journals.
Of these 323 trials, fewer than half were adequately registered, meaning registered before the end of the trial with the primary outcome clearly specified. Trial registration was entirely lacking for 89 trials. This permissiveness means that drug companies know they can get away without registering trials, and so the deaths caused by missing data will continue.
Then they looked more closely at the trials which were properly registered, and found discrepancies between the outcomes stated at registration and the outcomes published in the final paper in a third of all papers: in almost all the papers where it was possible to assess the switch, duff outcomes had been switched out in favour of ones that showed a positive finding.
You might find it boring, but our failure to ensure full, undistorted publication of all trial data is the single most important issue in medicine today, because this is the only way we can know if a treatment does good, or harm. The story may be less emotive than one dead teenager, but it costs many more lives, and you should struggle to be angry about it, because the boring regulators we trust to monitor boring problems have repeatedly failed us on this one. Instead, we rely on good will and vague promises, monitored only by an occasional ad hoc analysis from an academic on a whim. This is a broken system. Write 1,592 stories about that.
Mathieu, S. et al., 2009. Comparison of Registered and Published Primary Outcomes in Randomized Controlled Trials. JAMA, 302(9), 977-984.
jcmacc said,
October 3, 2009 at 1:05 am
Sorry, but I’m missing the evidence to support the exceedingly strong accusations made here. What support is there for “murderous” regulators, or for the claim that Pharma are allowing “deaths” to continue? What evidence is there that the FDA and other regulatory authorities are deliberately keeping quiet about trial results they are aware of, i.e. which new drugs are causing deaths, or which patients are being given ineffective new drugs over effective old ones?
This is badly lacking context. At what stage of clinical trials are primary endpoints being switched? Phase III is serious, as those trials are designed to gain authority for use and can therefore alter what patients are given; for Phase I or II, so what? More trials with a redefined endpoint would have to be done after those trials before any patients were denied currently available, efficacious drugs.
The fact that trial databases aren’t yet perfect isn’t a good thing, but equally it does not prove “murderous” behaviour by regulators or Pharma. FFS, it’s like some anti-establishment conspiracy from Whale.to
thepoisongarden said,
October 3, 2009 at 8:47 am
I suppose it’s part of the usual problem of trying to change the system from within. You write a piece about people being unwilling to publish things which don’t suit them and the Guardian edits out your reference to a Daily Mail story.
But, hooray, hooray for the rest of the opening paragraph. You can guarantee that, years from now, there will still be stories suggesting this poor girl died because of the vaccine.
ayupmeduck said,
October 3, 2009 at 9:13 am
@jcmacc – I guess that evidence is in the Journal of the American Medical Association paper that Ben refers to. Don’t know exactly which paper this is though, or if it’s available without nipping down to my local Uni library.
J155 said,
October 3, 2009 at 10:25 am
I don’t want to nit-pick, but I remain unconvinced that the “switching endpoints” phenomenon you’re talking about is a real problem. I know that you, Ben, have an unhealthy obsession with type I error (as if that’s the real problem with valid inference), but given how easy it is to specify the exact probability of getting a type I error, and given that this is pretty easy to manipulate (in the sense that you can make the “test” “harder” very easily), I don’t see that it matters much in practice.
Suppose we have a trial of a drug that is meant to cut heart disease, but the data say that the drug doesn’t do that, while suggesting a substantively important effect upon blood pressure. Do you really want them to throw that data away and not publish, just because it wasn’t the original goal of the research? If the effect size is large and we have suitable values to satisfy our concerns about the inference (i.e. reasonable type I and type II error rates), then go for it, I say.
You could try reading “The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives”.
srbishop said,
October 3, 2009 at 10:26 am
@jcmacc I believe the paper is JAMA 302 (9), 977-984 (2009) jama.ama-assn.org/cgi/content/abstract/302/9/977
Hopefully Ben can link this in when he gets a chance.
I understand Ben’s piece as saying that keeping quiet on certain data misleads doctors, leading them to incorrectly prescribe and thereby indirectly leading to unnecessary deaths or at least less effective medicine. If true, then the knock-on effect is obvious (and ignored).
FYI, he doesn’t call the regulators murderous, only the problem.
srbishop said,
October 3, 2009 at 10:30 am
@J155 I agree that the data on blood pressure should be published, but what’s the harm in them also publishing that they were originally looking at the effect on heart disease and found no effect (or a negative effect)? Withdrawing (or ignoring) that information negates any benefit the blood pressure data yields.
tialaramex said,
October 3, 2009 at 11:11 am
J155, what if the “heart disease” drug seems to cure hiccups? Reduce incidence of skin cancer? Improve hand-eye co-ordination? Make your team more likely to win at football?
In practice the effect size is usually pretty small in this type of research, so we’re not talking about missing out on a breakthrough blood pressure drug just because some fool thought it might cure heart disease.
But you also skipped the question of honesty and ethics. If I lie, but with good intent, is that not still unethical? When the endpoint is changed and the final paper doesn’t mention that, the researchers are lying to you. As with my dice example, this can make a critical difference to the interpretation of their results, and in many cases the researchers simply aren’t qualified to know if their conclusions are still valid – they don’t ask the team statistician “is this method still valid if I’ve lied about the experiment”, for obvious reasons.
Ben Goldacre said,
October 3, 2009 at 11:42 am
reference added
the present system of selective non publication of unflattering evidence means that the entire evidence base which doctors use to make treatment decisions is distorted, so people are constantly in small or large ways making decisions about healthcare where their judgements about the relative benefits, risks, costs, and opportunity costs of treatments are based on evidence which has been distorted. in medicine, irrational healthcare decisions cause unnecessary suffering and cost lives.
there is absolutely no excuse for not having simple clear regulations in place that make people publish all their protocols and the true unspun data they collected from patients in trials in a timely fashion. if the question is how do you fix the system then i would say journals have shown themselves to be failing and we should consider developing alternative models for dissemination of results along the lines of people uploading data, inclusion and exclusion criteria etc to databases with a standardised structure roughly following CONSORT, but that is a looong discussion.
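as a very rough sketch of the kind of standardised record i mean (the field names here are purely my illustration, not a real schema, loosely echoing CONSORT-style items):

from dataclasses import dataclass, field
from typing import List

@dataclass
class TrialRecord:
    # illustrative fields only; not any existing registry's format
    registry_id: str                       # e.g. an ISRCTN or ClinicalTrials.gov number
    primary_outcome: str                   # declared before the trial starts, never changed silently
    secondary_outcomes: List[str] = field(default_factory=list)
    inclusion_criteria: List[str] = field(default_factory=list)
    exclusion_criteria: List[str] = field(default_factory=list)
    planned_sample_size: int = 0
    planned_end_date: str = ""             # from the protocol, so missing trials are easy to spot
    results_deposited: bool = False        # the raw, unspun data, uploaded in a timely fashion

example = TrialRecord(
    registry_id="ISRCTN00000000",          # placeholder
    primary_outcome="reduction in heart attacks at two years",
    planned_sample_size=2000,
)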
HolfordWatch said,
October 3, 2009 at 2:08 pm
This seems like a useful extension of the discussion initiated in 2000:
McCormack J, Greenhalgh T. Seeing what you want to see in randomised controlled trials: versions and perversions of UKPDS data. United Kingdom prospective diabetes study. BMJ. 2000 Jun 24;320(7251):1720-3.
Sili said,
October 3, 2009 at 6:23 pm
Wouldn’t it be an idea for the editors (they still exist, don’t they?) to add that information to the beginning of all articles published? It’s not like there’s any real length limit on the internet, anyway. So why not put in a standard paragraph: “The trial reported in this study was (was not) registered and the stated goal was (was not) reported to be yadda yadda”?
Staphylococcus said,
October 4, 2009 at 4:38 am
J155 said:
“Suppose we have a trial of a drug that wants to cut heart disease but the data says that the drug doesn’t do that, but it suggests a substantively important effect upon blood pressure, do you really want them to throw that data away and not publish just because it wasn’t the original goal of the research?”
The problem is that the original trial wasn’t designed to find factor X. It was designed to determine the effect of the drug on ailment Y, and therefore may not be properly controlled for any other effects that come out. That’s not to say that it doesn’t really help with factor X (hence the data should not be ignored), but it certainly doesn’t demonstrate it satisfactorily. The only real conclusion you can draw is that if you really think this drug will help factor X, you should do a proper trial designed to find it.
Filias Cupio said,
October 4, 2009 at 1:36 pm
J155:
There is nothing to say they can’t report the blood pressure result. They just have to make it clear that the drug failed to produce its expected primary result (reducing heart attacks). People can make their own judgement as to what it means.
In this particular case, I think the logic goes as follows: high blood pressure is a big danger factor in heart attacks. For the most part, lowering blood pressure is only useful because it is presumed to prevent heart attacks. If you’re doing a trial on the cheap, you might measure only blood pressure (a ‘surrogate outcome’) instead of heart attacks. If the drug helps blood pressure, you presume it is good, because lower blood pressure should prevent heart attacks. However, a drug which lowers blood pressure but does not prevent heart attacks is of little or no use.
(Note, I am not a medical doctor – for all I know, high blood pressure has other bad effects, in which case the drug could still be useful if it prevents those.)
An analogy: a given road is particularly bad for fatal car crashes. The council paints confusing markings on the road, on the theory that it will cause people to slow down, and so be less likely to die in a crash. They monitor the traffic, find that the cars are indeed now slower, pat themselves on the back and conclude the plan was a success. However, there are actually more crashes now, and even with a higher survival rate, more deaths. If the council claimed they would measure deaths, but found the speed data was good and the deaths data was bad, and so published only the speed data and pretended they never collected the deaths data, that’s an example of changing the stated outcome.
PaulB said,
October 4, 2009 at 3:15 pm
If investigators are free to choose which of several possible hypotheses they will use their data to test, and they choose the hypothesis which the data seem best to support, then single-hypothesis significance tests are rendered invalid. Methods such as the Bonferroni correction are available to give valid significance calculations.
If the investigators publish a report which fails to acknowledge that the hypothesis examined was chosen in the light of the data gathered, then presumably they will also fail to use an appropriate significance test, and readers of the report will not be able to tell that the significance reported is invalid.
So there are two problems in covertly changing the primary outcome. One is that data failing to support the original hypothesis will not be published as such. The other is that the results will appear to support the new hypothesis more strongly than in fact they do.
There is no problem in openly changing the primary outcome. But the results will usually be weak.
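For concreteness, a bare-bones sketch of the Bonferroni correction mentioned above (my own illustration, nothing from the JAMA paper): each of the k p-values is compared against alpha divided by k, rather than against alpha itself.

def bonferroni_significant(p_values, alpha=0.05):
    """Which hypotheses survive once the threshold is divided by the number of tests."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Five outcomes measured; the corrected threshold is 0.05 / 5 = 0.01,
# so only the first result survives.
print(bonferroni_significant([0.004, 0.03, 0.2, 0.045, 0.6]))
# [True, False, False, False, False]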
Bridget said,
October 4, 2009 at 4:57 pm
I know this is slightly off the point, but has anyone had the misfortune to see the headline story in The Sunday Express today? Some American doctor says of the death of that poor wee girl some hours after her vaccine that “it’s implausible that she died of a tumour”, essentially because she did not have symptoms prior to that. So not only does this “expert” say the coroner is lying (based on what, I ask?), but she clearly has little clinical nous. Those of us working in frontline hospital medicine have all been shocked by the rapid death of (a) patient(s) (delete according to years in practice) who was “previously fit and well” and turned out to have some weirdly aggressive tumour.
This is clearly going to run as the Sunday Express prefers nonsense and scaremongering to the simple tragedy that young people can die of strange things, albeit rarely.
I wonder what Jade Goody would say?
Bigglestopcat said,
October 4, 2009 at 6:45 pm
I am no scientist, but I remember well my late father’s shock when, in the early days of bronchodilators, he was running a clinical trial for the drug company then known as Astra, for whom he had conducted a number of trials in the past. Astra did not like my father’s results and asked him to look again at his data to see if the results might not be different. My father, of course, refused, so not only did Astra scrap the trial (and the results were never published), they dumped my father and never funded a trial with him again. Patients first, eh??
PS. When Andy Burnham-at-the-stake lets me register with any GP practice, I am planning to register with the practice that covers Unst (the northernmost island of the Shetlands) and then request a home visit and, perhaps even better, an out-of-hours visit from the company that covers Unst; now that should be really interesting. Oh, by the way, I live in London. My day-to-day needs can be covered by walk-in clinics.
skyesteve said,
October 4, 2009 at 10:06 pm
Part of the problem with publishing all the evidence for ordinary coal-face docs is that they may not have the knowledge, time or patience to read it all and then come up with a valid, evidence-based decision. They therefore have to rely on those carrying out the trials to report overall conclusions which are factual and objective rather than “spin”. They need journals like the Lancet, NEJM and BMJ to carry objective commentaries on all the original research papers they publish as to the veracity of the claims contained in their conclusions based on the evidence presented (too often I see papers where the conclusions could not be reasonably assumed from the data presented yet it passes without comment – Lancet and MMR scare springs to mind). And they need organisations like SIGN and NICE to be thorough in their diligence and recommendations and to also “fess up” when they get it wrong (aspirin for primary prevention anyone?).
Part of my problem is that a lot of research I read seems to be pointed in the direction of problems that don’t really exist. A new antihypertensive? Well we already have 5 or 6 categories of drugs which have been shown to be (relatively) safe and (relatively) effective, not just at reducing blood pressure but also at improving morbidity and mortality figures (just so you can die of something else instead of course but hopefully a few years later). And whilst I’m the first to acknowledge how much better people feel when they stop their beta blockers (so much so I wouldn’t want them unless I really, really needed them) why do we need another new antihypertensive? Surely for all new clinical research the bottom line should be absolute benefits and numbers needed to treat.
To my mind, too much current research seems to be about making money, not helping Jo(e) Bloggs.
Oh, and by the way Bigglestopcat, you’re right that the practice registration idea is ludicrous if applied to the UK as a whole, but my understanding is that at present it just applies to the London area, in which case there may be some merit in the idea of dual registration provided each practice has access to all the records and there is good communication between them. In any case, at present there is nothing to stop anyone going to any GP anywhere, any time they want, provided the need is urgent (and that includes Unst where, I think, you would get seen without question as a temporary resident whether the need was urgent or not, and where I suspect you would receive excellent services from Dr Hamilton and his team at the Hillsgarth Surgery…)
chris lawson said,
October 5, 2009 at 3:46 am
J155,
There is a big difference between an abstract which says,”Drug X resulted in a 30% reduction in BP but this was not associated with any reduction in heart attacks” as opposed to, “Drug X caused 30% reduction in BP OMFG!!!”
michaelandreas said,
October 5, 2009 at 8:34 am
If you have negative findings, you might have problems finding a journal which publishes your study report. So you have to look for surrogate outcomes which are “significant”.
Malcolm said,
October 5, 2009 at 12:10 pm
Ben seems to be implying that the FDA approves drugs in the full knowledge that they are ineffective. That conclusion would be both unfair to the FDA and an oversimplification that distracts from the real damage caused by publication bias.
The full reviews of any FDA approved drug are available on the fda.gov website thanks to US Freedom of Information law (at least for anything approved within recent memory). If you’re mildly obsessive about evidence-based medicine, it’s worth your while to pick a drug of interest and start reading.
FDA statisticians consider all the available data, including that from any unpublished, negative trials. Furthermore, trials typically need to be registered at the protocol stage, so burying negative results is difficult. For any given indication, the FDA publishes guidelines (also freely available) about the type and level of evidence required to demonstrate efficacy.
The FDA makes its mistakes but I see no sign of a conspiracy to hoodwink the public. Guidelines continue to improve, and the bar for safety and efficacy seems to be getting higher in most therapeutic areas. If you dig into the comments of the FDA reviewers, you’ll usually find conclusions that are more sober and carefully considered than those published in the NEJM, JAMA, Lancet etc.
jcmacc said,
October 6, 2009 at 12:26 am
Ben says:
“the present system of selective non publication of unflattering evidence means that the entire evidence base which doctors use to make treatment decisions is distorted, so people are constantly in small or large ways making decisions about healthcare where their judgements about the relative benefits, risks, costs, and opportunity costs of treatments are based on evidence which has been distorted. in medicine, irrational healthcare decisions cause unnecessary suffering and cost lives.”
The problem with this otherwise accurate statement is that it is not supported by the JAMA paper in question. Most clinical trials do not aim to alter medical practice; they aim to justify later pivotal studies, or to eliminate those new drugs that aren’t good enough to test in those types of study.
The JAMA paper is an interesting exercise, but it is a selective report of clinical trials only from journals the authors consider important (i.e. “high impact”) and in very selective therapy areas – and, thinking of “costing lives” and the importance of accurate reporting etc., areas such as oncology, where efficacy-to-toxicity margins are crucial, are not part of the analysis.
Crucially, the JAMA paper has no useful breakdown of accuracy of reporting against trial stage, and thus lumps phase I trials in with phase III and even phase IV, as if the primary endpoint were equally important in every context; it just isn’t. The authors of the paper, like Ben, have assumed that all trials influence the ultimate prescribing patterns of drugs to patients which, if you understand trials, can’t be true, otherwise there would only be a single phase of testing and no such concept as phase III or “registration trials”.
If the JAMA paper were about pivotal trials only, instead of all trials regardless of stage in selected journals, the findings would have real implications, and Ben’s conclusions would be correct. However, the JAMA paper describes trial databases being uploaded with 220 new trials per week, all of which it holds to equal standards. There’s no way on earth that 220 phase III registration trials, which will influence prescribing and thus patient safety, begin each and every week.
LindsayW said,
October 6, 2009 at 1:01 pm
Perhaps someone can help me with something that’s puzzled me for a while; if someone decides to do a meta-analysis based only on published data, and published data is biased towards showing a positive outcome, can the meta-analysis detect such a bias, reveal it and then take it into account, or is the meta-analysis flawed by its input studies?
harveydodd said,
October 6, 2009 at 5:38 pm
jcmacc:
‘What evidence is there that the FDA and other regulatory authorities are deliberately keeping quiet about trial results they are aware of i.e. what new drugs are causing deaths or what patients are being given ineffective new drugs over effective old ones?’
I’m just not sure Ben suggested this. I think he said that regulators had failed to follow up promises they made about trial results. But I’m really not sure that Ben suggested this.
Calm down next time!
olster said,
October 6, 2009 at 5:51 pm
I think the usual way is to select the high-quality trials (randomised, placebo-controlled, double-blind etc.); then, as you weed out the trials with the weakest methodology, you begin to move closer to where we think the ‘best evidence’ is pointing.
The best examples I know of this are all CAM meta-analyses. Given that an initial all-inclusive meta-analysis may be ambiguous or show a dubious positive (like just about every kind of CAM bias out there), once you get rid of the flawed studies from your exhaustive list, the decent & worthwhile studies are left and hopefully are less susceptible to these biases. Of course, publication bias is probably the most difficult one to counter in this way…
jcmacc said,
October 6, 2009 at 6:41 pm
harveydodd #22 said:
“I’m just not sure Ben suggested this. I think he said that regulators had failed to follow up promises they made about trial results. But I’m really not sure that Ben suggested this”
He didn’t suggest it; he stated it. The statements I objected to are these:
“Bizarrely, regulatory bodies like the FDA get to see this negative data, but often enough doctors do not. This is a familiar problem, and a murderous one, because overall the results of all 8 trials combined might show that the treatment is ineffective…etc”
This clearly states that the FDA hold back safety data, and using the word “murderous” actually implies an intent to kill within the system, given the definition of “murderous”. Unlike the Simon Singh “bogus” argument about definitions, Ben has not proposed any other meaning of the word that would mitigate it by context. It’s far too emotive to use that language, especially as the JAMA paper cannot support any of the concerns, given its weaknesses in lumping all types of trials together as one uniform body.
Also:
“The story may be less emotive than one dead teenager, but it costs many more lives, and you should struggle to be angry about it, because the boring regulators we trust to monitor boring problems have repeatedly failed us on this one.”
I’m not sure what meaning you get from this quote that I’m missing other than that regulators are directly responsible for deaths.
And to repeat: I’m not trying to make light of the core issue, but we’re dealing with evidence-based claims, and the JAMA paper does not give any information to support what Ben over-emotively concludes. Ben would rightly tear apart a woo-meister making grand claims about deaths using a scientific paper that wasn’t connected or was wrongly interpreted, so I think he should be held to the standards he himself applies to others.
Ben Goldacre said,
October 6, 2009 at 7:08 pm
malcolm: “Ben seems to be implying that the FDA approves drugs in the full knowledge that they are ineffective.” no, i’ve not said that, nor do i imply it. i said that they keep data to themselves which would be useful to people who want to know how effective a drug is. a good example of this was the antidepressants episode.
jcmacc: i don’t say that the FDA hold back safety data, i explain that they hold back some efficacy data which drug companies choose not to publish. this is so commonplace that when I debated the head of wyeth at the oxford union last week, fiona godlee and i were amazed to find that he thought this was perfectly reasonable. from your posts, you seem not to believe that withheld data leads to irrational prescribing decisions and so increased suffering and death at a population level. that’s your position, but i don’t think it’s a common one and i cannot empathise with it, or even imagine how you manage to hold it. distorted data leads to inferior treatment decisions. that, in medicine, is a “really bad thing”.
jcmacc said,
October 6, 2009 at 10:35 pm
Ben ~25 said:
“from your posts, you seem not to believe that withheld data leads to irrational prescribing decisions and so increased suffering and death at a population level. that’s your position, but i don’t think it’s a common one and i cannot empathise with it, or even imagine how you manage to hold it. distorted data leads to inferior treatment decisions. that, in medicine, is a “really bad thing”.”
Thanks for replying but that’s a really terrible misrepresentation of what I’m questioning. Obviously you “can’t imagine how I hold my opinion” because you’ve not read what I’ve actually said.
My posts clearly show that I think withheld data from registration or pivotal phase III trials would have that effect and be a very bad thing for patient safety. That’s because those trials are uniquely the ones that would alter prescription practice. It’s stated clearly in my first post. How have you missed that?
What I’ve said, as have others, is that altered primary endpoints on phase I or even phase II trials would have no influence whatsoever on patient safety because none of those trials are designed to, or should, influence what drugs a patient population gets, regardless of outcome. They simply provide evidence to justify the large phase III trials that are powered to give results strong enough to influence patient prescriptions.
I thought your catchphrase was “I’m afraid it’s more complex than that” but you’ve repeatedly ignored the complexity that’s important here.
So my question to you is this: from the JAMA paper you rely on, can you tell me which trials have ambiguous or altered endpoints? Are all trials impacted equally or is it the case that the trials with altered or badly reported primary endpoints are the early stage ones?
To me if those trials are early stage trials, where endpoints are going to be more exploratory by definition, there’s little problem. If however the endpoints on phase III trials are being modified after the fact, your concerns are crucial. It’s a missed opportunity for the JAMA authors not to have focused the analysis on pivotal and regulatory-approval studies or at least provided a breakdown of findings against trial type.
Glenwright said,
October 8, 2009 at 8:48 am
Meanwhile, in La La Land, the denials have begun about a tumour being the cause of death in the tragic case of Natalie Morton.
www.naturalnews.com/027151_cancer_cervical_cancer_Natalie_Morton.html
“Blaming the girl, not the vaccine
Today, the mainstream media is reporting an obviously-fabricated explanation for her death. A pathologist is declaring that Natalie died from a “malignant chest tumor” that just coincidentally and suddenly killed her within hours after she received the cervical cancer vaccine.
This explanation is obviously a cover story to protect the vaccine industry; and it’s not even a convincing cover story at that. Natalie Morton had never been diagnosed with a chest tumor before, and she showed absolutely no symptoms of a cancer tumor. Chest tumors don’t just “lash out” and attack their hosts all of a sudden, without warning. A typical death from a cancer tumor is more often a slow, painful wasting away that can take months or years. Natalie Morton was killed in hours, and the description of her symptoms exactly matches what might be expected from an inflammatory reaction to a chemical vaccine.”
aliportico said,
October 8, 2009 at 10:26 am
I read this morning that the speed at which the PM was done was clearly a sign of panic and guilt.
My friend’s daughter’s school have their HPV jabs today. The day after Natalie Morton died, the school made sure all the girls knew about it and told them to make sure their parents knew about the situation in case it would affect their decisions. Unbelievable.
Anax said,
October 8, 2009 at 1:18 pm
A bit off topic, but at the recent NCRI conference where a certain big pharma company supplied laptops for internet access, I couldn’t help notice that they had added badscience.net to their blocking filters. I’ll have to check now how many of the studies mentioned in Mathieu et al. deal with their drugs…
jcmacc said,
October 8, 2009 at 6:45 pm
Anax #29 said:
“I’ll have to check now how many of the studies mentioned in Mathieu et al. deal with their drugs…”
Good luck trying to extract any information like that from the Mathieu paper. As I’ve been trying to say, there’s absolutely no information on drug, Pharma, phase of trial, nature of endpoint or anything that would give basic context to the issues they are trying to highlight.
tomrees said,
October 8, 2009 at 10:17 pm
@LindsayW: “if someone decides to do a meta-analysis based only on published data, and published data is biased towards showing a positive outcome, can the meta-analysis detect such a bias, reveal it and then take it into account, or is the meta-analysis flawed by its input studies?”
Yes, it can. Basically, a big study is likely to get published whatever the results. It’s the small studies that don’t show up. If the small studies show an effect, but the big studies don’t, then that’s evidence that small, negative studies aren’t being published.
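For what it’s worth, the usual way to formalise that check is Egger’s regression test: regress each study’s standardised effect (effect divided by its standard error) against its precision (one over the standard error) and look at the intercept. With no publication bias the intercept should sit near zero; an intercept well away from zero suggests exactly the small-study asymmetry described above. A rough sketch with made-up numbers:

import numpy as np

def egger_intercept(effects, standard_errors):
    """Intercept of regressing (effect / SE) on (1 / SE); far from zero suggests asymmetry."""
    se = np.asarray(standard_errors, dtype=float)
    standardised = np.asarray(effects, dtype=float) / se
    precision = 1.0 / se
    slope, intercept = np.polyfit(precision, standardised, 1)
    return intercept

# Toy data: small studies (large SE) report big effects, large studies report almost none.
effects = [0.8, 0.7, 0.6, 0.1, 0.05]
ses = [0.5, 0.4, 0.3, 0.1, 0.05]
print(egger_intercept(effects, ses))   # clearly above zero, so suspect missing small negative studies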
Santiago G Moreno said,
October 10, 2009 at 7:39 pm
Although I do not usually promote my work, I will make an exception since I truly believe this can be helpful. The PhD project I am carrying out is exactly about publication bias and outcome reporting bias.
Recently we published a paper in the BMJ (www.bmj.com/cgi/content/full/bmj.b2981) where a couple of methods were presented to tackle these reporting biases. Of course, these methods are not a panacea, but they represent an improvement on what was available until then.
Robert Carnegie said,
October 11, 2009 at 1:44 am
Let’s imagine – suppose you have a new untested heart drug which you think could reduce heart disease. (Never mind why you think that.) Suppose the actual effect of the drug is simply to destroy heart muscle tissue so that it’s dead and no longer functions. But would that lower blood pressure – if the heart isn’t working properly?
Sources I just looked at do seem to say no, it would put blood pressure up, so it’s quite a bad example.
gezznz said,
October 30, 2009 at 6:59 pm
I have never heard of anyone dropping dead from a “massive, previously undiagnosed” tumour. You have to be almost brain-dead not to feel unwell with advanced cancer. Something smells rotten, I reckon.