Here’s a piece by me in the British Medical Journal this week, published online already, and in the print edition this Friday. It’s a head to head with Vincent Lawton, who until recently was head of Merck in the UK. Briefly, I set out the quantitative evidence demonstrating the scale of the problem, and he says: “oh, we’ve fixed everything now, and anyway some academic trials are dodgy too, here’s one what I found”. That’s a paraphrase, you can read his response for free on the BMJ website here, since they’ve decided that this is an important issue which deserves open access. If you’ve got something really clever to say about these pieces then you might also want to comment in the “Rabid Response” section of the BMJ version of either article.
We were going to have a debate on the Today programme on Monday morning, and then tomorrow morning, but unfortunately it’s been ditched. If you work in mainstream media and would like to cover this issue I’m always keen, and amazingly easy to get hold of, email@example.com. Although I realise that your idea of a meaningful critique of the crimes of big pharma is “chemotherapy hurt my grandma that’s why I love vitamin pills and hate teh vaxxines lol freedom”.
Incidentally, if the text is too small (on any site) hold down the CTRL key and press “=” or “+” on your keyboard.
Head to Head
Is the conflict of interest unacceptable when drug companies conduct trials on their own drugs? Yes
Ben Goldacre, doctor and writer
1 Nuffield College, Oxford OX1 1NF
Ben Goldacre argues that the financial interests of drug companies lead to distorted evidence, but Vincent Lawton (doi:10.1136/bmj.b4953) believes that adequate safeguards exist to keep bias in check
The practice of medicine is based on evidence. We need this evidence base to be complete, and of the highest quality, so that we can make the right decisions, but at present, drug companies produce most of the evidence we use. There is no doubt that these companies have a conflict of interest when they conduct trials: they want to sell their products, and so naturally they want a positive result from the trials they sponsor. But there is now good evidence from systematic reviews, meta-analyses, and case studies that this conflict of interest results in bad evidence, which distorts medical decision making and so harms patients.
We will start with a tangible story, from a single field. Rochon et al [1] analysed the literature on non-steroidal anti-inflammatory drugs (NSAIDs) and found all the studies ever published in which one NSAID was compared with another. In every single trial, the sponsoring company's drug was either equivalent to, or better than, the drug it was compared with: all the drugs were better than all the other drugs. Such a result is plainly impossible.
A systematic review [2] found 30 studies investigating whether industry funding is associated with outcomes that favour the funder: studies sponsored by drug companies were more than four times as likely to have outcomes favouring the funder, compared with studies with other sponsors.
How does this systematic bias come about? One answer is questionable trial design. Studies are conducted, for example, in which the competitor drug is given at an inadequately low dose, making it appear less effective, or worse, at an unusually high dose, increasing the risk of side effects, and so making the sponsor's drug appear preferable [3].
Another common problem is that the industry can choose which data to publish, and which to leave unavailable. Much has been written on eye-catching stories, such as the difficulties in getting clear information about the number of suicide attempts in industry trials of SSRI antidepressants [4] or the number of heart attacks in patients on rofecoxib (Vioxx) [5].
Equally concerning is the routine grind of publication bias, where disappointing negative results on the benefits of treatments quietly disappear. This phenomenon has been demonstrated in many fields, notably that of SSRIs [6], and in some areas of medicine its scale is staggering. Ramsey and Scoggins [7] went to clinicaltrials.gov and found all the trials on cancer: 2028 in total. Only 17.6% of these trials could be found published on PubMed, but 64.5% of those that were published reported positive results. Restricting their analysis to only industry sponsored trials, these results became even more extreme: just 5.9% were on PubMed, but of those trials, 75.0% gave positive results.
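The scale of that gap is easier to see as raw counts. A rough back-of-envelope sketch: the trial numbers below are simply implied by the percentages in the text, rounded to whole trials, and are illustrative rather than taken from the paper itself.

```python
# Back-of-envelope reading of the cancer trial figures quoted above.
# Counts are implied by the percentages (17.6% published, 64.5% of
# those positive), rounded to whole trials -- illustrative only.

total_trials = 2028                        # cancer trials on clinicaltrials.gov
published = round(total_trials * 0.176)    # found on PubMed -> 357
positive = round(published * 0.645)        # positive among published -> 230
unpublished = total_trials - published     # results invisible to readers -> 1671

print(f"published: {published} of {total_trials}")
print(f"positive among published: {positive}")
print(f"unpublished: {unpublished}")
```

In other words, a reader searching PubMed sees a small, heavily positive slice of the evidence, while the results of roughly four out of five registered trials are simply invisible.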
And while disappointing results lie unpublished, positive results may be published repeatedly, in ways that are hard to spot. One group conducting a meta-analysis on the efficacy of ondansetron [8] made a striking discovery: data from 3335 patients in nine trials had been published more than once, in 14 further reports. None of these duplicate publications used a clear cross reference, so there was no easy way for a casual reader to see that each was not a new trial. Crucially, and perhaps inevitably, data showing a greater benefit from ondansetron were significantly more likely to be published twice.
It is inevitable that publishing positive results more than once will cause doctors to think a drug is better than it really is: doctors are busy, and cannot each conduct forensic checks on every trial they read. One study estimated that reading every published article relevant to primary care alone would take a physician more than 600 hours a month [9]. Duplicate publication and dubious methodological tweaks will therefore be missed, and the ondansetron meta-analysis illustrates how far these practices can cause doctors to overestimate a drug's efficacy: including the duplicated data led to a 23% overestimation of the drug's antiemetic efficacy [8].
The problems I have described are not new; they have been documented many times before. They could be fixed, without taking research out of the hands of industry altogether, but doing so would require drug companies to recognise the scale of this scandal and campaign themselves for more effective regulation: demanding, for example, full mandatory publication of all trial data, from themselves and their competitors.
Instead we see inertia, and the failure of regulators to engage adequately with these serious problems. In medicine, bad information leads to bad decisions: we prescribe one drug where an alternative would have been more effective or had fewer side effects; or we prescribe an expensive drug unnecessarily, when a cheaper alternative would have been equally effective, and so deprive the community of limited healthcare resources. This is dangerous and absurd. Doctors making treatment decisions need access to good quality trial data, presented transparently, and all of it, not just the positive findings that drug companies choose to share.
Cite this as: BMJ 2009;339:b4949
Based on the Great Oxford Debate on 23 September 2009 at the Oxford Union, Oxford University, sponsored by PharmaTimes. Competing interests: BG has written newspaper articles and part of a book criticising questionable activities in the drug industry, and has a Clinical Research Training Fellowship from the Wellcome Trust.
1. Rochon PA, Gurwitz JH, Simms RW, Fortin PR, Felson DT, Minaker KL, et al. A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 1994;154:157.
2. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003;326:1167-70.
3. Safer DJ. Design and reporting modifications in industry-sponsored comparative psychopharmacology trials. J Nerv Ment Dis 2002;190:583-92.
4. Fergusson D, Doucette S, Glass KC, Shapiro S, Healy D, Hebert P, et al. Association between suicide attempts and selective serotonin reuptake inhibitors: systematic review of randomised controlled trials. BMJ 2005;330:396.
5. Hippisley-Cox J, Coupland C. Risk of myocardial infarction in patients taking cyclo-oxygenase-2 inhibitors or conventional non-steroidal anti-inflammatory drugs: population based nested case-control analysis. BMJ 2005;330:1366.
6. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.
7. Ramsey S, Scoggins J. Commentary: practicing on the tip of an information iceberg? Evidence of underpublication of registered clinical trials in oncology. Oncologist 2008;13:925-9.
8. Tramèr MR, Reynolds DJ, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: a case study. BMJ 1997;315:635-40.
9. Alper BS, Hand JA, Elliott SG, Kinkade S, Hauan MJ, Onion DK, et al. How much effort is needed to keep up with the literature relevant for primary care? J Med Libr Assoc 2004;92:429-37.