I did a new talk at TED, on drug companies and hidden data.

September 28th, 2012 by Ben Goldacre in bad science, big pharma, onanism, podcast, publication bias | 10 Comments »

I did a new talk at TED about drug companies hiding the results of clinical trials; it went up today. This is a huge, ongoing problem, and it results in patients suffering and dying unnecessarily. So I’m really pleased that TED were able to give the story a platform. Video after the click:


That said, all I can think when I watch this jetlagged video is “fix your hair, take your hands out of your pockets, and for god’s sake tell some jokes”.

If you want to read more about this problem of “publication bias”, there’s lots on this site, here, and even more in Bad Pharma, a book I wrote about how to fix the problem of bad behaviour in the pharmaceutical industry (and among doctors, academics, regulators, patient associations…).

More on publication bias:


A great blog from TED


The book:


The foreword:

Here’s the foreword to my new book.

An extract in the Guardian:


Previous TED talk:

If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!

10 Responses

  1. mikey said,

    September 28, 2012 at 12:17 pm

    I can’t say I didn’t notice the hair, but the talk was really good: rapid pacing with good spacing and insightful analogies that really helped keep things moving along. You really do seem to be one of the best at the TED talk format.

  2. robb said,

    September 28, 2012 at 8:25 pm

    Hi Ben,

    In my experience, papers with actual negative results are very rare, at least in medical science. When reviewers get over their surprise, they’re quite willing to accept them. There IS a bias against publishing *inconclusive* results – papers where the experiment failed to show a significant effect but where either the variance is simply too large to draw any meaningful conclusions or the authors have not attempted to calculate limits on the maximum effect.

    It seems to me the general problem is not publication bias (although it is certainly true that drug companies would prefer only positive studies be published) but rather the poor statistical skills of many researchers. They lack the ability to produce papers reporting negative results.

    Interpreting non-significant results as “negative” is dangerous. A pair of papers caused a stir a couple of months ago in my field, multiple sclerosis. One found that treatment with disease modifying therapies did have a beneficial effect on long term outcome. The other did not. An excited PhD student announced to our institution mailing list that we’d be having a special journal club to discuss these “opposite” results. He was quite confused when I commented that the two papers were completely consistent with each other.

    The “negative” paper, on closer examination, HAD calculated confidence intervals on their result. The high side of the 95% confidence interval corresponded to treatment being the single most predictive factor of disease progression studied, more so than such obvious factors as how fast your disease had progressed in the past, or how long you had had it. The lower CI just brushed 0. Nestled somewhere in the middle was the estimate of the positive effect from the other paper. The authors went so far as to suggest in their discussion that we should reconsider how we treat patients.

    This influential paper, from a good group and published in a good journal, had made one of the most basic statistical mistakes: assuming that a non-significant result meant “no effect.”
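    To make the point concrete, here is a minimal numerical sketch (the numbers are invented for illustration, not taken from either paper): an effect estimate whose 95% confidence interval just brushes zero is "non-significant", yet the same interval is consistent with a large benefit.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical effect estimate and standard error (illustrative only)
estimate = 0.50
std_err = 0.26

z = estimate / std_err                  # ~1.92
p_value = 2.0 * (1.0 - normal_cdf(z))   # ~0.054: "not significant"
ci_low = estimate - 1.96 * std_err      # ~-0.01: just brushes zero
ci_high = estimate + 1.96 * std_err     # ~1.01: consistent with a large effect

print(f"p = {p_value:.3f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
# p > 0.05, but the interval spans everything from "no effect" to
# twice the point estimate: "no significant effect" is not "no effect".
```

    Under the mistaken "non-significant means negative" reading, this hypothetical study would be reported as a failure to find an effect, even though its own interval contains a substantial positive estimate.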

    Your thoughts?

  3. d10 said,

    September 29, 2012 at 7:25 am

    What was the specific paper you referenced as the systematic review of publication bias published in 2010? (referenced at 8:30 in the video)

  4. medic_cat said,

    September 29, 2012 at 10:55 am

    The references are all in the TEDMED Science pack here. www.tedmed.com/videos-info?name=Ben_Goldacre_-_Q&A_at_TEDMED_2012&q=updated&year=all

  5. 0101011 said,

    October 5, 2012 at 3:23 am

    Mandating the publication of all medical studies is as desirable as it is impractical. Many studies have probably been “lost”. Costly lawsuits will have to be fought. Whether we should or should not, I know not. All I know is that it is not, by any stretch of the imagination, “simple”.

    In other words, you know a lot about science and little about law.

    None the less, much of your work remains dear to me. A statistically significant portion in fact.

  6. chemist.jones said,

    October 7, 2012 at 3:56 pm

    0101011, I don’t think you would necessarily have to retroactively publish results. How would you choose how far back to look? I think instead the best course of action is a fresh slate and a new requirement. This approach would be simple – or at least manageable.

    Ben, as usual you killed this TED Talk. You’re a very engaging speaker. It seems that the TED format was designed for you.

  7. rjmunro said,

    October 7, 2012 at 10:37 pm

    Shouldn’t NICE be able to fix this? If they have evidence that a drug has had trials that were not registered, they should not recommend it for use in the NHS, at least until the drug company pays for a large, independent, properly registered retrial.

  8. Amanda Acevedo said,

    October 11, 2012 at 6:00 pm

    You did great! And can I just say THANK YOU!!! Thank you for the information, thank you for being brave, thank you for addressing the problem and thank you for your books.

  9. Alexandre said,

    December 17, 2013 at 2:26 pm

    Hi Ben,

    Congratulations for your talk and for your efforts to promote transparency.

    As a data mining company, we have experience working with pharmaceutical companies on analyzing their data, and beyond the fact that negative studies are not published as they should be, the fact is that any kind of AVERAGE result obtained in a few hundred – or even a thousand – patients should be handled VERY carefully.

    Our experience is that in any study – negative or positive – there are usually several profiles of good responders and several profiles of non-responders, as well as several profiles of patients at risk of specific adverse events. These profiles are just “samples” of the profiles existing in the millions of patients who might be candidates for these drugs.

    Key questions are:

    1) Are pharmaceutical companies able to detect these profiles of good/bad responders, and profiles of patients at risk of adverse events, using conventional statistics and modeling?

    2) Is it likely that all these profiles are represented in successive clinical trials, and always in the same relative proportions? Or do their proportions vary from one study to another? (In other words, do sampling biases exist?)

    3) When building models of response or toxicity based on these clinical trials, conventional statistics necessarily makes the assumption that the study samples are comparable with each other, and all representative of the parent population. Is this assumption acceptable, given that the answers to questions 1) and 2) are obviously negative?

    I don’t want to be irrelevant here, but I really think that, beyond the disclosure and publication of negative results, the real revolution pharmaceutical companies should go through – for the sake of patients – will be to properly exploit their raw data, avoiding the flaws of averages, and really open the way to personalized medicine.

    I would love to have your thoughts on this.

    Kind regards,


  10. Ben Goldacre said,

    December 17, 2013 at 2:30 pm

    I agree there are huge benefits here. I discuss this in Bad Pharma, in my comments to the IoM (on tumblr), and in a piece coming soon in Clinical Trials!