Evidence to House of Commons Sci Tech Select Committee on Research Integrity

December 5th, 2017 by Ben Goldacre in alltrials campaign, publication bias

Sorry not to be in regular blogging mode at the moment. Here’s a video of our evidence session to parliament, where they are running an inquiry into research integrity. I think clinical trials are the best possible way to approach this issue. Lots of things in “research integrity” are hard to capture in clear logical rules, so you end up with waffly “concordats” and rules that are applied inconsistently. With clinical trials you can make clear rules, you can measure compliance, and you can enforce compliance. There is lots of chat about this in the video below from 17:37, with me, Simon …

How do the world’s biggest drug companies compare, in their transparency commitments?

July 27th, 2017 by Ben Goldacre in bad science

Here’s a paper, and associated website, that we launch today: we have assessed, and then ranked, all the biggest drug companies in the world, to compare their public commitments on trials transparency. Regular readers will be familiar with this ongoing battle. In medicine we use the results of clinical trials to make informed decisions about which treatments work best; but the results of clinical trials are being routinely and legally withheld from doctors, researchers, and patients. This is a problem for industry-sponsored trials, and for trials funded by governments and charities.

So what did we find? The results on the individual companies are important, but we also came across some fascinating patterns. While companies superficially have commitments to register and report clinical trials, in reality there are often huge gaps in their policies, with many failing to include past trials (trials on the medicines we use today) and trials on off-label uses or unlicensed medicines, both of which are important. We also found a huge range of commitments, which is exactly what audits are good for: identifying who’s doing well, and who’s doing badly, so that everyone can learn from the best players. Lastly, as we went along we collected some fascinating examples of problematic policies, ambiguous language, inconsistent commitments, odd exclusions, and so on.

Overall this audit was a huge project, and we hope it will be widely used. You can see which companies are the best, and the worst. If you’re a researcher trying to get information on a trial from a company, you can use this to determine whether a company is breaching its commitments. If you’re an ethical investor (at the AllTrials campaign we have a network of dozens, covering €3.5 trillion of investments) you can use this to guide your activist investment choices.

The full methods and results can be read, for free, in the paper. But we’ve also built a nice interactive website with mySociety (coming soon) to make the data more accessible. We think this is an important aspect of communicating results and making them useful, and used, and we’re keen for feedback on the site.

Coming next, we have ranked the policies of non-industry trial funders, and that paper will land shortly. We also have some great new and improved projects launching soon where we track the performance of institutions, rather than their promises: the proportion of their completed trials for which they have shared results. Meanwhile, you can read more about the battle for unreported clinical trials at AllTrials.net.
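The institutional metric described above is straightforward to compute. Here is a minimal sketch in Python, assuming a hypothetical list of trial records with `sponsor`, `completed`, and `results_shared` fields (all field names are illustrative, not taken from the actual project):

```python
from collections import defaultdict

def reporting_rates(trials):
    """Proportion of each sponsor's completed trials with shared results."""
    completed = defaultdict(int)
    reported = defaultdict(int)
    for t in trials:
        if t["completed"]:
            completed[t["sponsor"]] += 1
            if t["results_shared"]:
                reported[t["sponsor"]] += 1
    return {s: reported[s] / completed[s] for s in completed}

# Toy data, purely for illustration
trials = [
    {"sponsor": "A", "completed": True, "results_shared": True},
    {"sponsor": "A", "completed": True, "results_shared": False},
    {"sponsor": "B", "completed": True, "results_shared": True},
    {"sponsor": "B", "completed": False, "results_shared": False},
]
print(reporting_rates(trials))  # {'A': 0.5, 'B': 1.0}
```

The real work, of course, is not the arithmetic but assembling a complete, accurate register of each institution’s completed trials in the first place.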

Onward!

Meaningful Transparency Commitments: the WHO Joint Statement from Trial Funders

July 26th, 2017 by Ben Goldacre in bad science, publication bias

By now I hope you all know about the ongoing global scandal of clinical trial results being left unpublished, and of course our AllTrials campaign. Doctors, researchers, and patients cannot make truly informed choices about which treatments work best if they don’t have access to all the trial results. Earlier this year, I helped out with a World Health Organisation project to get non-industry clinical trial funders signed up to making better policies on transparency. This BMJ editorial (sorry, I’m late posting it, published last month!) describes the new commitments, and why this commitment is more convincing than previous vaguer statements.

How many epidemiologists does it take to change a lightbulb?

February 1st, 2017 by Ben Goldacre in bad science

Robin Ince just asked if I know any epidemiologist lightbulb jokes. I wrote this for him.

How many epidemiologists does it take to change a lightbulb?

We’ve found 12,000 switches hidden around the house. Some of them turn this lightbulb on, some of them don’t; some of them only work sometimes; and some of them work sometimes, but twenty years after you flick them. Some of the switches only work, sometimes, twenty years later, if one of the other switches is flicked too (and at the right time). In any case the wiring’s rusty, everything’s completely different in the house next door, and by the way there are lots of people selling spare bulbs who tell lies about houses, switches, and fingers.

We can change the lightbulb, but I’m not sure that’ll stop you dying from cancer in this metaphor.

“Transparency, Beyond Publication Bias”. A video of my super-speedy talk at IJE.

October 11th, 2016 by Ben Goldacre in bad science

People often talk about “trials transparency” as if this means “all trials must be published in an academic journal”. In reality, true transparency goes much further than this. We need Clinical Study Reports, and individual patient data, of course. But we also need the consent forms, so we can see what patients were told. We need the analytic code, so we can see exactly how the data were analysed. We need access to post-publication peer review, so we can see what design flaws others have identified. And we don’t just need these things to be publicly available, in some form or another: we ideally need them to be available as open data, freely shareable and re-usable, which is a very different kettle of ballpark. And then, of course, we need this data to be used, which means we need to think about building tools that make it useful. Finally, we can’t just whine about the world not being as we would wish it to be, or write academic papers describing the problem: we need a practical theory of change, and a set of clear strategies that will deliver greater transparency.

This is my talk at the International Journal of Epidemiology Conference, 2016. It takes 29 minutes of your life, at speed: I hope you find it useful.


You should totally watch this entire day of the IJE conference

October 7th, 2016 by Ben Goldacre in bad science

Today marks the end of an era. The International Journal of Epidemiology used to be a typical hotchpotch of isolated papers on worthy subjects. Some were interesting, or related to your field. Under Shah Ebrahim and George Davey-Smith it became like nothing else: an epidemiology journal you’d happily subscribe to with your own money, and read in the bath.

An audio interview with The Conversation, on smashing the walls of the Ivory Tower

October 3rd, 2016 by Ben Goldacre in bad science

The Conversation is a great media outlet, because it’s run by academic nerds, but made for everyone. I had a nice time chatting with them last week: we discussed transparency, data sharing, statins, research integrity, risk communication, culture shift, academic activism, and why we should kick through the walls of the ivory tower. Caution: contains nerds!

theconversation.com/speaking-with-bad-pharma-author-ben-goldacre-about-how-bad-research-hurts-us-all-65800

Sarepta, eteplirsen: anecdote, data, surrogate outcomes, and the FDA

September 30th, 2016 by Ben Goldacre in bad science

The Duchenne’s treatment made by Sarepta (eteplirsen) has been in the news this week, as a troubling example of the FDA lowering its bar for approval of new medicines. The FDA’s expert advisory panel voted against approving this treatment, because the evidence for any benefit is weak; but there was extensive lobbying from well-organised patients and, eventually, the FDA overturned the opinion of its own panel. There have been calls for paper retractions, and so on.

This is not the first time we’ve seen peculiar activity around the treatment.

The Cancer Drugs Fund is producing dangerous, bad data: randomise everyone, everywhere!

September 28th, 2016 by Ben Goldacre in bad science

There are recurring howls in my work. One of them is this: in general, if you don’t know which intervention works best, then you should randomise everyone, everywhere. This is for good reason: uncertainty costs lives, through sub-optimal treatment. Wherever randomised trials are the right approach, you should embed them in routine clinical care.

This is an argument I’ve made, with colleagues, in endless different places. New diabetes drugs are approved with woeful data, small numbers of patients in trials that only measure blood tests, rather than real-world outcomes such as heart attack, renal failure, or death: so let’s roll out new diabetes treatments in the NHS through randomised trials. We rely on observational studies to establish whether Tamiflu reduces complications such as pneumonia: that’s silly, we can do trials, and we should. Statin treatment regimes in widespread use have never been compared head-to-head, using real-world outcomes such as heart attack, stroke, and death: so let’s embed randomised trials as cheaply as possible in routine clinical care (we’ve done two pilots, to document the barriers).

This week a dozen colleagues and I published yet another application of this basic, simple principle, as an editorial in the BMJ. The Cancer Drugs Fund is being marketed as a way to generate new knowledge: but in reality, the data that will be collected is weak …

Taking transparency beyond results: ethics committees must work in the open

September 23rd, 2016 by Ben Goldacre in bad science

Here’s a useful paper we’ve just published in the BMJ, documenting problems in transparency around approval processes for randomised trials. There’s a basic rule in clinical research: you’re only supposed to do a trial comparing two treatments when you really don’t know which one is best, otherwise you’d be knowingly randomising half your participants to an inferior treatment. Despite this, it’s already known that trials are sometimes conducted where one group gets a substandard treatment.

We wanted to find out how ethics committees come to approve such trials.