People often talk about “trials transparency” as if this means “all trials must be published in an academic journal”. In reality, true transparency goes much further than this. We need Clinical Study Reports, and individual patient data, of course. But we also need the consent forms, so we can see what patients were told. We need the analytic code, so we can see exactly how the data were analysed. We need access to post-publication peer review, so we can see what design flaws others have identified. And we don’t just need these things to be publicly available, in some form or another: we ideally need them to be available as open data, freely shareable and re-usable, which is a very different kettle of fish. And then, of course, we need this data to be used, which means we need to think about building tools that make it useful. Finally, we can’t just whine about the world not being as we would wish it to be, or write academic papers describing the problem: we need a practical theory of change, and a set of clear strategies that will deliver greater transparency. This is my talk at the International Journal of Epidemiology Conference, 2016. It takes 29 minutes of your life, at speed: I hope you find it useful.
Today marks the end of an era. The International Journal of Epidemiology used to be a typical hotchpotch of isolated papers on worthy subjects. Occasionally, some were interesting, or related to your field. Under Shah Ebrahim and George Davey-Smith it became like nothing else: an epidemiology journal you’d happily subscribe to with your own money, and read in the bath.
The Conversation is a great media outlet, because it’s run by academic nerds, but made for everyone. I had a nice time chatting with them last week: we discussed transparency, data sharing, statins, research integrity, risk communication, culture shift, academic activism, and why we should kick through the walls of the ivory tower. Caution: contains nerds!
The Duchenne’s treatment made by Sarepta (eteplirsen) has been in the news this week, as a troubling example of the FDA lowering its bar for approval of new medicines. The FDA expert advisory panel decided not to approve this treatment, because the evidence for any benefit is weak; but there was extensive lobbying from well-organised patients and, eventually, the FDA overturned the opinion of its own panel. There have been calls for paper retractions, and so on.
There are recurring themes in my work. One of them is this: in general, if you don’t know which intervention works best, then you should randomise everyone, everywhere. This is for good reason: uncertainty costs lives, through sub-optimal treatment. Wherever randomised trials are the right approach, you should embed them in routine clinical care.
This is an argument I’ve made, with colleagues, in endless different places. New diabetes drugs are approved with woeful data, small numbers of patients in trials that only measure blood tests, rather than real-world outcomes such as heart attack, renal failure, or death: so let’s roll out new diabetes treatments in the NHS through randomised trials. We rely on observational studies to establish whether Tamiflu reduces complications of pneumonia: that’s silly, we can do trials, and we should. Statin treatment regimes in widespread use have never been compared head-to-head, using real-world outcomes such as heart attack, stroke, and death: so let’s embed randomised trials as cheaply as possible in routine clinical care (we’ve done two pilots, to document the barriers).
This week a dozen colleagues and I published yet another application of this basic, simple principle, as an editorial in the BMJ. The Cancer Drugs Fund is being marketed as a way to generate new knowledge: but in reality, the data that will be collected is weak …
Here’s a useful paper we’ve just published in the BMJ, documenting problems in transparency around approval processes for randomised trials. There’s a basic rule in clinical research: you’re only supposed to do a trial comparing two treatments when you really don’t know which one is best, otherwise you’d be knowingly randomising half your participants to an inferior treatment. Despite this, it’s already known that trials are sometimes conducted where one group get a substandard treatment.
Hi there, I’m doing a few events in Australia and NZ this week: in Sydney, Melbourne, Auckland (only 25 tickets left), and Brisbane. Here’s a good fun interview I did with The Conversation that gets very nerdy, on the poor state of science, COMPare, statins, reproducibility and transparency. I’ll post a big backlog of interviews, and papers, over the next week or two. So, come, come, I’ll see you in Oz!
The Cabinet Office has come up with a crazy plan to ban academics like me from talking to politicians and civil servants. In this piece I explain why that is an almost surreally stupid idea. I also describe how I hustle, in Whitehall, to try and get government policy changed on open data, scientific transparency, and evidence based policy. Readers with a weaker constitution should be forewarned that this piece contains lurid descriptions of very positive experiences I have had with Oliver Letwin, and other popular right-wing hate figures. There is no apology for that, in the name of pragmatism and democracy: we should train academics to talk to ministers, and encourage them to do it.
Here’s a strange thing, a seedy curio rather than a massive scandal, but I’d be interested to know what you make of it. This week lots of academics all received the same unsolicited marketing email from a large well known research company called Cyagen, who make transgenic mice, stem cells, and so on. The email was headed “Rewards for your publications”. In it, Cyagen make a rather strange offer: “We are giving away $100 or more in rewards for citing us in your publication!”.
The business model is very specific: if you cite them in an academic paper then you get $100, multiplied by the Impact Factor of the journal (a widely used measure of the journal’s influence). So if you cite them in the New England Journal of Medicine, which has an impact factor of 56, then you will receive $5600 from Cyagen. If you cite them in the British Medical Journal, you get $1700. And so on.
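The offer, as described in the email, boils down to a single multiplication. A minimal sketch of that arithmetic (the function name is mine, not Cyagen’s, and the impact factors are those quoted above):

```python
def citation_reward(impact_factor, base_payment=100):
    """Reward in dollars for citing the company in a journal
    with the given Impact Factor: $100 x Impact Factor."""
    return base_payment * impact_factor

# Figures from the examples above:
print(citation_reward(56))  # New England Journal of Medicine -> 5600
print(citation_reward(17))  # British Medical Journal -> 1700
```

Which is exactly why the scheme is so troubling: the payment scales directly with the prestige of the journal your paper lands in.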
Me and a dozen other academics all just wrote basically the same thing about Open Science in the Journal Of Clinical Epidemiology. After the technical bits, me and Tracey get our tank out. That’s for a reason: publishing academic papers about structural problems in science is a necessary condition for change, but it’s not sufficient. We don’t need any more cohort studies on the global public health problem of publication bias; we need action, of which the AllTrials.net campaign is just one example (and as part of that, we do still need many more audits giving performance figures on individual companies, researchers and institutions, as I explain here). We have a paper coming shortly on the methods and strategies of the AllTrials campaign that I hope will shed a little more light on this, because policy change for public health is a professional activity, not a hobby. Where academics are sneery about implementation, problems go unsolved, and patients are harmed.
Ironically all these papers on Open Science are behind an academic paywall. The full final text of our paper is posted below. If you’re an academic and you’ve ever wondered whether you’re allowed to do this, but felt overwhelmed by complex terms and conditions, you can check every academic journal’s blanket policy very easily here.
And lastly, if you’re in a hurry: the last two paragraphs are the money shot. Enjoy.