A quick fix would stop drug firms bending the truth

February 26th, 2008 by Ben Goldacre in bad science, big pharma, hiding data, regulating research, systematic reviews | 44 Comments »

It’s not just about Prozac. Our failure to properly regulate testing in the pharmaceutical industry has devastating costs

Ben Goldacre
The Guardian,
Wednesday February 27 2008

Yesterday the journal PLoS Medicine published a study which combined the results of 47 trials on some antidepressant drugs, including Prozac, and found only minimal benefits over placebo, except for the most depressed patients. It has been misreported as a definitive nail in the coffin: this is not true. It was a restricted analysis [see below] but, more importantly, on the question of antidepressants, it added very little. We already knew that SSRIs give only a modest benefit in mild and moderate depression and, indeed, the NICE guidelines themselves have actively advised against using them in milder cases since 2004.

But the real story goes way beyond the question of Prozac. This new study — published, ironically, in an open access journal — tells a fascinating story of buried data, and of our collective failure, as a society, over half a century, to adequately regulate the colossal $550bn pharmaceutical industry.

The key issue is simple. In any situation, to make any kind of sensible decision about which treatment is best, a doctor must be able to take into account all of the available information. But drug companies have repeatedly been shown to bury unflattering data.

Sometimes they bury data which shows drugs to be actively harmful. This happened in the case of Vioxx and heart attacks, and SSRIs and suicidal thoughts. Such stories feel, intuitively, like cover ups. But there are also more subtle issues at stake, in the burying of results showing minimal efficacy, and these have only been revealed through the excellent investigative work of medical academics.

One example came just last month. As I reported at the time, a paper in the New England Journal of Medicine dug out a list of all trials on SSRIs which had ever been registered with the Food and Drug Administration, and then went to look for those same trials in the academic literature. There were 37 studies which were assessed by the FDA as positive and, with a single exception, every one of those positive trials was written up, proudly, and published in full. But there were also 33 studies which had negative or iffy results and, of those, 22 were simply not published at all — they were buried — while 11 were written up and published in a way that portrayed them as having a positive outcome.

The new study, published this week, has analysed all of the data from the FDA, using the Freedom of Information Act to obtain the results for some of the trials. That medical academics should need to use that kind of legislation to obtain information about trials on pills which are prescribed to millions of people is absurd. More than that, it breaks a key moral contract between patient and researcher.

When a patient agrees to participate in a clinical trial, they give their consent on the understanding that their information will be used to increase the sum of our knowledge about treatments, to ensure that other people, in the future, will be treated more effectively. Burying unwelcome results is an unambiguous betrayal of their trust and generosity.

And yet we have known about this happening for a long time. The first paper describing “publication bias” — where studies with negative results tend to get forgotten — was in 1959. And there are two very simple and widely accepted solutions, which have been discussed in the academic literature at length since the 80s, but which are still not fully in place.

The first is obvious. Nobody should get ethical approval to perform a clinical trial unless there is a clear undertaking that the results will be published, in full, in a publicly available forum, and that the researchers will have full academic freedom to do so. Any company trying to silence academics should be named and shamed, and even attempting to do so should be a regulatory offence.
That’s the butch solution. But there is also a more elegant one, which is arguably even more important: a compulsory international trials register. Give every trial an ID number, so that we can all see that a trial exists, so trials can’t quietly go missing in action, and so we know when and where to look if they do.

The pharmaceutical industry is very imaginative, after all, and registers also help to manage some of the other less obvious ways in which they distort the literature. For example, sometimes companies will publish flattering data two or three times over, in slightly different forms, as if it came from different studies, to make it look as if there are a lot of different positive findings out there: registers make this instantly obvious.

Worse than that, companies often move the goalposts, and change the design of a trial after the results are in, to try and massage the findings. This, again, is impossible when the protocol is registered before a trial begins.

This is just a taste of the tricks of their trade (although I’ve posted a long reading list below if your interest is piqued). Alongside these deep-rooted, systematic problems with the pharmaceutical industry, the single issue of SSRI antidepressants, and these new findings, becomes almost trivial. Biased under-reporting of clinical trials happens in all areas of medicine. It wastes money, and it costs lives. It is unethical, and it is indefensible. But most damningly of all, it could be fixed in a legislative trice.

Ben Goldacre is a medical doctor and writes the Bad Science column in the Guardian. His book Bad Science will be published by 4th Estate later this year.

Reading List:

If this kind of thing interests you I heartily recommend the genuinely brilliant book Testing Treatments by Iain Chalmers, Hazel Thornton, and Imogen Evans. It’s written specifically for a lay audience.


It’s nice to have a book, but the whole thing can also be downloaded unabridged (in English and also in Arabic, helpfully) from here:


For something more advanced, or rather, if you prefer things which are closer to textbooks, then How to Read a Paper: The Basics of Evidence-Based Medicine by Prof Trisha Greenhalgh is a highly readable, industry standard, medical student text. As you are aware, the whole of badscience.net is really just an excuse to bring the joys of evidence based medicine and primary academic literature to the masses, through bitchiness and sarcasm.

Don’t forget that the study that this whole story was about is published in an open access journal so you can read the full text for free.


The NICE Guidelines say (since 2004):

“Antidepressants in mild depression
• Antidepressants are not recommended for the initial treatment of mild depression, because the risk–benefit ratio is poor.”

One more thing:

It seems to me that the media walk around with big sticky labels marked “good” and “bad”. This meta-analysis is a fascinating bit of work, and it tells a damning story about the pharmaceutical industry’s burying data, but it has also been ridiculously misreported, in the first day of its life.

I’m not going to list all the errors here – perversely – because I know a lot of journalists read this blog, and I want them all to get their stories as wrong as possible so I have the option of writing about them this weekend. Oh, curse my conscience:

1. It was not a study of SSRI antidepressant drugs: neither nefazodone nor venlafaxine is an SSRI.

2. It did not look at all the trials ever done on these drugs: it looked only at the trials done before the drugs were licensed (none of them more than six weeks long), and specifically excluded all the trials done after they were licensed. It is common for quacks and journalists to think that the moment of licensing is some kind of definitive “it works” stamp of approval. It’s not; it’s just the beginning of the story of a drug’s evidence, usually.

3. It did not show that these drugs have no benefit over placebo: it showed that they do have a statistically significant (“measurable”) benefit over placebo, but for mild and moderate depression that benefit was not big enough for most people to consider it clinically significant, ie there was an improvement, but not enough points improvement on a depression rating scale for anyone to get too excited over it.

4. I could go on.

And remember kids, placeboes are amazingly powerful.

If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!

44 Responses

  1. ralaven said,

    February 27, 2008 at 4:20 am

    It’s amazing how it has been presented as something unexpected and new when really all it has shown is that antidepressants are best kept for severe depression – which we knew already.

    If we look at the studies the other way round, i.e. the antidepressants are the gold standard we’re trying to reach, as is often the case in clinical trials, then they clearly show that we can get similar improvements with placebo, without the side effects. The only problem is how we give the placebo.

  2. jackpt said,

    February 27, 2008 at 4:41 am

    To be fair to the hacks, and I hate to be, in the editors’ summary (background) on the linked PLoS Medicine page it says:

    SSRIs are the newest antidepressants and include fluoxetine, venlafaxine, nefazodone, and paroxetine.

    Which is wrong, as you point out above. I think you’ll do a better job summarising the paper than the editors’ summary on that page.

  3. NeilHoskins said,

    February 27, 2008 at 8:23 am

    Whenever people have a go at the drug companies, I reflect on the years I wasted on ‘talking therapies’, and all the GPs telling me that my symptoms were a reaction to what was happening in my life rather than the other way around. Then, one day, a twelve-year-old locum put me on Prozac and I haven’t looked back since. From where I’m sitting, doctors and GPs look pretty thick: I got my life back thanks to Eli Lilly and the twelve-year-old.

  4. Diotima said,

    February 27, 2008 at 9:45 am

    Ben: I presume that if you had a severely clinically depressed patient you would not tell the patient to book an appointment with Derek Draper, or Darian Leader? Or perhaps to try a full Kleinian analysis? If the use of SSRIs were to be confined to the treatment of the severely depressed then there would be no story. Even this meta-analysis accepts that placebos have little effect on the very severely depressed. Of course letting Darian Leader introduce this piece is an example of shameless and intentional bias by the G2 editor.

  5. drunkenoaf said,

    February 27, 2008 at 9:46 am

    Yes, but Neil, were you severely depressed (when SS(N)RIs are more effective)? Ben has just said that these do show some efficacy in mildly depressed patients– just not much. “And remember kids, placeboes are amazingly powerful”.

  6. frontierpsychiatrist said,

    February 27, 2008 at 10:26 am

    There are two other good books on sharp practices by the pharmaceutical industry.

    ‘Is Psychiatry for sale’ is an exploration of how psychiatrists have been manipulated and co-opted by the pharmaceutical industry


    ‘Big Pharma’ by Jackie Law talks about profits made (massive), dubious practices (both in research and their role in the creation of new diseases)


    One more thing to say is that there is a lot of talk about mild/moderate/severe depression, without any mention of where these distinctions come from. They’re classifications from the ICD-10, and denote the number and duration of symptoms leading to a diagnosis of depression. If we say that these drugs actually only have an effect on those with ‘severe depression’ then we mean a relatively small number of people with depression so severe that they’ve probably stopped going to work and may have poor self-care and have stopped eating etc. Generally speaking this is not the sort of person these drugs have been marketed at all these years.

    Can the NHS ask for its money back?

  7. NeilHoskins said,

    February 27, 2008 at 10:40 am

    What does “severely depressed” mean? To classify depression into “mild”, “moderate”, and “severe” is, fairly typically of the medical profession, simplistic nonsense.

  8. RS said,

    February 27, 2008 at 11:07 am

    Well in the case of this study it means Hamilton rating scale for depression score – so it isn’t nonsense at all.

  9. Arkadyevna said,

    February 27, 2008 at 11:15 am

    Even putting aside the factual inaccuracies, I found the reporting of this story very irresponsible in many quarters.

    I heard several news bulletins on different radio stations (Radio 1, Radio 4, Classic FM) that informed me how ineffective anti-depressants are without adding a caveat that anyone taking them should not stop them without consulting their doctor.

    Even if their efficacy is questionable, I don’t think the effects of sudden withdrawal are. It’s not hard to imagine someone hearing that and thinking there’s no point carrying on with their medication, especially if they are suffering any side effects.

  10. RS said,

    February 27, 2008 at 11:16 am

    Something that is interesting here is that a Cohen medium effect (NICE’s criterion for its guidance on treating depression) has somehow been given a brand new interpretation and significance following this study, such that anything with less than a medium effect size is now considered by the media, commentators, and ultimately the public, as being no difference at all.

    I wonder what people would say if we now retroactively declared that the anti-cholinesterase drugs therefore don’t work at all since they have an effect size below Cohen’s medium level. No need to worry about cost-effectiveness, they don’t work by our brand new but suddenly widely accepted arbitrary criterion.

  11. NeilHoskins said,

    February 27, 2008 at 11:41 am

    @RS “…it isn’t nonsense at all…”. Wow, you guys really are sure of yourselves, aren’t you? Does that make the placebos work better, perhaps? OK, I’ll write on my blog a list of all the ineptitudes and incompetencies my wife & I have suffered from the medical profession over the years; this isn’t the place. And yes, we’re both well now, thank you very much, because of the drugs we’re on.

  12. ulysses2031 said,

    February 27, 2008 at 12:48 pm

    The Independent suggests here



    ‘…the Government said it had been told that compelling the industry to publish trial data would not be allowed and it was instead pursuing a voluntary approach, developing a “searchable register” of all trials that have taken place in the UK and pressing the EU to make its own confidential register public.’

    Does anyone have any idea what ‘would not be allowed’ means in this context?

  13. Wonko said,

    February 27, 2008 at 4:57 pm

    The Independent seems to be the only news outlet to pick up the failures of government here. Nu Labour made forcing Big Pharma to publish their data a manifesto commitment in 2005. A cynic might wonder if this is the same Nu Labour that receives a fortune in donations toward its electioneering from Big Pharma.

  14. chai wallah said,

    February 27, 2008 at 6:44 pm

    “I heard several news bulletins on different radio stations (Radio 1, Radio 4, Classic FM) that informed me how ineffective anti-depressants are without adding a caveat that anyone taking them should not stop them without consulting their doctor.”

    Radio 4 news by 6pm had added clear advice to people to not stop taking antidepressants they had been prescribed, and to talk to their doctor if they had any concerns. More of the journalistic coverage ought to have been this responsible.

  15. Anon_Academic said,

    February 27, 2008 at 8:45 pm


    A “statistically significant” difference between placebo and antidepressant does not mean it is a “measurable” difference, as said above.

    That is “Bad Statistics.”

    A statistically significant result means that, if it were true that Placebo = Antidepressant, there would be less than a 5% chance of seeing data at least as extreme as the data observed.

    That’s all it means. Period. And that’s why clinical significance is so important- if you enroll 300 people in an antidepressant study, even a very tiny difference between placebo and antidepressant will be “statistically significant.”

    See, Any Undergraduate Statistics Book
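    Anon_Academic’s point about sample size can be made concrete with a short sketch. For two equal groups of n patients, and assuming equal variances (a simplifying assumption, not a figure from the paper), the two-sample t statistic for a standardised difference d is roughly d·√(n/2) — so the very same small effect sails past the conventional cutoff once the trial is big enough:

```python
import math

def t_from_effect(d, n_per_group):
    """Two-sample t statistic for a standardised mean difference d,
    with n_per_group subjects in each arm and equal variances assumed."""
    return d * math.sqrt(n_per_group / 2.0)

# The meta-analysis's overall standardised difference of d = 0.32,
# in a trial with 150 patients per arm:
print(round(t_from_effect(0.32, 150), 2))  # above the ~1.96 cutoff: "significant"

# Exactly the same effect in a small trial of 20 per arm:
print(round(t_from_effect(0.32, 20), 2))   # below 1.96: "not significant"
```

    The p-value, in other words, tracks the sample size as much as the size of the effect; only the effect size itself says anything about clinical significance.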

  16. RS said,

    February 27, 2008 at 9:08 pm

    “A “statistically significant” difference between placebo and antidepressant does not mean it is a “measurable” difference, as said above.”

    What does that mean? Obviously the difference is measurable in this study since it is quantified, and is in fact a Cohen ‘small’ effect overall, rising to ‘medium’ for severe depression.

    The study says:

    “weighted mean improvement was 9.60 points on the HRSD in the drug groups and 7.80 in the placebo groups, yielding a mean drug–placebo difference of 1.80 on HRSD improvement scores…the standardized mean difference, d, mean change for drug groups was 1.24 and that for placebo 0.92…Thus, the difference between improvement in the drug groups and improvement in the placebo groups was 0.32, which falls below the 0.50 standardized mean difference criterion that NICE suggested.”

    Where the NICE criterion is Cohen’s classification of a ‘medium’ effect size (an effect over .2 being classified as ‘small’, and over .8 being called ‘large’).

    Clinical significance is indeed important, but you have to both state what clinical significance is, and how you came to that conclusion.
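    The arithmetic quoted above is easy to check for yourself (a sketch; the figures are the weighted means reported in the paper, and 0.50 is the NICE/Cohen “medium” threshold):

```python
drug_hrsd, placebo_hrsd = 9.60, 7.80   # weighted mean HRSD improvement per group
drug_smd, placebo_smd = 1.24, 0.92     # standardised mean improvement per group
NICE_CRITERION = 0.50                  # Cohen's 'medium' effect size

raw_diff = drug_hrsd - placebo_hrsd    # drug-placebo gap in HRSD points
smd_diff = drug_smd - placebo_smd      # drug-placebo gap as a standardised difference

print(round(raw_diff, 2), round(smd_diff, 2), smd_diff < NICE_CRITERION)
```

    So there is a measurable drug-placebo difference; it simply falls short of the threshold NICE chose for clinical significance.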

  17. drewaight said,

    February 27, 2008 at 9:19 pm

    Every single registered clinical trial outcome should be available to the public. Enough said.

    I don’t personally have any affiliation with the pharmaceutical industry and truth be told I am enjoying just a bit of schadenfreude at watching them go down the tubes lately.

    On the other hand I think it’s incredibly myopic of anyone to assume that the drug discovery industry should somehow be beholden to the same standards as academic scientists or the Hippocratic oath or other such nonsense. This is America people, we eat sleep and bleed capitalism. Need I remind anyone just how much money in time and research it costs to develop a drug and get it through the FDA. Is anyone honestly surprised that pharmaceutical companies try to suppress negative data after they have potentially sunk hundreds of millions of dollars taking a therapeutic through development? And the return on the mind-blowing amount of investment is constantly under attack by members of the public who are incapable of understanding that 99% of the cost of their pill goes into recouping the losses from those other 100 pills that didn’t make it to market. It seems like, because it’s connected to medicine, the public expects the drug development industry to somehow conjure up capital out of thin air then give away the products for the good of mankind.
    I hate to point it out but that notion is totally un-American. If the government regulates the bottom-line return of the pharmaceutical industry without protecting the intellectual property or otherwise adding incentive, the industry will go down, then we can all buy our therapeutics from nutritional supplement makers.

    The business of medicine sucks, there’s something inherently wrong with putting dollar signs on diseases. Furthermore, since the public expects some level of academic integrity in the business of therapeutics, perhaps our great nation should divert a tiny fraction of its defense budget (which for some reason never invokes the public ire), and dedicate those funds to discovering and/or validating therapeutics. As such “Big Pharma” can be reduced to the effective investment banks specializing in healthcare which they are best at, and which for some reason are never the object of serious public scrutiny.

  18. inspiros said,

    February 27, 2008 at 11:10 pm

    Poor reporting of the study. On the one hand excellent to see this make headline news – and seeing Professor Irving Kirsch on the BBC was a delight – as I’ve admired his work for a long time.

    On the other hand real concerns that people will stop taking their medication.
    Although few seemed concerned that the placebo effect would now be deflated due to publication of this story.

    I can imagine some very interesting conversations taking place over the next weeks….

    Patient: I’ve stopped taking them because the study says they don’t work.
    Doctor: I know you think this doesn’t work anymore because of that story but it DOES work. It works because you believe it works. Even if you’ve stopped believing it works.

    The study doesn’t say that anti-depressants don’t work. It says that they work because we expect them to work. And the side effects make them a very active, and therefore effective, placebo.

    The problem, as stated by ralaven, is “how do we give effective placebo”.

    Given the impact of placebo (err… can we have a competition to come up with a better name than that? Isn’t that a good column idea Ben?) – it is a travesty that research and training are not spent in this area. What if the doctor-patient communications were all optimized such that expectancy of positive results was increased – and all patients respond to treatments better.

    I’m surprised that no-one seems to have acknowledged the level of work that Kirsch has been doing in this area – probably the leading expert on placebo, expectancy and… hypnosis (err… what???)


    Ralaven wonders “how do we give effective placebo”.
    Kirsch himself calls hypnosis “a non-deceptive mega placebo” (Clinical Hypnosis and Self-Regulation; Kirsch et al, 1998).

    There is your answer ravalen.

  19. ralaven said,

    February 27, 2008 at 11:30 pm

    I can’t get the Kirsch article – any one have access to an abstract or more? I still don’t see how giving someone something that you know works only because you’ve persuaded the patient that it does is ‘non-deceptive’, unless it’s the practitioner who’s been hypnotised to believe it.

    My other problem with the reports is that they always discuss changing from drugs to talking therapy. That has nothing to do with this study which doesn’t look at the effectiveness of other treatment modalities. Has a similar study been done looking at talking therapy against placebo pills, or even SSRIs? I suspect not and I wouldn’t be surprised to discover that if there were the results would be similar.

    It’s nice to see the great god of statistical significance being taken down a peg or two. With all biological research, but particularly medicines, you have to have an effect that is biologically, or clinically, significant, and the trial should be designed to find that effect. You set the power of the study so that it can find a biologically important result.

  20. Michelle Disraeli said,

    February 28, 2008 at 1:12 am

    I had a flick through the study in question, and I couldn’t find the one thing I was looking for.

    Anyone who has read into homeopathy trials has probably read at some point the BMJ article on how to conduct a meta-analysis, which describes a particular sort of graph that compares efficacy with study sizes, and details how to correct for bias in the sample study set. As it’s rather late, you will forgive me for not looking up the study reference and all the proper terminology.

    Having read that now-highly-cited guide in the BMJ, the immediate thing I looked for in this PLoS Medicine meta-analysis was that very same style of graph. Not being well versed in medical papers, I could well have missed it, but it did seem to be completely absent. To me, this raises big questions about the accuracy of the analysis.

    Of course, the worst thing about all of this has been the frankly dangerous media coverage of the subject 🙁

  21. psybertron said,

    February 28, 2008 at 3:45 am

    Two points Ben,

    This time unequivocally I say a great piece. No caveats. Good to see that deep background behind the published news story.

    One additional thought on these various “bias” effects in conducting and publishing research: in cases where it becomes impractical (for sample size / population distribution reasons) to remove all bias, the values / interests implicit in any bias should be made explicit. (In the same way as some value-based bias arises in identifying and funding specific research opportunities in the first place.)

    Interesting too to see the comments about the placebo downside of bad-publicity on the actual drugs. Ironic given that the psychological effects are presumably particularly significant for a condition that has real psychological aspects to start with.

  22. profnick said,

    February 28, 2008 at 8:15 am

    Wonder why it took almost a year from submission of the paper to acceptance? Were there lots of changes required?

  23. csrster said,

    February 28, 2008 at 8:37 am

    “This is America people”

    It is ?

  24. RS said,

    February 28, 2008 at 9:53 am

    Michelle – were you after forest or funnel plots of the study? I’ve put a few figures up here that may be of some interest.

  25. used to be jdc said,

    February 28, 2008 at 1:22 pm

    Re Wonko’s comment here: New Labour have probably broken lots of manifesto pledges. I doubt that they would have needed any encouragement from Big Pharma to drop that particular one. I think that political parties in general are more than willing to drop manifesto pledges and break promises that no longer suit.

    One example is Labour’s unfulfilled 1997 manifesto pledge to hold a referendum on electoral reform. Pre-election they made a promise to change the system. Post-landslide victory they changed their minds.

  26. Grathuln said,

    February 28, 2008 at 1:22 pm

    While I agree “big pharma” needs to get its house in order with respect to drug trials, it is important to know that Scientology is involved in a serious war against the psychiatric profession. Those who already know may think it a joke, but it is a highly organised and well-funded operation; see video link.

    (8 mins footage from Scientology’s 2006/2007 review)

    You can read about it here:


    or simply google: “scientology psychiatry war”

    Their intent is not to provide an alternative to psychiatry and regular psychotherapy but to replace them with Dianetics and Scientology. In the words of their leader, David Miscavige, it is a campaign for the “world wide obliteration of Psychiatry”.

    A worldwide protest against scientology took place on the 10th of Feb 2008 organised on the Internet by Netcitizens concerned by scientology’s insidious practices. This brought around 7000 people to the streets in around 150 locations around the world, 500 in London alone.

    Another protest is being organised for the 15th March 2008.

    Google: anonymous scientology for details. Anonymous is no longer just a small group of hackers. Scientology is not a joke or a cult to be ignored. The Germans are right to be concerned by it and so should any one concerned about slipping into the age of endarkenment.

  27. RS said,

    February 28, 2008 at 3:20 pm

    If anyone’s interested I’ve also put up an analysis of final Hamilton depression scores instead of standardised mean differences here.

    Interestingly paroxetine and venlafaxine exceed the NICE criteria when analysed like this.

  28. inspiros said,

    February 28, 2008 at 3:24 pm


    1/ The Kirsch quote is from his book, not an online article. Google “non-deceptive placebo” – it’s all Kirsch.

    2/ Certainly there have been many meta-analyses done on CBT – and most therapies. Kirsch himself has done several (interestingly comparing CBT alone with CBT combined with hypnosis – with a significant increase in effectiveness – I think the treatments were for obesity.)

    The American Psychological Association has their list of Empirically Validated Treatments (led by Diane Chambless et al) – and of course NICE too in their own way.

    I don’t know whether the same sort of publication bias has happened with “talking therapies” as with drugs.
    Certainly I don’t think you are going to need to go to The Freedom of Information Act to get details of the studies (however I may be wrong.)
    Placebo control also becomes somewhat different – but you do not have the same “active placebo” side-effect issues.

    3/ You don’t need to give a placebo pill to incur the placebo effect. Just engage in the sustained partnership model with a view to.. e.g. recommending lifestyle changes.

    “Perhaps the best single concept of how to structure one’s interactions to maximize the placebo response is that of the “sustained partnership” between physician and patient.
    Sustained partnership consists of 6 physician characteristics:

    * Interest in the whole person
    * Knowing the patient over time
    * Caring, sensitivity, and empathy
    * Viewed as reliable and trustworthy by the patient
    * Adapts medical goals of care to the patient’s needs and values
    * Encourages the patient to participate fully in health decision making

    These 6 elements were selected because empirical studies link them to superior health outcomes in practice settings.(Leopold N, Cooper J, Clancy C. Sustained partnership in primary care. J Fam Pract 1996; 42:129-37.)”

    Excerpt from: The Placebo Response; Journal of Family Practice, July, 2000 by Howard Brody.

    Problem is… with 8 minute average consultations it’s a bit hard.

  29. Michelle Disraeli said,

    February 28, 2008 at 10:27 pm

    RS – Thank you, the funnel plot was exactly what I was hoping to see! Your further analysis is very impressive and useful, well done!

    Thom – If I understand you correctly, you are suggesting that the sole use of the funnel plot is to indicate ‘missed’ studies. Certainly in the paper I referred to, that was their application. But the exact same methodology, as I understand it, is also an indication of study quality, and a useful means to present data on the spread of conclusions based upon study sizes. Given software tools can easily produce these plots, I can see little reason to exclude them.

  30. ralaven said,

    February 29, 2008 at 6:35 pm

    Inspiros – I think you missed my point re CBT – probably because I didn’t express myself very well. As is well known, the data for CBT show that it is as effective as antidepressants (RCPsych website), so what I was asking was whether CBT had been evaluated using these strict criteria. The presentation of the data has suggested that because antidepressants don’t work we should be switching to CBT, when the published data would suggest that, using the same criteria, CBT would also be found to be no better than placebo.

  31. allshallbewell said,

    March 1, 2008 at 7:42 pm

    Cohen’s d is the difference between means divided by the pooled SD (or the SD of one of the groups depending on who you read). The d used in this study was the difference between means divided by the SD of the difference score. So the criterion of ‘0.5 = medium/clinically interesting’ might not apply after all…

  32. thom said,

    March 1, 2008 at 8:00 pm

    allshallbewell: The criterion ‘0.5 = medium/clinically interesting’ is just plain silly. A criterion for clinical significance shouldn’t depend on the variability of the measurement (because that depends on factors of clinical importance) and shouldn’t be identical for all clinical needs. Using the raw HSRD is preferable.

    Cohen originally suggested using the control/baseline group SD, but the pooled SD tends to be used if it seems likely that the groups have equal or near-equal variance, or if there isn’t an obvious control/baseline. Arguably, for comparison of differences one should use the original control group SD – not that of the differences – to keep all the effects (drug, placebo, and drug-placebo) on a similar scale.
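For what it’s worth, the two conventions are easy to compare numerically; a toy sketch with invented summary statistics:

```python
import math

# Invented group summaries: mean change, SD, and n for drug and placebo.
mean_drug, sd_drug, n_drug = 10.0, 7.0, 100
mean_placebo, sd_placebo, n_placebo = 8.0, 8.0, 100

# Cohen's d with the pooled SD (the usual choice when variances look similar).
sd_pooled = math.sqrt(
    ((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2)
    / (n_drug + n_placebo - 2)
)
d_pooled = (mean_drug - mean_placebo) / sd_pooled

# Cohen's original suggestion: standardise by the control/baseline group SD.
d_control = (mean_drug - mean_placebo) / sd_placebo

print(f"d (pooled SD) = {d_pooled:.3f}, d (control SD) = {d_control:.3f}")
```

With similar variances the two versions barely differ; the choice matters more when the groups’ SDs diverge.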

  33. allshallbewell said,

    March 1, 2008 at 8:23 pm

    Thom: Yes – I agree that the medium effect size shouldn’t be used in this way.

    But my point was that the effect size used in this study wasn’t Cohen’s d. The SD of the difference score is not the same as the pooled SD, or the SD of one of the groups.

  34. RS said,

    March 1, 2008 at 9:55 pm

    You can have a Cohen d for change scores; it does make sense.

    But the effect size in this study actually seems to be the difference between the standardised mean differences between baseline and outcome for each group, i.e. one SMD minus the other SMD, rather than one group’s mean HRSD change minus the other’s.

  35. RS said,

    March 1, 2008 at 11:50 pm

    Therefore their d score is not a real d but a difference in SMDs.

  36. allshallbewell said,

    March 2, 2008 at 12:03 am

    RS: I’m not sure that the d you get from change scores is called Cohen’s d, even if it does make sense. But anyway:

    (a) Cohen’s d is the difference in means/pooled SD (or SD of the control group). So d=0.5 indicates that the mean of one group is 0.5SD higher than the other. Some might call this “interesting”.

    (b) If d is calculated from change score/SD of change score, then d=0.5 doesn’t mean the same thing, mainly because the SD of the change score depends on the correlation between baseline and outcome scores.

    As you point out, this study calculates (b) for the treatment group and the control group for each study, then subtracts them. As far as I can tell, this gives a d equivalent to (a) only if the SDs of the two groups are equal AND the correlation between baseline and outcome scores is r=0.5 for both groups.

    So this seems a very silly way to go about it: you can’t really compare these ds within one study, let alone across several. And I think the NICE guidelines refer to (a) not (b), but I can’t be sure.

  37. RS said,

    March 4, 2008 at 6:27 pm

    Oh, I agree, I just mean that you can calculate a Cohen d score on raw HRSD change scores (as I do here with a Hedges adjusted g).

    But, as you say, they didn’t do that, even though they had the data to do it, and instead looked at the difference between the groups’ SMDs – I don’t even want to attempt to interpret what that measure is supposed to mean, and I’m certainly not sure it should be interpreted as being the same as Cohen’s d.

    Ah, crap, looking at the paper they report:

    “We conducted two types of data analysis, one in which each group’s change was represented as a standardized mean difference (d), which divides change by the standard deviation of the change score (SDc) [10]”

    But that, as you point out, isn’t a proper SMD (it isn’t even a Z-score) – and the paper referenced is about the Hedges g! I’m confused. If we look at their Table 1, we can see that calculating the standard deviation from the 95% confidence intervals of the ‘d’ gives us values of the same order as ‘d’ itself (e.g. study ‘ELC 19’, ‘d’=1.44, SD=1.47); dividing the raw change score by ‘d’ gives us 8.7.

    ‘d’ is supposed to be the raw change divided by its SD (SDr). We can calculate the SD of ‘d’ itself from its 95% confidence interval (SDd). So, in theory, if we multiply ‘d’ and SDd by SDr we should get back the raw change and its standard deviation – which is SDr itself. That means SDd would have to be 1, which it isn’t.

    So ‘d’ can’t be the raw change divided by SDr, but is something else. I’m guessing it really is d or some variant.
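For anyone wanting to repeat RS’s back-calculation, the usual normal-theory recipe for recovering a standard error and SD from a reported 95% CI looks like this (numbers invented):

```python
import math

# Invented: a reported mean change with its 95% CI and sample size.
n = 25
lower, upper = 0.9, 2.0  # hypothetical 95% confidence interval

# For a normal-theory interval, half-width = 1.96 * SE and SE = SD / sqrt(n),
# so both can be recovered from the interval width.
se = (upper - lower) / (2 * 1.96)
sd = se * math.sqrt(n)
print(f"SE = {se:.3f}, SD = {sd:.3f}")
```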

  38. ayupmeduck said,

    March 6, 2008 at 5:29 pm

    Not sure exactly what tighter trial laws mean in this case:


  39. emilypk said,

    March 6, 2008 at 6:50 pm

    Witness an utter failure to understand the difference between a negative result and a failed experiment: www.null-hypothesis.co.uk/science/strange-but-true/joking/bizarre_journal_of_negative_results

  40. Ben Goldacre said,

    April 5, 2008 at 7:27 pm

    mm, i wish i had noticed this discussion developing earlier. i’m not very impressed by people handing out medical advice to strangers online, and without wanting to be critical, i’ve a strong preference for people not discussing their personal medical problems here, largely because of that. ho hum.


  41. diohdan said,

    May 27, 2008 at 2:42 am

    Current Depression Medications: Do The Benefits Outweigh the Harm?

    Presently, for the treatment of depression and other conditions that some claim are mental disorders (as they are questionable), selective serotonin reuptake inhibitors are the drugs of choice for most prescribers. Such meds, meds that affect the mind, are called psychotropic medications. This class also includes a few meds that add a norepinephrine reuptake inhibitor to the SSRI; these are referred to as SNRI medications. Examples of SNRIs are Cymbalta and Effexor. Some consider these classes of meds the next generation after benzodiazepines: there are similarities in how widely they are taken, and in their continued use and popularity, though the mechanisms of action are clearly different.

    Some Definitions:

    Serotonin is a neurotransmitter thought to be associated with mood. It was first suggested in the mid-1960s that this neurotransmitter may play a role in moods and emotions in humans. Yet to this day, the correlation of serotonin with such behavioral and mental conditions is only theoretical. In fact, the psychiatrist’s bible, the DSM, states that the definite etiology of depression remains unknown. So a chemical imbalance in the brain is not proven to be the cause of mood disorders; it is only suspected, with limited scientific evidence. Indeed, diagnosing diseases such as depression is based on subjective assessment only, as interpreted by the prescriber, so one could question the accuracy of such diagnoses.

    Norepinephrine is a stress hormone which many believe helps those who have mood disorders such as depression – the theory being that, by adding this hormone, the SSRI will be more efficacious for the patient prescribed such a med.

    And depression is only one of the mood disorders that may exist, yet possibly the most devastating one. Diagnosis of these mood conditions lacks complete accuracy, as they can only be defined conceptually, so the diagnosis depends on subjective criteria, such as questionnaires. There is no objective diagnostic test for depression. Yet the diagnosis of depression in patients has increased quite a bit over the decades. Few would argue that depression does not exist, yet one may wonder how many people are really depressed.

    Several decades ago, less than 1 percent of the U.S. population was thought to have depression. Today, it is believed that about 10 percent of the population has depression at some time in their lives. Why this condition has grown so much remains unknown and is subject to speculation. What is known is that psychiatry is the specialty most paid by certain pharmaceutical companies, ultimately for support of their psychotropic meds, as this industry clearly desires market growth for these products. Regardless, SSRIs and SNRIs are the preferred treatments if depression or other mood disorders are suspected by a health care provider. Yet these meds are clearly not the only treatments, medicinal or otherwise, for depression and related disease states.

    Over 30 million scripts for these types of meds are written annually, and the franchise is worth around 20 billion dollars a year, with some of the meds costing over 3 dollars per tablet. There are about ten different SSRI/SNRI meds available, many of which are now generic, yet essentially they appear similar in their efficacy and adverse events. The newest one, an SNRI called Pristiq, was approved in 2008 and is reportedly being promoted as a treatment for menopause. The first of these SSRI meds was Prozac, which became available in 1988 and was greatly praised for its ability to transform the lives of those who took it in the years that followed. Some termed Prozac ‘the happy pill’. As the years went by and more drugs in this class became available, Prozac remained the one many doctors preferred for children. A favorable book about this medication was published soon after it became so popular.

    Furthermore, these meds have received additional indications besides depression for some really questionable conditions, such as social phobia and premenstrual syndrome. With the latter, I find it hard to believe that a natural female experience can be considered a treatable disease. Social phobia is, in my opinion, a personality trait – what has been called shyness, or what Dr. Carl Jung termed introversion – so this probably should not be labeled a treatable disease either. There are other indications for certain behavioral manifestations as well with the different SSRIs and SNRIs, so the market for these meds continues to grow. Yet it is believed that they are effective in only about half of those who take them. The makers of such meds seem to have created conditions besides depression to expand the use of these medications, and have been active in forming symbiotic relationships with disease-specific support groups, for instance by funding screenings for the indicated conditions of their meds – screenings of children and adolescents in particular, I understand. As a layperson, I consider such activities dangerous and inappropriate for several reasons.

    The danger and concern primarily involve the adverse effects associated with these types of meds, which include suicidal thoughts and actions, violence (including acts of homicide), and aggression, among others; the makers of such drugs are suspected of having known about these effects and of not sharing them with the public in a timely manner. While most SSRIs and SNRIs are approved for use in adults only, prescribing these meds to children and adolescents has drawn the most attention and debate, from the medical profession as well as citizen watchdog groups. The reason is the potential off-label use of these meds in this population; yet what may be most shocking is that some of the makers of these meds did not release clinical study information about the risks of suicide and other adverse events in these populations, including the decreased efficacy of SSRIs in general, believed to be less than 10 percent more effective than placebo. Paxil caught the attention of the government over this issue of data suppression some time ago – Eliot Spitzer specifically, as I recall.

    And there are very serious questions about the effects of SSRIs on children and adolescents. For example, do SSRIs correct, or create, brain states considered not within normal limits, which could cause harm rather than benefit? Are adolescents really depressed, or just experiencing what was once considered normal teenage angst? Do SSRIs affect brain development and the identity of such young people? Do adolescents in particular become dangerous or bizarre because SSRIs interfere with the myelination still occurring in their developing brains? No one seems to know the answers to such questions, yet the danger associated with the use of SSRIs does in fact exist – observed in some who take such meds, though not all. Health care providers should perhaps be much more aware of these possibilities.

    Finally, if SSRIs are discontinued – abruptly in particular, rather than gradually – the withdrawals are believed to be quite brutal, and may themselves be a catalyst for suicide: not only are these meds habit-forming, but discontinuing them, I understand, leaves the brain in a state of neurochemical instability, as the neurons recalibrate after the SSRI has altered the consumer’s brain. This occurs to some degree with any psychotropic med, yet with some classes, such as SSRIs, the withdrawals can, it is believed, become dangerous for the victim.

    SSRIs and SNRIs have been claimed by doctors and patients to be extremely beneficial for patients’ well-being regarding the mental issues for which they are used, yet the risk factors associated with this class of medications may outweigh any perceived benefit. Considering the lack of efficacy demonstrated objectively, along with the deadly adverse events only recently brought to public attention, other treatment options should probably be considered – but that is up to the discretion of the prescriber.

    “I used to care, but now I take a pill for that.” — Author unknown

    Dan Abshear


  44. Delwin Joel said,

    March 2, 2015 at 5:49 am

    To reply to Grathuln’s comment of February 28, 2008: the correct current link for Rick’s cult site in that reference is www.culteducation.com/group/1284-scientology.html