Ben Goldacre, The Guardian, Saturday 21 May 2011
Here’s no surprise: beliefs which we imagine to be rational are bound up in all kinds of other stuff. Political stances, for example, correlate with various personality features. One major review in 2003 looked at 38 different studies, containing data on 20,000 participants, and found that overall, political conservatism was associated with things like death anxiety, fear of threat and loss, intolerance of uncertainty, a lack of openness to experience, and a need for order, structure, and closure.
Beliefs can also be modified by their immediate context. One study from 2004, for example, found that when you make people think about death (“please briefly describe the emotions that the thought of your own death arouses in you”) they are more likely to endorse an essay discussing how brilliant George Bush was in his response to 9/11.
A new study looks at intelligent design, the more superficially palatable form of creationism, promoted by various religious groups, which claims that life on earth is too complex to have arisen through evolution and natural selection. Intelligent design implies a reassuring universe, with a supernatural creator, and it turns out that if you make people think about death, they’re less likely to approve of a Dawkins essay, and more likely to rate intelligent design highly.
So that’s settled: existential angst drives us into the hands of religion. Rather excellently, though, the effect was partially reversed when people also read a Carl Sagan essay about how great it is to find meaning in the universe for yourself using science. It’s perfect. I love this stuff. I love social science research that reinforces my prejudices. Everybody does.
But that’s where I start to fall down. If I like these results, then lots of other people will like them too, whether it’s the academic psychologists doing the research, the statisticians they collaborate with, the academic journal editors and reviewers who decide whether or not the paper gets an easy ride into print, the press officers who decide whether or not to shepherd its findings towards the public, or even, finally, the bloggers and journalists who write about it. At every step, there is room for fun results to get through, and for unwelcome results to fall off the radar.
This isn’t a criticism of any individual study. Rather, it’s the angst-inducing context that surrounds every piece of academic research that you read: a paper can be perfect, brilliantly well-conducted, and yet there’s no way of knowing how many negative findings go missing. For all we know, we’re just seeing the lucky times the coin landed heads up.
The scale of the academic universe is dizzying, after all. Our most recent estimate is that there are over 24,000 academic journals in existence, 1.3 million academic papers published every year, and over 50 million papers published since scholarship began.
And for every one of these 50 million papers there will be unknowable quantities of blind alleys, abandoned experiments, conference presentations, work-in-progress seminars, and more. Look at the vast number of undergraduate and master’s dissertations that had an interesting finding and got turned into finished academic papers: and then think about the even vaster number that didn’t.
In medicine, where the stakes are tangible, systems have grown up to try to cope with this problem: trials are supposed to be registered before they begin, so we can notice the results that get left unpublished. But even here, the systems are imperfect; and pre-registration is very rarely done, even in medical research, for anything other than trials.
We are living in the age of information, and vast tracts of data are being generated around the world on every continent and every question. A £200 laptop will let you run endless statistical analyses. The most interesting questions aren’t around individual nuggets of data, but rather how we can corral it to create an information architecture which serves up the whole picture.
John Chubb said,
May 23, 2011 at 4:28 pm
The problem of unpublished data has been very much in the public eye in relation to climate change. While it would seem a great idea to publish, or make public, all the calculations and experimental measurements you make, this actually would not be very helpful to the general public or even, probably, to other scientists working in the area. For example, you may have made a set of measurements and afterwards realise that a parameter that may have been relevant was not measured or controlled, or that there was no reliable calibration on a measurement. The design of the apparatus and the measurement procedure used also need to be understood. For peer-reviewed published papers there is quite a pressure on the reviewer to judge the honesty of the author’s presentation – and reviewers are not paid! The ultimate test is the ability of other independent workers to come to the same conclusions – but this is difficult for new and not yet popular areas of work, or where results require a change in accepted ideas and approaches.
Preliminary information on medical trials may indicate what is planned and how, but this is a special area. In much research things change, often radically, during the progress of the work – if they did not, the work would not be new!
One advantage of research or development in an industrial context is the opportunity to see whether the results ‘work’ in practice – even if the work may not end up being published, for commercial reasons.
Danny Chambers said,
May 23, 2011 at 9:21 pm
Another problem is all the good research that doesn’t get published because nothing exciting or significant was proven.
Although it is a bit boring, once a study or investigation has revealed that there is no link between two factors, or (for a random example) that a particular pathogen or bacterium turns out not to be isolated from a wild vole population, it is very much less likely to be published anywhere, so that knowledge is then effectively “lost”.
mister_arnold said,
May 23, 2011 at 10:56 pm
See also the decline effect – as covered by Jonah Lehrer’s great article in the New Yorker (Dec 13, 2010) www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer
In short, because experimenters tend to pick the ‘best’ results for publication and overlook the randomness inherent in any result, it is highly likely that further retests will show a declining impact, even if the data are coming from the same distribution. The selection and publication biases present will encourage the dissemination of the studies with overestimated impacts, and unless the follow-up research is deemed ‘important enough’, it may well be ignored or sidelined.
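A minimal toy simulation of that mechanism (my own sketch in Python with numpy, not anything from Lehrer’s article): lots of labs estimate the same small true effect, only the flashiest estimate gets ‘published’, and an honest replication then regresses back towards the truth.

# Decline effect as selection plus regression to the mean (toy numbers, my own assumptions).
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2     # the same real effect for every lab
n_labs, n_per_study, n_runs = 100, 30, 2000

published, replications = [], []
for _ in range(n_runs):
    # each lab's estimate = true effect + sampling noise from its own small sample
    estimates = true_effect + rng.normal(0, 1, (n_labs, n_per_study)).mean(axis=1)
    published.append(estimates.max())   # only the most impressive result gets published
    # an independent replication of the published finding, same true effect
    replications.append(true_effect + rng.normal(0, 1, n_per_study).mean())

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")     # inflated by the selection step
print(f"mean replication:      {np.mean(replications):.2f}")  # 'declines' back to ~0.2

The published estimate is inflated purely by the selection step; the replication ‘declines’ back to the true value without any change in the underlying phenomenon.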
FlipC said,
May 24, 2011 at 10:05 am
“political conservatism was associated with things like death anxiety, fear of threat and loss, intolerance of uncertainty, a lack of openness to experience, and a need for order, structure, and closure.”
So does that explain the widely held belief that you get more conservative as you get older? Can we get another study showing whether or not these things are prevalent over a certain age compared to under it?
irishaxeman said,
May 24, 2011 at 10:59 am
The widely held belief is just that. Most of our information is actually stereotypes left over from a previous generation or media promotions. Political conservatism and the need for predictability in existence are not the same thing – there is so much conceptual confusion!
You can have neurotic socialists, angst-ridden anarchists and risk-tolerant conservatives (e.g. free marketeers). Life is more complex.
Melbo said,
May 24, 2011 at 6:14 pm
Timor mortis conservat me?
lejstephen said,
May 24, 2011 at 10:22 pm
Just a quick note to say some of us scientists do try to do it properly. We might not highlight our false results, but the 1000genomes project (www.1000genomes.org) does try to post practically all of our results on our FTP site if people want to dig around. We are also trying to figure out how to present the stuff we considered wrong, so people can see if they agree with us.
joe22c said,
May 25, 2011 at 5:24 am
mister_arnold said,
“See also the decline effect – as covered by Jonah Lehrer’s great article in the New Yorker (Dec 13, 2010) www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer”
Arnold, I have recently read that piece on the decline effect and I disagree with you; it is most certainly NOT a great article. It skirts dangerously into post-modernism territory.
For a thorough trouncing, see here:
scienceblogs.com/insolence/2010/12/is_the_decline_effect_really_so_mysterio.php
For a tongue-in-cheek condensed rebuke, see this comment:
“The Decline Effect is nothing to worry about: surely it will itself decline.”
andrewwyld said,
May 25, 2011 at 10:21 am
Would it be worth someone standardizing data from different trials into a machine-readable form and letting a neural network loose on the outcomes? Neural networks often find “the wrong correlation” (because of the tendency to overapply Occam’s razor) but actually if we are working from the assumption that we don’t know the right outcome then the Occam-minimal outcome is what we want.
ferguskane said,
May 27, 2011 at 8:32 pm
I love these prejudice confirming results too.
On to sorting the wheat from the chaff, or at least combining the wheat and the chaff…
In the field of my PhD research (neuroimaging), I despair of how badly we report our science. We may correct for the many voxels (3D pixels) that we compare in any one between-group comparison (or contrast, as they are referred to). But we almost never seem to stick to a predefined number of comparisons, and we almost never report all the negative (or less interesting, or less comprehensible) results of the comparisons that were conducted. So even for one published study, we don’t know the extent of the negative publication choices and the multiple comparison problem within that study (a toy illustration of this is sketched at the end of this comment).
Just reading through, I see mister_arnold makes a related point.
My feeling is that we should ALL be forced to register our hypotheses and planned comparisons before we start a study. And there should be a link to our plans in our published papers.
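To put rough numbers on the multiple comparisons point above, a minimal toy sketch (my own illustration in Python with numpy/scipy, assuming independent ‘voxels’ and no real group difference anywhere; real imaging data are far bigger and spatially correlated):

# Voxel-wise multiple comparisons with the null hypothesis true everywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_voxels, n_per_group = 10_000, 20

group_a = rng.normal(0, 1, (n_per_group, n_voxels))
group_b = rng.normal(0, 1, (n_per_group, n_voxels))  # same distribution: no true difference

t, p = stats.ttest_ind(group_a, group_b, axis=0)     # one t-test per voxel

false_positives = int((p < 0.05).sum())
print(f"'significant' voxels at uncorrected p < 0.05: {false_positives} of {n_voxels}")
# roughly 5% of voxels (~500) light up even though nothing is going on,
# which is why sticking to pre-specified, corrected comparisons matters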
ferguskane said,
May 27, 2011 at 8:52 pm
@joe22c
Having looked at Orac’s ‘thorough trouncing’ of the Lehrer article, to me it reads more like a well-reasoned criticism that acknowledges the good points of the article, whilst cautioning against alarmist responses and questioning Lehrer’s understanding of ‘truth’. As noted by Orac, many of Lehrer’s concerns are well known and accepted in the scientific community.
But just because concerns are well known and accepted in the scientific community (and I’d dispute that they are), that does not mean that we should not be concerned about these issues and try to minimise them. I for one would be much happier remaining in research if I felt the papers I base my work on were an honest reflection of the process that produced them.
peutch said,
May 30, 2011 at 4:25 pm
About the existential findings: correlation is not causation.
One should be wary that the link found between psychological attitude and ideological attitude is not interpreted as causation (as the 2003 meta-study does).
For the sake of argument, let’s assume that a left-leaning country is more unstable than a right-leaning one, and that growing up in such a country makes you more tolerant of ambiguity and uncertainty.
Moreover, let’s assume that people growing up in a left-leaning country are more likely to become left-leaning.
In that case, growing up in such a country would make you both more left-leaning and more tolerant of uncertainty, without the need for a direct causal relationship between the two.
Those findings should be interpreted as correlations rather than causation; and you yourself pointed at some possible reverse causation with the Carl Sagan example.
heavens said,
May 30, 2011 at 4:47 pm
@Joe
I was pretty unimpressed with Orac’s “trouncing”. Orac disagrees on some parts, but he makes errors. For example, Orac complains that Lehrer fails to discuss the problem of “popularity”; I can only assume that Orac overlooked the paragraph in the New Yorker article that begins “The situation is even worse when a subject is fashionable…”
But mostly, Orac seems to say, “He’s right, but pseudoscience cranks are going to misinterpret the last paragraph.” Orac’s piece struck me as a defense of Science as My Religion. Orac disparages Lehrer for giving fuel to the non-believers, not for being wrong about the facts.
Lehrer is right when he says, “When the experiments are done, we still have to choose what to believe.” At any given point in time, you have to decide how to treat your patient with the available evidence, not with some final answer, which may not be available for decades to come. When the patient needs treatment today, you’ve got to decide whether you believe that the latest drug or surgical technique actually works as well as its initial publication claims, or at least well enough to justify trying it on this patient.
Lehrer says, and Orac doesn’t disagree, that if you’ve been paying attention to the big picture, then you really shouldn’t put your faith in the claims of any initial publication.
bladesman said,
May 31, 2011 at 1:00 pm
@ferguskane
I agree, there are indeed huge problems with multiple comparisons, which can be mitigated to some extent with FWE (family-wise error) correction and the like. As you rightly point out, there is a pressing need to register trials etc. beforehand. Occasionally you can spot the papers in which the a priori hypothesis was undoubtedly post hoc and the story was woven to fit around it. Unfortunately editors don’t see null findings as sexy, irrespective of how flawless the research may have been.
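For anyone wondering what FWE correction buys you, a minimal sketch (my own toy example in Python with numpy/scipy, using crude Bonferroni control rather than the random-field methods imaging packages actually use), with one genuine effect planted among thousands of null tests:

# Family-wise error (FWE) control via Bonferroni: divide the threshold by the number of tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_per_group, alpha = 5_000, 40, 0.05

a = rng.normal(0, 1, (n_per_group, n_tests))
b = rng.normal(0, 1, (n_per_group, n_tests))
b[:, 0] += 1.5                                  # one large, real group difference at test 0

_, p = stats.ttest_ind(a, b, axis=0)            # one t-test per comparison

print("uncorrected hits (p < 0.05):      ", int((p < alpha).sum()))            # ~250 false positives plus the real one
print("Bonferroni FWE hits (p < 0.05/N): ", int((p < alpha / n_tests).sum()))  # essentially just the real one
print("real effect survives correction:  ", bool(p[0] < alpha / n_tests))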
omnisdjw said,
June 2, 2011 at 4:18 pm
Isn’t this prompting just a version of confirmation bias:
en.wikipedia.org/wiki/Confirmation_bias
richardelguru said,
June 2, 2011 at 5:37 pm
@Melbo
Clever, but doesn’t that mean ‘fear of death preserves me’??
Melbo said,
June 3, 2011 at 12:45 pm
@richardelguru
Yes, but conservat sounded enough like conturbat to be a pun, and my Latin is not good enough to find a word for “makes me conservative”. In any case the fear of death could be said to be preservative. Thanks for noticing!
Concestor0 said,
June 4, 2011 at 4:44 pm
A startling example of scientific bias exhibited by commercial interests and consumers alike is seen in modern health care. Diet, exercise and environmental exposure are near panaceas for health and longevity, yet the focus remains on the next new pill or surgical procedure. The average consumer may find a new pill, physically and figuratively, easier to swallow than exercising and avoiding sugar (for example), and a drug manufacturer or investor will clearly have a bias.
Becel is sponsoring a bike ride for heart disease research this weekend. A margarine manufacturer supporting research into heart disease? How unbiased are the published findings going to be using their funds? The bike riding part is good though.