BMJ 2007;334:1249 (16 June)
Why don’t journalists mention the data?
Have stories about “electrosensitivity” simply been lifted from those promoting this new diagnosis?
Sometimes, as a doctor who also writes in the newspapers, a dark thought comes over me: wouldn’t it be refreshing – secretly, wouldn’t it feel so free – to leave the medical thing behind and just make stuff up, say what I want, spin any story that pleases me, or any story that sells, and gaily ignore the evidence?
For two years now the British news media have been promoting the existence of a new medical condition called electrosensitivity, or electromagnetic hypersensitivity. The story – or, in medical terms, the hypothesis – is that a wide range of symptoms are caused by acute exposure to electromagnetic signals, and that these symptoms are ameliorated when the signal is removed.
The features have a lot in common with what might often conventionally be called “medically unexplained symptoms”: tiredness, difficulty concentrating, headaches, nausea, bowel complaints, aches in the limbs, crawling sensations or pain in the skin, and more, for which no explanation is found. Such symptoms have existed since long before the appearance of “electrosensitivity,” and the absence of a clear cause is extremely troubling to both patients and doctors.
If these symptoms were caused by electromagnetic signals, then it should prove possible to study that, ideally in double blind conditions: and yet the media coverage invariably focuses on the scandal of how research into this area has been neglected. But the research has been done. In fact, dozens of double blind studies have been performed, but they have been systematically ignored by almost every single journalist covering the issue.
A typical experiment would involve a mobile phone, hidden in a bag for example, and each subject – chosen from people who report that their symptoms are caused by electromagnetic signals – recording their symptoms over time, without knowing whether the phone is on or off.
There have now been 37 such double blind “provocation studies” published in the peer reviewed academic literature, and they are almost all negative; indeed, on closer inspection, you could argue that the evidence is unanimously negative. There are, to be clear, seven studies that did find some statistically significant effect for electromagnetic signals: but for two of those, even the original authors have been unable to replicate the results; for the next three, the results seem to be statistical artefacts (one tailed t-tests – presumptuous, you might say – and problems with multiple comparisons); and for the final two, the positive results are mutually inconsistent (one shows worsened mood with provocation, and the other shows improved mood: still sure a one tailed t-test is reasonable?).
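The multiple comparisons problem mentioned above is easy to see with a little arithmetic. This is a minimal sketch, not a reanalysis of any particular study, and the outcome counts are illustrative assumptions: if a provocation study measures many independent outcomes (mood, headache, skin sensations, and so on), each tested at a significance threshold of 0.05, the chance of at least one spurious “positive” finding grows quickly.

```python
# Illustrative sketch: probability of at least one false positive when
# testing several independent outcomes, each at alpha = 0.05.
# (The outcome counts below are hypothetical, for illustration only.)
alpha = 0.05

for n_outcomes in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** n_outcomes
    print(f"{n_outcomes:2d} outcomes -> P(at least one false positive) "
          f"= {p_any_false_positive:.2f}")
```

With 10 outcome measures, the chance of at least one false positive is roughly 40% – which is why a scattering of isolated significant results across 37 studies is exactly what a true null effect would produce.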
These studies test the very hypothesis reported on repeatedly in the media: symptoms are brought on by exposure to a source of electromagnetic signals, and cease when the source is removed. And not only are the studies ignored, but sometimes it feels like the media are actively teasing us. A recent Panorama documentary on BBC 1 covered the possible dangers of Wi-Fi computer networks, and what little evidence the programme did present was flawed in a number of ways.
A large chunk of the programme was devoted to electrosensitivity. It covered the question of testing the phenomenon, in a double blind study. The programme makers even followed someone into a lab at Essex University where they had participated in one provocation study. We are told that this subject did correctly identify when the signal was present or absent two thirds of the time, to a visual backdrop of sciencey looking equipment.
But this was anecdote dressed up as data. The study is currently unpublished. We don’t know the protocol, or whether 2/3 for one subject would be statistically significant (there may have been only three exposures in total, for example). We don’t know the results of the other subjects. But most crucially, there is no mention that this single selected subject, in a single unpublished study, produced a result that seems to conflict with a literature of 37 studies that have been completed, published, and are overall negative. Even if the whole Essex study were positive, while it might make an interesting small splash next to the other 37, it would need to be replicated and considered in the context of the negative findings. The alternative is chaos, and being blown in the wind by every Type I error.
So why do the media never mention these data? Perhaps they deliberately and mischievously leave them out. Perhaps they never came across them, and are incompetent. Or perhaps they simply lifted their stories verbatim from aggressive and well coordinated lobbyists who promote this new diagnosis (some of whom also sell expensive equipment to sufferers, such as insulating paint at £50 a litre, and insulating beekeeper hats for trips outdoors).
Not only do these lobbyists observe a monastic silence on the issue of the provocation studies, but they also viciously attack anyone who even dares to mention the data, accusing them of insensitivity, of attacking sufferers, and of denying the reality of their symptoms. Symptoms, of course, are real, regardless of their cause; and if guilt trips were being handed out, you could fairly argue that those who obfuscate on the question of causes are themselves hindering better understanding and treatment, and so harming patients.
Ben Goldacre, doctor and writer, London. email@example.com
I have written a lengthy summary of the 37 studies done so far, which is constantly updated, and resides here: