“How to read articles about health” – by Dr Alicia White

September 16th, 2009 by Ben Goldacre in guest writers, media | 31 Comments »

This is something that came up on the Five Live discussion with Lord Drayson at lunchtime today. Simon Mayo pulled out a front page story from the Express about a breakthrough cancer drug, and asked us what we’d make of it. Having not read it, I said I’d regard it with caution, because it might be true, but being on the front page of the Express is not necessarily a reliable predictor of something being true, as this story would attest, to choose just one example. Lord Drayson felt that was unfair, and that people can decide for themselves if a story is good or bad.

I think that’s optimistic, but we can certainly do something productive to give people a fighting chance. People often ask “how can I spot bad science in a newspaper article?” as if there were a list of easy answers, and it can be very difficult – given the lengths newspapers go to in distorting evidence, and withholding facts – but here is an excellent set of pointers. It’s written by Dr Alicia White from the Behind the Headlines team, and this is a resource I cannot recommend highly enough: they describe, in everyday language, the actual scientific evidence behind each day’s major health news stories, and it’s getting serious traffic now. I played a tiny role in helping to set it up, I think the service is fantastic, and with their permission (from yonks ago but I just noticed that the post got lost in some web hole) I’m republishing their guide to reading a health news story in full, below. It’s cracking, and deserves as wide an audience as possible. A PDF of the article is also available from the NHS site here.

Post in the comments if you think there are any major areas or tips which have been missed, and enjoy:

How to read articles about health and healthcare

By Dr Alicia White

If you’ve just read a health-related headline that’s caused you to spit out your morning coffee (“Coffee causes cancer” usually does the trick) it’s always best to follow the Blitz slogan: “Keep Calm and Carry On”. On reading further you’ll often find the headline has left out something important, like “Injecting five rats with really highly concentrated coffee solution caused some changes in cells that might lead to tumours eventually. (Study funded by The Association of Tea Marketing)”.

The most important rule to remember: “Don’t automatically believe the headline”. It is there to draw you into buying the paper and reading the story. Would you read an article called “Coffee pretty unlikely to cause cancer, but you never know”? Probably not.

Before spraying your newspaper with coffee in the future, you need to interrogate the article to see what it says about the research it is reporting on. Bazian (the company I work for) has interrogated hundreds of articles for Behind The Headlines on NHS Choices, and we’ve developed the following questions to help you figure out which articles you’re going to believe, and which you’re not.

Does the article support its claims with scientific research?

If an article touts a treatment or a lifestyle factor that is supposed to prevent or cause a disease, but doesn’t give any information about the scientific research behind it, or refers to research that has yet to be published, then treat it with caution. A lot of caution, like balling the article up and throwing it in the (recycling) bin.

Is the article based on a conference abstract?

Another area for caution: news articles based on conference abstracts. Research presented at conferences is often at a preliminary stage and usually hasn’t been scrutinised by experts in the field. Also, conference abstracts rarely provide full details about methods, making it difficult to judge how well the research was conducted. For these reasons, articles based on conference abstracts should be treated with caution rather than alarm. Don’t panic or rush off to your GP.

Was the research in humans?

Quite often the “miracle cure” in the headline turns out to have only been tested on cells in the laboratory or on animals. These stories are often accompanied by pictures of humans, creating the illusion that the “miracle cure” came from human studies. Studies in cells and animals are crucial first steps and should not be undervalued. However, many drugs that show promising results in cells in laboratories don’t work in animals, and many drugs that show promising results in animals don’t work in humans. If you read a headline about a drug or food “curing” rats, there is a chance it might cure humans in the future, but unfortunately a larger chance that it won’t. So no need to start eating large amounts of the “wonder food” featured in the article.

How many people did the research study include?

In general, the larger a study, the more you can trust its results. Small studies may miss important differences because they lack statistical “power”, and small studies are more susceptible to finding things (including things that are wrong) purely by chance. You can visualise this by thinking about tossing a coin. We know that if we toss a coin the chance of getting a head is the same as that of getting a tail – 50/50. However, if we didn’t know this and we tossed a coin four times and got three heads and one tail, we might conclude that getting heads was more likely than tails. But this chance finding would be wrong. If we tossed the coin 500 times – gave the experiment more “power” – we’d be much more likely to get an even number of heads and tails, giving us a better idea of the true odds. When it comes to sample sizes, bigger is usually better. So when you see a study conducted in a handful of people, proceed with caution.
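The coin-tossing intuition can be checked with a quick simulation. The sketch below (illustrative, not part of the original article) counts how often an experiment’s observed heads rate lands 25 or more percentage points away from the true 50%, for a tiny experiment versus a large one:

```python
import random

def fooled_fraction(n_tosses, n_experiments=10_000, seed=0):
    """Fraction of repeated experiments whose observed heads rate
    is at least 25 percentage points away from the true 50%."""
    rng = random.Random(seed)
    fooled = 0
    for _ in range(n_experiments):
        heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
        if abs(heads / n_tosses - 0.5) >= 0.25:
            fooled += 1
    return fooled / n_experiments

# A 4-toss experiment is badly misled most of the time (e.g. 3 heads,
# 1 tail); a 500-toss experiment essentially never is.
print(fooled_fraction(4))
print(fooled_fraction(500))
```

With four tosses, the “fooled” outcomes (0, 1, 3 or 4 heads) together have probability 10/16, so the small experiment goes wrong well over half the time – which is exactly the point about power.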

Did the study have a control group?

There are many different types of studies, and they are appropriate for answering different types of questions. If the question being asked is about whether a treatment or exposure has an effect or not, then the study needs to have a control group. A control group allows the researchers to compare what happens to people who have the treatment/exposure with what happens to people who don’t. If the study doesn’t have a control group, then it’s difficult to attribute results to the treatment or exposure with any level of certainty.

Also, it’s important that the control group is as similar to the treated/exposed group as possible. The best way to achieve this is to randomly assign some people to be in the treated/exposed group and some people to be in the control group. This is what happens in a randomised controlled trial (RCT), which is why RCTs are considered the “gold standard” way of testing the effects of treatments and exposures. So when reading about a drug, food or treatment that is supposed to have an effect, you want to look for evidence of a control group, and ideally evidence that the study was an RCT. Without either, retain some healthy scepticism.

Did the study actually assess what’s in the headline?

This one is a bit tricky to explain without going into a lot of detail about “proxy outcomes”. To avoid doing that, here is the key thought: the research study needs to have examined what is being talked about in the headline and article. (Somewhat alarmingly, this isn’t always the case.) For example, you might read a headline that claims “Tomatoes reduce the risk of heart attacks”. What you need to look for is evidence that the study actually looked at heart attacks. You might instead see that the study found that tomatoes reduce blood pressure. This means that someone has extrapolated that tomatoes must also reduce heart attacks, as high blood pressure is a risk factor for heart attacks. Sometimes these extrapolations will prove to be true, but other times they won’t. So if a news story is focusing on a health outcome that was not examined by the research, take it with a grain of salt.

Who paid for and conducted the study?

This is a somewhat cynical point, but one that’s worth making. The majority of trials today are funded by manufacturers of the product being tested – be it a drug, vitamin cream or foodstuff. This means they have a vested interest in the results of the trial, which can affect what the researchers find and report in all sorts of conscious and unconscious ways. This is not to say that all manufacturer-sponsored trials are unreliable. Many are very good. But it’s worth looking to see who funded the study to sniff out a potential conflict of interest for yourself.

Should you “shoot the messenger”?

Sometimes journalists take a piece of research and misrepresent it, making claims the scientists themselves never made. Other times the scientists or their institutions over-extrapolate, making claims their research can’t support. These claims are then repeated by the journalists. Given that erroneous claims can come from a variety of places, don’t automatically ‘shoot the messenger’ by blaming the journalist.

How can I find out more?

It’s not possible to cover all the questions that need to be asked about research studies in a short article, but we’ve covered some of the major ones. For more, go to Behind the Headlines at www.nhs.uk/news for daily breakdowns of healthcare stories in the media.


For more on the important issue of academic PR people (not to mention businesses) being shifty as well as journalists, you might also want to check out this piece I wrote in May 2009:

Dodgy academic PR

If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!

31 Responses

  1. aggressivePerfector said,

    September 16, 2009 at 4:05 pm

    These guidelines are excellent. While I’m not a medical scientist, though, it strikes me that it might be beneficial to highlight the difference between a randomised controlled trial, correctly identified as the gold standard in medical research, and an epidemiological study, which may look highly impressive with thousands or tens of thousands of subjects, but which in reality has very low power when it comes to establishing cause and effect.

  2. KatLArney said,

    September 16, 2009 at 4:11 pm

    We also have a handy (but slightly simpler) guide on the Cancer Research UK website: info.cancerresearchuk.org/cancerandresearch/fact_or_fiction/


  3. talroze said,

    September 16, 2009 at 5:22 pm

    One link in the publishing chain that doesn’t get a mention in the “shoot the messenger” part that I think is sometimes a source of mis/disinformation: newspaper sub-editors.

    Things are changing in newsrooms, and subs may be a diminishing breed, but for the most part it will still be a sub-editor that writes the headline on top of the article, and not the journalist with the by-line.

    Because of the process of newspaper production, the space allocated to an article will often change significantly; at very short notice a new headline might be needed, perhaps with space for only a half-dozen (usually short) words.

    Good subbing is a real art, and producing an attention-grabbing headline without distorting what can be very nuanced *facts* in an article is not easy. Especially if you only have a few minutes to do it. When it goes wrong, the first the journalist who wrote the story may know about it is the next morning, so they might not always be the messenger that needs shooting!

  4. Ben Goldacre said,

    September 16, 2009 at 5:41 pm

    talroze: agreed, which is why i’m more reluctant to name journalists than i was, say, 6 years ago, because i know often the distortion isn’t theirs, or at least not theirs alone. like all the best problems these are often systems failures, and that’s kind of what makes them interesting.

  5. caspararemi said,

    September 16, 2009 at 5:53 pm

    hi Ben,
    Your first link to “Behind the Headlines” is broken.

  6. MedsVsTherapy said,

    September 16, 2009 at 7:23 pm

    Good advice.
    It is best to favor studies which already match your pre-existing beliefs, and be skeptical of those that do not. This rule of thumb is a handy guide to most of life.

    But (a little more) seriously:

    I feel a little more confident if there is a PhD somewhere in the author list. If it is all MD, then I know I am sure to find at least a couple of what you might call “liberties,” or “quirks.”

    Google the lead author’s name, plus the pharmaceutical company that would be served by a positive result. Maybe we should develop a formula including the number of pharma companies for whom each doc has shilled, number of shills in author list, etc. This could yield a likelihood of independent replicability quotient.

    Realize that headlines are generally written by headline-writers. While many of us are able to actually read the majority of news or science stories in the typical daily newspaper before the first cup of coffee gets cold, we need to understand that the headline writer can’t possibly read each story every day, and so will go for “catchy” over “accurate.” Also, realize that writing a headline is similar to trying to write a study title with no after-the-colon allowed.

    Also: a classic rule of thumb in reading science news: Where are the dead bodies!!!???!! If multiple household cleanser chemicals are fatal, why are there so many housewives still shopping on the cleaning supply aisle? With healthy screaming young children, no less! Why is my auto mechanic still upright after all these years?

  7. MedsVsTherapy said,

    September 16, 2009 at 7:24 pm

    Sorry – I am on the other side of the pond. I meant “tea,” not “coffee.”

  8. Ysabel Ekaterin said,

    September 16, 2009 at 8:55 pm

    Excellent advice – and the general principles can also be applied to any article on any subject, such as science, politics, history etc.
    I’m just Jane Average, but I am a freelance writer, and I know that most things work on a “pyramid” method – the first paragraph contains all the “hard” data/facts, because if there need to be cuts in column inches it still needs to make sense.
    I would quibble that the first point listed should be what is actually listed 7th, about money. The No.1 rule, no matter what you are reading about: follow the money. Who benefits financially from “cappuccinos cure cancer” articles?
    This ties in with what is listed first (which would be my No.2), about scientific research, to which I would also add: is there any research backing up the article, and who was bankrolling the scientists conducting it? In the 19th century, scientists such as Lord Cavendish and Charles Darwin were rich, motivated by intellectual curiosity, and could bankroll themselves. Today, many scientists get funding only if their research agrees with the agenda or prejudices of those writing the cheques; AIDS research and the BMI scam being classic examples. Someone with a mortgage to pay and a family to take care of is all too often tempted towards the path of least resistance. The reason “Piltdown Man” escaped exposure as a fake for 40 years wasn’t its utter brilliance (it was rather crude) but that many people realised how much money would be lost and reputations damaged when the truth came out.
    I also teach freelance, including creative writing. This is an excellent article to signpost people to when they want to critically evaluate what is presented to them as “fact”, whether in newspapers, books, TV programmes or at conferences, where I have sat through people pontificating absolute nonsense while highly intelligent, learned professionals in the great man/woman’s line of sight look like synchronised nodding dogs. Dr Goldacre’s book Bad Science could have been describing these when he talked about the “brain gym” being inflicted on school kids in the name of science.

  9. tomspradlin said,

    September 16, 2009 at 9:01 pm

    I have a couple of comments about the “How many subjects did the research study include?” section. First, yes, smaller studies always have lower power than larger studies (for an effect of a given size), but any given small study might have plenty of power for an effect size that the reader is interested in. I think it’s wrong to damn small studies from the outset. And besides, what is a “small study”? How small is small?

    The second comment is to point out an outright error. It says “small studies are more susceptible to finding things (including things that are wrong) purely by chance.” This is not true. The statistical methods in use in medical research today set a type 1 error rate at an arbitrary level, usually 5%, for any sample size. As long as we do this, the chance of finding things that are wrong is 5% for any study of any size.

  10. olster said,

    September 17, 2009 at 5:46 am


    Small studies… From the papers I remember reading you may get pilot studies or simply a collection of case studies that number less than ten. Large studies seem to span off into the distance with tens of thousands plus… Not cheap.

    Small studies do have their place, but if you are looking for a small effect (maybe a few percent improvement in survival after incident X) then you just won’t be able to see that with a small study.

    I think it’s worth looking into what’s known as a ‘power calculation’ or rather ‘statistical power’ (wikipedia links at the end). By performing an appropriate power calculation you can work out how many people you need in your study to be able to (with 95% (or whatever) confidence) establish a significant difference between two groups.

    I can’t think of a better way of putting it right now… Sorry- It’s lunchtime and I didn’t have my morning coffee…

    “The second comment is to point out an outright error. It says “small studies are more susceptible to finding things (including things that are wrong) purely by chance.” This is not true.”

    Well, actually it is.
    You are more likely to be fooled into rejecting your null hypothesis when you have an under-powered study. The coin toss example is perfect.
    If that doesn’t demonstrate a Type 1 error due to small sample size then I don’t know what does.

    “The statistical methods in use in medical research today set a type 1 error rate at an arbitrary level, usually 5%, for any sample size. As long as we do this, the chance of finding things that are wrong is 5% for any study of any size.”

    I think you are putting the result before the method if that makes sense. The method and sample size are responsible for making sure that the type 1 error is 5% or less. Again, the power calculation comes in. It is the ‘so long as we do this’ part of your comment that says we make sure that sample size is big enough. The ‘big enough’ is set by the power calculation.

    Rather badly, a lot of published studies are underpowered. Either they didn’t do a power calculation, got it wrong, couldn’t be bothered/didn’t have the cash for a bigger sample size or were re-using data collected for something else by someone else. This means that you should always pay attention when reading a paper. If they don’t mention a power calculation, chances are they didn’t do one. If you are reading the paper thoroughly then you should do one!

    The paragon of medical research is to only have studies that are of high-enough sample size to look for the size of difference that you are interested in. We just aren’t there yet.
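The kind of power calculation described above can be sketched in a few lines of Python, using the standard normal-approximation formula for the sample size needed to compare two proportions (the survival figures below are made up purely for illustration):

```python
from math import sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-group sample size to detect a difference
    between proportions p1 and p2 (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

# A small improvement (35% -> 40% survival) needs roughly 1,500 per group;
# a large one (35% -> 60%) needs far fewer.
print(sample_size_two_proportions(0.35, 0.40))
print(sample_size_two_proportions(0.35, 0.60))
```

The arithmetic makes the point directly: because the required sample size scales with the inverse square of the difference you want to detect, halving the effect size roughly quadruples the sample you need, which is why small studies can only reliably detect large effects.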



  11. olster said,

    September 17, 2009 at 5:49 am

    There was a classic type 2 example described when I was back at uni – something to do with an asthma drug… Several studies showed mixed (mostly no significant difference) results, and it was only when the meta-analysis was published that significance was demonstrated…

    Anyone remember it?

  12. msjhaffey said,

    September 17, 2009 at 7:58 am


    Thank you for another excellent article.

    Your link entitled “Behind the Headlines” goes to another of your blog entries rather than the Behind the Headlines website.

  13. olster said,

    September 17, 2009 at 8:30 am

    I think he cunningly hid the link within the given link…
    I suspect it should be this:

    (raw is this www.badscience.net/2009/09/how-to-read-articles-about-health-by-dr-alicia-white/www.nhs.uk/news)

    the link behind the link…

  14. snoozeofreason said,

    September 17, 2009 at 9:19 am

    Great article. My only quibble is that the warning about small studies might encourage readers to believe that if you make your study large enough then all your problems are solved.

    This is not necessarily so. Studies may indeed give misleading results because of a lack of statistical power, and making the sample size bigger does reduce this risk. However, (observational) studies of any size can also give misleading results because of confounding factors. Increasing the sample size does not, in itself, do anything to reduce the risk of confounding. Of course, if you have a large sample size then you may have more opportunity to gather the kind of data that would eliminate confounders, but you have to ask the right questions, and you may not know what they are. So even really large studies, like the Million Women Study, may need to be treated with a certain degree of caution.

  15. mikewhit said,

    September 17, 2009 at 9:26 am

    On the point about subeditors – surely they are supposed to be _more_ experienced and qualified than the writers they edit, or have I misunderstood the way journalism works?

  16. pragmatist said,

    September 17, 2009 at 10:34 am

    Great stuff. Was glued to the debate last night. Love the book. Think your efforts are sound and you deserve the status as a speaker at the RI.
    i posted some thoughts about the debate elsewhere.
    I think the debate needs to widen, especially upon the topic of health promotional advice, the policies of governments, and the role and activities of agencies, regulators, and services within the general provision and advocacy of preventative measures.

  17. JoanCrawford said,

    September 17, 2009 at 11:20 am

    I’m with Ysabel (#8).

    As investigative journalists, detectives, and everyone else who wants to know what is going on have always said:

    “Follow the money”

  18. iamjohn said,

    September 17, 2009 at 12:28 pm

    Because it’s fun to be pedantic.

    “if we didn’t know this and we tossed a coin four times and got three heads and one tail, we might conclude that getting heads was more likely than tails. But this chance finding would be wrong. If we tossed the coin 500 times – gave the experiment more “power” – we’d be much more likely to get an even number of heads and tails”
    The probability of exactly equal heads and tails for 4 coins is 4C2/2^4 = 6/16.
    The probability of exactly equal heads and tails for 500 coins is 500C250/2^500 = well, I can’t work that out, but it is much less than 6/16.
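These binomial probabilities can be computed exactly in a few lines of Python (the 5-percentage-point tolerance for “close to 50/50” is an arbitrary illustrative choice). An exact split does get rarer as tosses increase, but a roughly even split becomes near-certain, which reconciles the quoted passage with the calculation:

```python
from math import ceil, comb, floor

def prob_exact_split(n):
    """Probability of exactly n/2 heads in n fair tosses."""
    return comb(n, n // 2) / 2 ** n

def prob_close_to_half(n, tol=0.05):
    """Probability the observed heads rate is within tol of 0.5."""
    lo, hi = ceil((0.5 - tol) * n), floor((0.5 + tol) * n)
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

print(prob_exact_split(4))      # 6/16 = 0.375
print(prob_exact_split(500))    # ≈ 0.036: an exact split gets rarer
print(prob_close_to_half(500))  # ≈ 0.98: a near-even split gets likelier
```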

  19. muscleman said,

    September 17, 2009 at 3:00 pm

    Re small numbers. It all depends on the nature of the study. Back in the day we sat down in the lab and, for transgenic mice, worked out how many independent examples of the same transgene pattern we needed to have significant confidence that the pattern was real (transgenes can insert next to strong enhancers and be driven by their insertion site rather than, or as well as, their own control sequences). The calculation involves the size of the genome, and it works out at 3 for greater than 95% confidence.

    This is how the first trials when a new drug is given to humans work, for example, and why the repeat of the Jesse Gelsinger incident (gene therapy caused cancer) indicated that the transgene was doing something funky (it was preferentially inserting next to a known risk gene).

  20. tomspradlin said,

    September 17, 2009 at 4:00 pm

    Actually, iamjohn, you are not being pedantic. The thrust of this blog is to help people understand science reporting, and that requires precision in language. And of course your calculation is correct.

    Even a huge study would have low power if we set the type 1 error rate at a ridiculously low level. For example, suppose we set it at ZERO. That is, we refuse to make a type 1 error, so we do not reject the null hypothesis, NO MATTER WHAT THE DATA SAY. Then the type 2 error rate is 100%, because we cannot possibly reject the hypothesis when it is false. And if the type 2 error rate is 100%, then the power is zero.

    The way we get away from this push-pull of the error rates is to set one of them arbitrarily, and that is the type 1 error rate, usually at 5%. Then we look to see what the power is for a given sample size. (And, this is often overlooked in these discussions, for a given effect size.)

  21. OMQ said,

    September 18, 2009 at 1:48 am

    Dr Alicia White’s article is fantastic and most of her points are supported by solid empirical data from an academic paper published in PLoS One titled:
    Characteristics of Medical Research News Reported on Front Pages of Newspapers


    Although much of it is based on ‘common sense’, recommendations given to the media need to be evidence-based; otherwise we fail to live up to what science and evidence-based medicine are all about.

  22. CoralBloom said,

    September 18, 2009 at 2:13 am

    Great debate Ben.

    And you win!

  23. fight diabetes said,

    September 18, 2009 at 7:32 am

    A very disturbing trend in “medical news” in today’s media is to report highly controversial data from a low level of evidence-based medicine. A classic example is with the type-2 diabetes drug Avandia. A cardiologist, who speaks for the only competitor of Avandia, essentially manufactured data with the absolute worst methodology, using a flawed Peto method to run his “study”. This data was published as a meta-analysis, or pooled data from many small, short-term studies. The media needs to be cautious reporting these types of “studies”, as they are truly only viable signals if confirmed at the highest level of medicine – that is, prospective, double-blinded, placebo-controlled, multi-center, long-term trials with independent experts adjudicating in-stream whether a chest pain from pepperoni pizza was in fact an ischemic event, or simply heartburn. Until the media shows greater respect for what drives appropriate care and risk, the general U.S. population will always believe what they report and sell. Many patients will be harmed more by the actual “reporting” than by the medication itself.

  24. jsymes said,

    September 18, 2009 at 1:02 pm

    @ talroze
    What you say about good subbing is correct. However, since Ben’s story kicked off with a reference to a story in the Express, it’s only fair to good subs everywhere to note that the Express sacked most of its subs last year, retaining only a handful. These days its headlines are mostly written by news editors, hardly the dispassionate, sceptical, “second look” provided by a diligent sub. The news editor, after all, is probably the one who suggested the story’s “angle” to start with, and will come up with a headline to fit that preconception – no matter what the actual story says, or doesn’t say. A similar scenario obtains at the Telegraph, too.

  25. Zach Constantine said,

    September 19, 2009 at 6:50 pm

    I’ve been reading Gilbert, Tafarodi, and Malone (“You can’t not believe everything you read.” 1993) which strongly suggests that being exposed to information (particularly in a distraction-prone environment – think “browsing headlines at work”) results in automatic bias toward acceptance of the information as truth…

    … explains many sociological phenomena, (I’ll take the prevalence of disinformation and superstition as demonstrative proof) but it’s a rather sad thing to accept such basic fallibility in human nature.

  26. MedsVsTherapy said,

    September 21, 2009 at 5:17 pm

    The RCT, while very valuable, should not receive quite the reverence it does. If you only had one study and desired to make a bet about efficacy / causality, a decent RCT, with triple-blinding (blind the pill provider, the pill receiver, and the outcome assessor), would be the preferred design. However, the RCT is not the pinnacle or final word on a causal connection. Sir Bradford Hill had “experiment” as only one of his suggested nine areas to consider when evaluating causality in health issues.

    Here are a couple of references to look at to see that the RCT should not be worshipped. First: Colagiuri, Morley, Boakes, and Haber, Psychotherapy and Psychosomatics 2009, v78, pp 167-171: “Expectancy in Randomized Controlled Trials.” Short story: the a priori hypothesis failed: no effect for drug vs. placebo intended to assist quitting alcohol; however, in a post hoc analysis, participants in both the drug and placebo groups who BELIEVED that they were getting the active drug rather than placebo reported lower average cravings for alcohol, and a lower number of drinks per day. Thus, a technically ideal study suffers mightily from bias. You could argue: well, that was a post hoc analysis. To which I might argue: yes, and darn glad we have it, when our RCT research is filled with many such biases never detected by any analysis, regardless of Latin name. In this example, expectancy could have been reduced by an active placebo – just to show that it is not the design per se that yields the truth.

    Another example: Concato, Shah, and Horwitz, NEJM 2000, v. 342, pp 1887-1892: “Randomized, Controlled Trials, Observational Studies, and the Hierarchy of Research Designs.” Short story: across a range of clinical topics, the obtained point estimates for the efficacy of some intervention (preventing stroke by lowering hypertension, etc.) varied greatly across a handful of RCTs, but did not vary much across a handful of observational studies; the conclusion is that the inclusion/exclusion/subject recruitment and selection process of an RCT leads each time to a different set of participants who are biased in various unknown ways away from their respective populations.

    All this to say that no study in itself is ideal, and we need to review Bradford Hill; journalists would do well to read this classic, and live by its wisdom: Hill AB. 1965. The Environment and Disease: Association or Causation? Proceedings of the Royal Society of Medicine, 58, 295-300.

  27. Daibhid C said,

    September 21, 2009 at 10:46 pm

    iamjohn – It’s been a while since I did statistics, but I think the point is that with 500 coins, you’d be more likely to get a result close to 50/50. Not exactly 50/50, but sufficient to spot the trend. The results “smooth out” over larger numbers.

  28. stevenstevo said,

    October 8, 2009 at 7:54 pm

    Great article. I must say though I am a little confused by some of the posts on here.

    As for the headline writer thing, everyone realizes that papers go for “catchy” over accuracy. First off, that’s just the point: putting out misleading headlines and articles in a newspaper about health is irresponsible and unethical. Such a practice is yet another result of an industry focusing entirely on short-term profits instead of sustainable value. In addition, headlines have a larger font and thus are more readily noticeable by the public. Just because we can all read the fine print doesn’t mean it’s okay to instill widespread fear.

    Misleading and biased journalism is slowly running its course – it’s not a coincidence that we have witnessed many large news conglomerates go bankrupt lately. Most people realize how misleading media health reports can be. A lot of people are stupid though, and they love such stories because they provide a distraction from their stupidity. Deep down they know, or at least should know, that nothing within their control poses as much of a threat as their lack of exercise, or their smoking habit, or their affinity for Filet-O-Fish sandwiches at McDonald’s.

    These stupid stories get older every single day. It’s amazing how far it has gone, though: I will never forget the shark attacks on the TV news networks (all 54 of them that year). One year in college my roommate used to really get into it–he was literally scared he was going to get killed by a shark, yet we were over 1,000 miles from the nearest ocean.

  29. stevenstevo said,

    October 8, 2009 at 8:20 pm

    I think if people took the time to look into the numbers behind many of the health stories they see or read, they would not only be shocked but would also be able to avoid being misled in the future.

    It’s really quite amazing–SARS, West Nile virus, smallpox, Ebola, swine flu, etc. I’ll never forget the dude I saw in the airport about a month ago wearing a surgeon’s face mask–very creepy looking. The dude probably thought he was cool because he flew a lot for his cool job and thus needed to wear the mask. The reality is that the dude has a greater chance of getting hit by lightning twice in the next month.

    The problem with all of the misleading media coverage is that often the underlying news is legitimate, and perhaps even of the very sensationalist type the media loves. In addition, by design the media gravitates toward publishing misleading stories on issues that are inherently difficult for the public to question. Like health stuff: disbelieving an article in the New York Times about red meat being very dangerous feels wrong–the risk of falsely disbelieving it is that I continue to eat too much red meat and die. Plus, it just sounds bad to question stuff like universal health care, global warming, etc.

    And indeed, sometimes the headlines and the news stories are unbiased and accurate. It almost sounds funny just to type that, but it does happen. The swine flu is an excellent example: apparently it is a more legitimate health concern than SARS and all the other viruses of the month that the media shoved down our throats. However, I simply do not know what to think about it–I am unable to determine from the articles I read whether it is a serious thing. Sure, I know there is a chance, albeit small, that it spreads like wildfire and kills everyone.

    I do know this, though: I can improve my health and chances of a long life a thousand times more by exercising, not eating at McDonald’s, washing my hands regularly, driving safely, minimizing stress, and so on.

    Once I do those things better, maybe then I’ll start worrying about getting shot, bitten by a shark, or dying from second-hand smoke.

    Crap, all that stuff takes a lot of willpower, though. So much out there is bad for us, and we are all doomed anyway–we will run out of oil soon, plus global warming will cause the sea level to rise way too much. In light of such crazy things, worrying about my weight seems trivial.

    It won’t matter if I’m in good shape when China overtakes us–apparently their current growth rates can be assumed to continue in perpetuity.

