Monday, February 13, 2017

How to Spot a Fake Science News Story

By Alex Berezow — January 31, 2017 @ The American Council on Science and Health

After more than six years in science journalism, I have reached two very disturbing conclusions about the craft.

First, too many science journalists don't actually possess a well-rounded knowledge of science. In many cases, regular reporters are asked to cover complex science and health stories. What we end up with is entirely predictable: Articles that are nothing more than rehashed press releases, topped with click-bait headlines based on exaggerations and misunderstandings of the original research. That's how a nonsensical story like Nutella causing cancer goes viral.

Second, science journalists are every bit as biased as their more traditional counterparts, perhaps even more so. They routinely hold double standards in regard to analyzing science policies. They conflate scientific evidence with science policy, immediately labeling anyone "anti-science" if he or she disagrees with their cultural beliefs. Worse, science journalists feel no inhibition whatsoever to cheerlead openly for their favorite politicians and to heap scorn upon those they dislike. Just read Twitter.

Both cultural bias and thoughtless reportage severely erode the integrity of science journalism. While the former is bad enough, the latter is particularly troubling because it also undermines public health.

How to Detect a Fake Science News Story

Often, I have been asked, "How can you tell if a science story isn't legitimate?" Here are some red flags:
  • 1) The article is very similar to the press release on which it was based. (This indicates the piece is public relations, not science journalism.)
  • 2) The article makes no attempt to explain methodology or avoids using any technical terminology. (This indicates the author may be incapable of understanding the original paper.)
  • 3) The article does not indicate any limitations on the conclusions of the research. (For example, a study conducted entirely in mice cannot be used to draw firm conclusions about humans.)
  • 4) The article treats established scientific facts and fringe ideas on equal terms.
  • 5) The article is sensationalized; i.e., it draws huge, sweeping conclusions from a single study. (This is particularly common in stories on scary chemicals and miracle vegetables.)
  • 6) The article fails to separate scientific evidence from science policy. Reasonable people should be able to agree on the former while debating the latter. (This arises from the fact that people hold different values and priorities.)
  • 7) The article ties the research to something only tangentially related. (For example, stories on infectious disease often try to highlight the application to bioterrorism.)
  • 8) The article is based on research from a journal that nobody has heard of.
  • 9) The article is about.
  • 10) The article is from the Daily Mail, Huffington Post, Mother Jones, Natural News, or any number of environmentalist, health activist, or food fad websites.
Separating real news from fake news is one of the biggest challenges facing our society in 2017. A recent poll reveals that 84% of Americans think fake news may be hurting the country. We must figure out a solution before it gets any worse.
