Fukushima: Alarmist Claim? Obscure Medical Journal? Proceed With Caution
UPDATE: Click here for a response from International Journal of Health Services Editor-in-Chief Vicente Navarro.
The press release trumpeted a startling claim: researchers had linked radioactive fallout from the Fukushima nuclear disaster to 14,000 deaths in the United States, with infants hardest hit.
"This is the first peer-reviewed study published in a medical journal documenting the health hazards of Fukushima," the press release bragged in announcing the study's publication today. The press release, which compared the disaster's impact to Chernobyl, appeared via PR Newswire on mainstream news sites, including the Sacramento Bee and Yahoo! News.
Casual readers who didn't realize this was only a press release could be forgiven for thinking this was a spit-out-your-coffee story. But with a little online research and guidance from veteran health journalists Ivan Oransky and Gary Schwitzer, I quickly learned that there's a lot less to this study, and to the medical journal that published it, than the press release suggests. Read on for their advice on what journalists can learn from this episode.
Normally, reporters are supposed to feel better about research that has been peer-reviewed before publication in a scientific journal. But the claims in the press release were so outlandish that warning bells went off.
As it turns out, the authors, Joseph Mangano and Janette Sherman, had published a version of this study in the political newsletter Counterpunch, where it was quickly criticized. Critics charged that the authors had cherry-picked federal data on infant deaths so that the numbers appeared to spike around the time of the Fukushima disaster. Passions over nuclear safety further muddied the debate: both the researchers and some of their critics carried activist baggage, with the researchers characterized as anti-nuke and the critics as pro-nuke.
As Scientific American's Michael Moyer writes: "The authors appeared to start from a conclusion - babies are dying because of Fukushima radiation - and work backwards, torturing the data to fit their claims."
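To see how much the choice of a baseline window matters, here's a quick, purely illustrative sketch. The weekly death counts, window lengths, and the percent_change helper below are all invented for demonstration; they are not the actual CDC figures or the authors' method, but they show how a short, conveniently low "before" window can turn essentially flat data into an apparent spike.

```python
# Purely illustrative, made-up weekly death counts; NOT the actual CDC data.
# Weeks 0-8 fall "before" the event, weeks 9-15 "after".
weekly_deaths = [50, 49, 51, 48, 50, 38, 37, 39, 36,   # before (note the dip in the last 4 weeks)
                 45, 44, 46, 43, 45, 44, 46]            # after
CUTOFF = 9  # index of the first "after" week


def percent_change(baseline_weeks, follow_up_weeks=7):
    """Compare average deaths after the cutoff with a chosen pre-cutoff baseline."""
    before = weekly_deaths[CUTOFF - baseline_weeks:CUTOFF]
    after = weekly_deaths[CUTOFF:CUTOFF + follow_up_weeks]
    baseline_avg = sum(before) / len(before)
    after_avg = sum(after) / len(after)
    return 100 * (after_avg - baseline_avg) / baseline_avg


# The same "after" data, two different baselines:
print(round(percent_change(baseline_weeks=4), 1))  # ~19.2 -- looks like a big jump
print(round(percent_change(baseline_weeks=9), 1))  # ~1.1  -- essentially flat
```

That, in essence, is the critics' complaint: pick the comparison window after you've seen the data and you can make a modest fluctuation look like an effect.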
So how did such a seemingly flawed study wind up in a peer-reviewed journal?
I researched the journal, the International Journal of Health Services, and its editor, Vicente Navarro. Navarro, a professor at Johns Hopkins University's prestigious school of public health, looked legit, but the journal's "impact factor" (a measure of how often a journal's articles are cited, widely used as a proxy for its influence) was less impressive. (I emailed and called Navarro for comment; I'll update this post if I hear back from him.)
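For readers unfamiliar with the term: a journal's two-year impact factor is essentially an average citation count, i.e. citations received this year to articles the journal published in the previous two years, divided by the number of citable items it published in those years. Here's a minimal sketch of that arithmetic with invented numbers (the real figures come from Thomson Reuters' citation data):

```python
# Standard two-year impact factor, sketched with invented counts.
citations_in_2011_to_2009_2010_items = 180  # citations this year to articles from the prior two years
citable_items_2009 = 70                     # articles and reviews published in 2009
citable_items_2010 = 80                     # articles and reviews published in 2010

impact_factor_2011 = citations_in_2011_to_2009_2010_items / (citable_items_2009 + citable_items_2010)
print(impact_factor_2011)  # 1.2 -- each recent article was cited about 1.2 times on average
```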
I asked Ivan Oransky, executive editor of Reuters Health and co-founder of the Retraction Watch blog, and Health News Review founder Gary Schwitzer: how can journalists better evaluate when to cover (and more importantly, when not to cover) the medical research stories that cross their desks?
Their consensus: just because a study's peer-reviewed doesn't mean it's credible. And evaluating a journal's impact factor can be helpful, but it's not sufficient.
Here's what Oransky had to say:
I do use impact factor to judge journals, while accepting that it's an imperfect measure that is used in all sorts of inappropriate ways (and, for the sake of full disclosure, is a Thomson Scientific product, as in Thomson Reuters). I find it helpful to rank journals within a particular specialty. It's not the only metric I use to figure out what to cover, but if I'm looking at a field with dozens or even more than 100 journals, it's a good first-pass filter. There's competition to publish in journals, which means high-impact journals have much lower acceptance rates. And if citations are any measure at all of whether journals are read, then they're obviously read more, too.
I looked up the journal in question, and it's actually ranked 45th out of 58 in the Health Policy and Services category (in the social sciences rankings) and 59th out of 72 in the Health Care Sciences & Services category (in the science rankings).
As to how this could get published in a peer-reviewed journal, well, not all peer review is created equal. Higher-ranked journals tend to have more thorough peer review. (They also, perhaps not surprisingly, have higher rates of retractions. Whether that's because people push the envelope to publish in them, or there are more eyeballs on them, or there's some other reason, is unclear. But there's no evidence that it's because their peer review is less thorough.)
Finally, I'd refer readers to this great primer on peer review by Maggie Koerth-Baker.
Gary Schwitzer also provided these helpful tips for journalists:
1. Brush up on the writings of John Ioannidis, who has written a great deal in recent years about the flaws in published research.
2. Journalists who live on a steady diet of journal articles almost by definition promote a rose-colored view of progress in research if they don't grasp and convey the publication bias in many journals for positive findings. Negative or null findings may not be viewed as sexy enough. Or they may be squelched prior to submission. While perhaps not a factor in this one case, it nonetheless drives home the point to journalists about the need to critically evaluate studies.
3. In this case, a journalist would be well-served by a friendly local biostatistician's review.
4. It is always more helpful to focus on the quality of the study rather than the impact factor of the journal or the reputation of the researcher (for reasons Ivan articulated). However, these are legitimate questions to ask any published researcher: "Why did you choose to submit your work to that journal? Did you submit it elsewhere and was it rejected? If so, what feedback did you get from the peer reviewers?"
Related Posts:
Tricks of the Trade: Finding Nuggets in the River of Medical Studies
This Post is Not Embargoed: Ivan Oransky on What You Need to Know About Embargo Policies
Photo credit: raneko via Flickr