The Power of Small Data: Weigh the evidence before you report it
Taking the time to understand the quality and weight of health evidence before writing a news story can seem impossible given the insane pace of news gathering and posting today.
As a reviewer for HealthNewsReview.org, I often find myself wanting to give reporters a pass for missing key details when writing about a scientific study: “How can they be expected to read a study, find experts to help them sort through it, and write a compelling and accurate piece in a matter of hours?”
Then I remember that most studies are shipped out to reporters under embargo days – if not weeks – in advance. I remember that many studies have been previewed at conferences months earlier. And I remember that journalists who are doing their jobs will have established the right contacts to call, the right sites to visit, and the right processes to vet evidence quickly so they can write about important work.
I wrote last week about how important it is to examine data from multiple angles, but I wasn’t really talking about kicking the data’s tires. I have written many posts over the years about how to assess quality, drawing heavily on my experience as a reviewer for HealthNewsReview.org. To boil it all down, here are my top 10 tips, starting this week with the first five:
1. Cover the health questions that matter. When you are reviewing a dataset to develop a story, or when a news release on a health study crosses your desk, ask yourself, “Does my audience need this story?” I wrote in 2012 about the surprising amount of coverage for a purely theoretical music therapy study with no clinical application. This doesn’t mean you should never write about work whose main value is advancing science itself. Please do. But make it clear that what you are writing about isn’t something that is going to improve people’s lives in the near term – or even the long term.
2. Consider scope and impact. One of the first and easiest things to consider when mining a dataset is size. If you find that 10,000 people may have been misdiagnosed with a disease, that’s a bigger finding than just 10. And if your findings span 30 years, they will carry far more weight with your audience than a few months of data. There’s also the question of how big an impact the finding will have. Even if 10,000 people have been misdiagnosed, if that happened over a 30-year span during which 500,000 people were accurately diagnosed, your findings are going to seem fairly paltry. And if the solution to the problem is a $1 billion investment in some new technological intervention, or the invention of something that doesn’t yet exist, your findings are going to seem ridiculous. Try to get a sense early on of where the data might take you – a quick back-of-the-envelope calculation, like the sketch below, can help.
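Here is a minimal sketch of that scope check, using the hypothetical numbers above (10,000 misdiagnoses against 500,000 accurate diagnoses over 30 years – illustrative figures, not real data):

```python
# Back-of-the-envelope scope check with the post's hypothetical numbers.
misdiagnosed = 10_000
accurately_diagnosed = 500_000
years = 30

total_diagnoses = misdiagnosed + accurately_diagnosed
error_rate = misdiagnosed / total_diagnoses   # share of all diagnoses that were wrong
per_year = misdiagnosed / years               # average misdiagnoses per year

print(f"Misdiagnosis rate: {error_rate:.1%}")    # Misdiagnosis rate: 2.0%
print(f"Misdiagnoses per year: {per_year:.0f}")  # Misdiagnoses per year: 333
```

Two percent of diagnoses, or roughly 333 people a year, is a very different headline than “10,000 misdiagnosed.”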
3. Don’t cherry-pick the data to scare your audience. Back in 2013, I wrote about scientific studies and mass media stories that did just that. They took snapshots of health problems during small windows in time and presented them to the world as fairly definitive evidence that the Fukushima nuclear disaster was causing thyroid disorders in the United States. Thankfully, those badly drawn conclusions are mostly a distant memory – so much so that Sarah Fallon at Wired recently wrote a great piece explaining that even cancer rates within Japan related to the disaster have been blown out of proportion.
4. Pay close attention to the difference between harm and an indicator of harm. So often, reporters write about a “spike” or a “cluster” or a “dramatic rise” in something, the assumption being that this something is going to kill or maim more people. These spikes and clusters, though, can simply be the result of more screenings – note that I’m not saying better screenings. (The sketch after the quote below shows how.) In the Wired piece on Fukushima, Fallon writes that in the 1990s, South Korea started screening more people for thyroid cancer. That led to what, on paper, appeared to be a 15-fold increase in cancer, according to H. Gilbert Welch, a professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice. Fallon wrote:
How did South Korea combat this surge in cancer cases? A group of doctors (including Welch) wrote a letter in 2014 discouraging screening with ultrasonography. Poof. Thyroid operations dropped by 35 percent in a year. Because the best test ‘isn’t one that finds the most cancer,’ he says. ‘The best test is one that finds the cancers that matter.’
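To make that dynamic concrete, here is a toy sketch of screening-driven detection. The numbers are entirely hypothetical (not the Korean registry data): the true prevalence of detectable tumors is held flat, and only the share of people screened changes.

```python
# Toy model: detected cases track screening volume, not actual disease.
population = 1_000_000
prevalence = 0.01  # assume a flat 1% of screened people harbor a detectable tumor

for screened_share in (0.02, 0.10, 0.30):  # screening expands over time
    screened = population * screened_share
    detected = screened * prevalence
    print(f"{screened_share:.0%} screened -> {detected:,.0f} cases detected")

# 2% screened -> 200 cases detected
# 10% screened -> 1,000 cases detected
# 30% screened -> 3,000 cases detected
```

Detections jump 15-fold while the underlying disease never changes – exactly the kind of “spike” that is an indicator of screening, not of harm.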
Next: How meaningful comparisons can bring your data to life.
[Photo by Abd allah Foteih via Flickr.]