Tricks of the Trade: Finding Nuggets in the River of Medical Studies
Covering medical research is a lot like panning for gold. There's a lot to wade through before you're sure you've found a nugget, one you can explain to laymen without exaggerating either the promise or the peril.
There's no shortage of research to report. That's part of the problem: We too often give our readers and viewers whiplash by reporting dueling studies, sometimes within mere days or weeks of each other. Caffeine's bad for you. Nope, it's good. Aspirin prevents cancer. Nope, it doesn't. Hormone therapy's too dangerous to use (well, maybe not in this case or that one).
That whiplash isn't hard to avoid. It starts with a reporter's need to understand and then convey some simple truths: Science evolves. It builds slowly, usually through one methodical study at a time. Findings must be reproducible by other researchers. "Eureka!" moments are pretty rare. If you offer a more global view - that today's news is but a step in ongoing research - you'll help readers understand why they might have read the opposite not too long ago.
Then there's the context of that day's news. Is this a big step forward or a little one? Does the research involve a disease that affects many people, such as cancer or diabetes, or one that affects relatively few? Does it affect them now, or later, and only if the next step pans out? Or is it just interesting to know?
If the story is about a potential treatment, first we need to know how bad the disease is and how well existing therapies work. Then we need to consider how big a benefit the new therapy offers. And for whom? If the drug was tested only in left-handed redheads, there's a problem. That's not as far-fetched as it sounds, since treatment trials have very strict inclusion rules. People with multiple illnesses, especially, tend to be excluded.
And when you're writing about treatments, don't forget the risks. Every drug comes with side effects. To ignore that is to mislead.
Hint: "Breakthrough" is a really overused word. No matter how often it appears in press releases, be skeptical.
Before you can start choosing words, however, you've got to decide whether a study is worth reporting at all. It's a given that it must be new and interesting. But to judge its newsworthiness, you must evaluate the science. Ask yourself these questions:
- What is the source? Studies published in top-tier, peer-reviewed medical journals - for instance, Science, the Journal of the American Medical Association, and the New England Journal of Medicine - are the most respected. But newsworthy research also may be published in obscure journals, presented at conferences, or even released as summary results in drug-company press releases. Beware of research presented only in a press release. And be aware that even studies presented at the most respected medical meetings are still essentially preliminary; other scientists have little actual data to scour for flaws.
- Who paid for the research? And who performed it? Check not only the researchers' credentials but also potential financial conflicts of interest, such as whether they hold a patent on the work or are connected with a private biotech venture (which is common among university researchers today). Commercial connections require disclosure.
- What are the study's flaws? Every study has inherent flaws, and good researchers will disclose them during an interview. Always ask if there's another explanation for the finding.
- What is the type of study? Not every story has to be based on the ideal randomized, double-blind, placebo-controlled trial. Every so often, it's even okay to write about research in mice or petri dishes. But you have to understand the weaknesses of the numerous different kinds of studies - case-control, cohort, retrospective vs. prospective, etc. - so as not to overplay their significance.
A Phase I study, for instance, is never ever going to prove anything except that more research is needed. And, boy, have a lot of mice been cured of cancer with substances that failed miserably the first time they were tried in people. That's not to say that we shouldn't cover a mouse study that suggests a big step in understanding how a certain disease causes symptoms, or that the first few patients to respond to a cancer vaccine shouldn't make a headline. Those are just stories that require lots of very explicit cautions, stated up front, about what happens next. The National Cancer Institute offers a good basic explanation of the different types of clinical trials.
- What is the study's sample size and length? A finding in a few dozen patients carries far less weight than one in thousands. And a six-month study of a treatment for a chronic disease, such as arthritis, has little practical value. Long-term research showing that a particular type of chemotherapy reduces cancer death rates is more valuable than a shorter study showing only that it made tumors shrink.
- What is the relevance? Search on PubMed to see how the new work relates to other research in the field. Does it support or contradict past studies? Why? Is there a biologic rationale for the finding? The authors of these past studies may also be good sources to interview about the merits or drawbacks of the current research. Medical stories should include at least one independent expert to provide context. Build an address book of go-to experts who can steer you away from research with problems that may not be obvious to someone who's not an expert in that field.
- Does the math hold up? I know, many journalists hate math and aren't good at it. One of my most embarrassing rookie mistakes was not double-checking some simple percentages in a study from the Centers for Disease Control and Prevention. Yes, math mistakes can slip through peer review. And there's far more involved than simple calculations. Invest in Victor Cohn's "News & Numbers," a great primer.
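A generic illustration of the kind of check that pays off (these numbers are mine, not from that CDC study): watch the difference between percentage points and percent change.

    If a disease rate rises from 4 percent of the population to 5 percent, that is
    an increase of 1 percentage point - but a relative increase of 25 percent. A
    press release can truthfully cite either figure, and one sounds far more
    dramatic than the other.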
- Is the research statistically significant? If it is, that means the result is unlikely to have occurred by chance alone - by convention, usually less than a 5 percent probability of seeing a result that extreme if there were no real effect (p < 0.05). It does not mean chance is impossible, and it says nothing about how big the effect is. And remember: rates, not a raw tally, tell the scope of a problem.
- What is the risk? Explaining risk is among the biggest hurdles. Say a weird new disease crops up that causes, oh, purple itchy spots followed by hearing loss. A few doses of a drug will cut in half your chances of getting sick. Cutting risk by 50 percent sounds pretty good, right? That's called relative risk, and it's the comparison many studies use.
But wait, how many people actually get sick? One of every million people. That's absolute risk. Suddenly a 50 percent cut seems less dramatic, especially if that drug comes with troublesome side effects.
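To make the arithmetic concrete (the one-in-a-million figure is this article's hypothetical, not a real disease):

    Absolute risk without the drug: 1 in 1,000,000 per year
    Relative risk reduction: 50 percent
    Absolute risk with the drug: 0.5 in 1,000,000 per year
    Absolute risk reduction: 0.5 in 1,000,000 - meaning roughly 2 million people
    would have to take the drug for a year to prevent a single case.

That last way of putting it - epidemiologists call it the number needed to treat - is often the clearest test of whether a dramatic-sounding relative risk actually matters to readers.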
There are times when how you report the numbers can make or break the context of a story. I remember grappling with how risky hormone replacement therapy was when the government abruptly ended a major study a few years ago. Taking the hormones after menopause increased women's chances of a heart attack by 29 percent, and of a stroke by 40 percent.
If you were taking hormones, was it time to panic? The researchers crunched some slightly more user-friendly numbers: For every 10,000 women who took the combo for a year, there would be eight more strokes and seven more heart attacks. It still was complicated, but hopefully that extra number helped women weigh their odds a little better.
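The underlying conversion is simple arithmetic, and it's worth doing yourself as a sanity check (this is back-of-the-envelope math on the figures above, not the study's own analysis):

    extra cases per year = baseline rate x relative increase
    So 7 extra heart attacks per 10,000 women per year at a 29 percent relative
    increase implies a baseline on the order of 24 heart attacks per 10,000 women
    per year (7 divided by 0.29 is about 24).

If the numbers you're handed don't roughly square with each other this way, that's a question for the researchers.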
This may sound like a lot of evaluation for every study that crosses the desk, but don't worry: Much of it becomes second nature, so you can quickly weed out the research that's not ready for prime time.
Lauran Neergaard is the Washington-based medical writer for The Associated Press.