Reprinted with permission from the Columbia Journalism Review.
In a highly touted effort to improve the quality of hospital care, the federal government has started disclosing data that ostensibly reveals which hospitals are best (and worst) at keeping their patients safe. But a few weeks ago, Kaiser Health News presented some not entirely unexpected news that turned conventional wisdom about patient safety data into, well, not-so-conventional wisdom. A piece by Jordan Rau raised serious questions about the efficacy of the federal government's efforts to turn patients into savvy shoppers. The data, it seems, may not be ready for prime time.
I rang up Rau, a veteran health journalist and an expert in these matters, for a Q and A to help all of us who may be tempted to use the data in ways we probably shouldn't. Here's what he had to say:
Trudy Lieberman: What's the problem with the patient safety measures?
Jordan Rau: These metrics, which measure such things as serious blood clots and accidental cuts and tears, were created for a different purpose. The original aim was to help hospitals look at and track internal problems. They were not set up to compare one hospital with another.
TL: Isn't that what the hospitals are squealing about?
JR: Yes. The concern is that when you compare hospitals that are very different (for example, a teaching hospital that does a lot of complex surgery with a community hospital where fewer patients are going under the knife), you get a distorted result.
TL: Have hospitals themselves caused some of the problem?
JR: Yes. Some have become so meticulous and adept at coding and billing Medicare that more mistakes show up in their records, making it appear they have higher error rates. So was something really a problem, or an artifact of exhaustive coding? It's hard to know.
TL: What else is wrong with these measures?
JR: Hospital quality measures are still in the teenage years of their development. All these measures are imprecise, and that includes measuring readmission rates, infections, and patient experiences.
TL: So is the government saying, "Let's get this stuff out, even if it's not so great"?
JR: Implicitly, yes. But in their defense, you can't let the perfect be the enemy of the good. You can make a case that you can't get hospitals on board to be measured publicly if you don't start somewhere.
TL: Which measures are they starting with?
JR: This fall they are going to adjust payments to hospitals based on their performance on the patient experience surveys and process measures like "Did the patient get an antibiotic before surgery, or a beta blocker after a heart attack?"
TL: Most survey research (involving all kinds of services, not just hospitals) shows most people are generally satisfied with whatever is being measured. How will the Centers for Medicare and Medicaid Services (CMS) differentiate hospitals?
JR: Some patients are more satisfied than others. So CMS will set a cutoff point on a scale, and hospitals above it will get a little bit more money while those below will get a bit less. It's like Race to the Top for schools. And CMS gives credit for improvement, so underperformers can still get bonuses if their scores are getting better at a faster clip than other hospitals'.
TL: What kind of bonus will they get?
JR: The bonus pool comes from a withhold of one percent of hospitals' aggregate Medicare reimbursement. Medicare will hold back this one percent and then dole it out in the form of bonuses. So it's sort of a penalty and a bonus at the same time. The amount withheld grows to two percent in the fall of 2016.
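For readers who want to see the withhold-and-redistribute mechanics spelled out, here is a rough sketch in Python. The hospital names, payment amounts, performance scores, and the proportional redistribution rule are all hypothetical illustrations, not CMS's actual value-based purchasing formula.

```python
# Illustrative sketch of a budget-neutral withhold-and-bonus scheme.
# All figures and the scoring rule below are hypothetical; CMS's real
# value-based purchasing calculation is more involved.

WITHHOLD_RATE = 0.01  # 1 percent withheld; the article notes it rises to 2 percent in fall 2016

hospitals = {
    "Hospital A": {"medicare_payments": 50_000_000, "score": 0.80},
    "Hospital B": {"medicare_payments": 30_000_000, "score": 0.40},
    "Hospital C": {"medicare_payments": 20_000_000, "score": 0.60},
}

# Step 1: withhold 1 percent of every hospital's Medicare reimbursement into a pool.
pool = sum(h["medicare_payments"] * WITHHOLD_RATE for h in hospitals.values())

# Step 2: pay the pool back out in proportion to size-weighted performance scores,
# so the scheme is a penalty and a bonus at the same time.
total_weight = sum(h["medicare_payments"] * h["score"] for h in hospitals.values())

for name, h in hospitals.items():
    withheld = h["medicare_payments"] * WITHHOLD_RATE
    bonus = pool * (h["medicare_payments"] * h["score"]) / total_weight
    net = bonus - withheld
    print(f"{name}: withheld ${withheld:,.0f}, bonus ${bonus:,.0f}, net ${net:+,.0f}")
```

In this toy setup the strong performer ends up with a net gain and the weak performer a net loss, while the total paid out exactly equals the total withheld.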
TL: Have the hospitals been squawking about that?
JR: For the most part they've given up complaining, because it's part of the law now. Instead they are focusing on trying to influence CMS about the choice of measures and the weight they give each measure in setting payment.
TL: Which measures don't they like?
JR: They don't like the measure requiring them to report their rates of hospital-acquired infections, especially because there's a separate penalty for hospitals that have higher rates. They consider that double jeopardy.
TL: Are there other metrics hospitals don't like?
JR: They don't like the metrics for readmission rates. There's concern that safety net hospitals that see poor patients can end up having higher readmission rates. Those patients have a harder time paying for medicine and following discharge instructions, and often don't have the social support structures to improve.
TL: What are the big teaching hospitals objecting to?
JR: They don't want to be compared to community hospitals, and they don't think the risk adjustment CMS uses really distinguishes between sick and extremely sick patients.
TL: Are they correct about that?
JR: As we discussed, these measures are coarse. But to lose a lot of money, hospitals will have to do badly in multiple domains. If they're just a laggard in one area, such as patient safety, but above average in outcomes or patient surveys, it will balance out.
TL: Have consumers been using any of these measures: patient safety measures, satisfaction scores, and the so-called process measures, like making sure a patient gets an antibiotic one hour before surgery?
JR: Most evaluations show that consumers don't use these data in selecting hospitals.
TL: Why don't they use them?
JR: A lot of things that put you in the hospital are immediate problems that don't lend themselves to comparison shopping, and consumers are directed to hospitals based on their doctors' preferences, their insurance coverage, geographic convenience, or their general sense of a hospital's reputation.
TL: Are there any measures being used by consumers?
JR: That's a great question. I don't think a majority of consumers use any of them.
TL: That brings us around to our fellow journalists. How should they use these measures, if at all?
JR: Journalists can use them, but they should do so carefully. No one measure captures overall hospital quality, and you should be careful about your comparisons. With the right context, I don't think there's anything wrong with using the patient safety indicators in stories.
TL: Can you give me an example where a news organization used them properly?
JR: The Dallas Morning News is a prime example of how to responsibly use the measures in its coverage of Parkland Memorial Hospital. Texas has good discharge data, and the paper did its own analysis of patient safety measures, comparing Parkland to other large Texas hospitals. Patient safety measures were just one piece of the coverage; the paper also used lawsuits and government inspection reports. So, bottom line: patient safety measures can be a good piece of a larger mix, but they are not definitive on their own.
TL: What should journalists not do?
JR: They should not use them without talking to the hospitals, and they should be very cautious in comparing teaching hospitals to community hospitals. They should also make clear that the safety ratings do not cover all the patients in a hospital; they cover only certain types of cases and accidents. And they should not assume that a hospital rated better than average by Medicare is in fact superior or trouble-free; those hospitals could simply be underreporting problems.
TL: So should they construct any rankings of hospitals based on the measures?
JR: It depends on the context. I wouldn't do the ten worst or the ten best hospitals.
TL: How should reporters use the mortality measures, considering that the latest research reported in Health Affairs shows they haven't reduced mortality?
JR: Just because publication of data hasn't led to improvements doesn't mean the data isn't accurate. The mortality data (rates of people dying within 30 days of discharge) is pretty good.
TL: Do you have any other advice for journalists?
JR: As complex as these measures are, they are great ways to get conversations started with a hospital executive you are interviewing. Also, when hospital executives say Medicare data is not accurate, press them to produce their own internal data to prove their data are more accurate than Medicare's.
TL: Think for a moment about reporters just starting to cover hospital metrics. What should they do?
JR: One thing they should do is drill down into the spreadsheets from Hospital Compare, because there's a lot more data than CMS puts on its website. For example, CMS publishes only the percentage of patients who rave about their hospitals, but the full data include the percentage of patients who panned their experience.
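As a starting point for that kind of drill-down, here is a minimal sketch using pandas. The file name, column labels, and measure descriptions are placeholders for illustration; the actual Hospital Compare downloads change over time, so the headers in the CSV you download should be checked before running anything like this.

```python
# Minimal sketch of drilling into a Hospital Compare patient-experience download.
# The file name and column labels are assumed placeholders, not the guaranteed
# layout of the real files; inspect the CSV headers first.
import pandas as pd

df = pd.read_csv("HCAHPS_hospital.csv")  # hypothetical local copy of the survey file

# Look at both ends of the rating scale, not just the top-box share the website highlights.
measures_of_interest = [
    "Patients who gave a rating of 9 or 10 (high)",    # what gets promoted
    "Patients who gave a rating of 6 or lower (low)",  # the share who panned their experience
]
subset = df[df["HCAHPS Answer Description"].isin(measures_of_interest)]

# Compare one state's hospitals against each other on both measures.
state = subset[subset["State"] == "TX"]
print(state.groupby("HCAHPS Answer Description")["HCAHPS Answer Percent"].describe())
```

The point of the sketch is simply that the downloadable spreadsheets carry more rows per hospital than the website surfaces, including the low-rating percentages.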
TL: Anything else?
JR: When you're working with patient experience data, you want to be very careful in comparing patient satisfaction in very different geographic areas. Patients in some areas, like New York, Miami, and New Jersey, tend to voice their complaints more freely, so hospitals in those areas are more likely to have lower ratings than hospitals in South Dakota.
TL: Should we be using the so-called process measures-like the portion of pneumonia patients receiving a flu vaccine, or the portion of heart attack patients receiving discharge instructions?
JR: I don't think the process measures make for great stories because they represent the minimal expectation for basic care. Most hospitals are getting a score of 93 or 94 or even 99 percent. It's not compelling to do a story that says a hospital's compliance on a measure is three percentage points below the average. Very few places are showing up as outlying poor performers.
TL: Then they may not be compelling for consumers either? Would you agree or not?
JR: Process measures aren't. But if you were considering a hospital that was rated worse than average in readmissions, mortality, infections, or even one of the patient safety indicators, and you had the luxury of time to pick a place, you should ask the hospital to explain itself.
TL: What's your last bit of advice?
JR: If you're going to critique a hospital based on any of these measures, make sure the hospital is really a statistical outlier.
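One simple way to check whether a hospital really stands out is sketched below: a plain two-sided binomial test of an unadjusted complication rate against a benchmark rate. The case counts and benchmark are hypothetical, and this is not the risk-adjusted methodology CMS or the Dallas Morning News used; it is only a first-pass sanity check before calling a hospital an outlier.

```python
# Sketch: is a hospital's raw complication rate a statistical outlier versus
# a benchmark? Plain two-sided binomial test on unadjusted counts; not the
# risk-adjusted approach the official measures use.
from scipy.stats import binomtest

hospital_cases = 800      # hypothetical discharges in the measure's denominator
hospital_events = 32      # hypothetical complications observed at the hospital
benchmark_rate = 0.025    # hypothetical national rate for the same measure

result = binomtest(hospital_events, hospital_cases, benchmark_rate, alternative="two-sided")
observed_rate = hospital_events / hospital_cases

print(f"Observed rate: {observed_rate:.3f} vs benchmark {benchmark_rate:.3f}")
print(f"p-value: {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Difference is larger than chance alone would readily explain.")
else:
    print("Difference is within what chance could explain; don't call it an outlier.")
```

Even when a difference clears a test like this, the caveats above still apply: talk to the hospital, and check whether coding practices or patient mix could account for the gap.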
TL: Look into a crystal ball for a moment. Do you see hospital metrics as a path to journalistic glory in the future?
JR: Much of the good hospital reporting has been about horror stories of the malpractice kind, where patients are killed or maimed. This new age of hospital transparency should give journalists a chance to write about the routine quality of care that most patients are likely to receive. That may not be as sexy as a botched operation, but it's important to readers and everyone else if we're going to better understand what we're actually getting for some of the most expensive care in the world. Hopefully one day we'll be able to know which hospitals really deliver superior care, and which ones just advertise that they do.