What the ‘hotspotting’ disappointment teaches us about health care
We were not taught about “hotspotting” in medical school, nor did we learn about it in residency, but we didn’t need to. During every emergency department and medicine rotation we were sure to meet a small group of regulars: patients we cared for during their frequent admissions for asthma or heart failure or alcohol withdrawal or a variety of other maladies. We prescribed medications to open their airways, shed the extra fluid and calm their tremors. When they disappeared from their hospital beds, we knew where they would be: on the street or at the local pub, ordering a drink with a hospital-issued wristband still in place.
It wasn’t until years later that we learned about projects to proactively identify these patients, sometimes termed “superutilizers,” and target them in an effort to improve their care and decrease health care costs. This approach became de rigueur among innovative health care systems after Atul Gawande published a 2011 article in The New Yorker about such a program in Camden, New Jersey, pioneered by Dr. Jeffrey Brenner.
After Gawande’s article, health systems and insurance companies around the nation scrambled to duplicate the Camden model. Gawande called the approach “hotspotting”: a team of community health workers, social workers and nurses identified vulnerable patients and coordinated their outpatient medical care and social services in the hopes of improving their health and preventing avoidable emergency department visits and hospital admissions. The reported impact of the program was eye-catching. According to Gawande, enrollees in the program experienced a 40% drop in hospital and emergency department visits, and their collective hospital bills dropped from $1.2 million to just over $500,000 per month, a remarkable 56% reduction.
But while the hotspotting approach received widespread attention in the lay and health care communities, there were quiet skeptics. Although the dramatic reductions in emergency department and hospital care seemed impressive, those who study care for complex patients recognized a key flaw in the analysis. Because there was no control group, it was not clear whether the Camden program actually improved patient outcomes. In other words, the same improvements may have occurred even without the program.
Though it might seem surprising that such striking changes could occur spontaneously, research has shown that emergency room and hospital utilization among superutilizers tends to subside over time because of a counterintuitive statistical phenomenon known as “regression to the mean.” The term refers to the tendency of extreme observations to be followed by more typical ones, since an extreme stretch usually reflects an unusual run of circumstances as well as any underlying cause. For example, a sports team that wins numerous consecutive championships is likely to slip in subsequent seasons, because that much good fortune rarely holds. Similarly, an unusually stormy stretch of weather is likely to be followed by calmer conditions simply because calmer periods are more common.
For this reason, one would expect that patients with unusually high rates of emergency room and hospital utilization at one point in time will use less care in the coming years, regardless of the type of care they receive.
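To make this concrete, here is a minimal simulation in Python. It is our own illustration with made-up numbers, not data from Camden or any real program: it generates a hypothetical population of patients, selects the top 1% of utilizers based on a single year of visits, and shows that the same patients use markedly less care the following year even though no intervention of any kind occurs. The patient count and visit rates are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: each patient has a stable underlying rate of
# hospital use, and each year's observed visit count adds Poisson noise.
n_patients = 100_000
true_rate = rng.gamma(shape=2.0, scale=1.0, size=n_patients)  # mean visits/year

year1 = rng.poisson(true_rate)  # observed visits, year 1
year2 = rng.poisson(true_rate)  # observed visits, year 2; no intervention

# "Hotspot" the top 1% of utilizers based on year-1 visits alone.
cutoff = np.quantile(year1, 0.99)
selected = year1 >= cutoff

print(f"Year-1 visits among selected patients: {year1[selected].mean():.1f}")
print(f"Year-2 visits among the same patients:  {year2[selected].mean():.1f}")
# Utilization drops sharply in year 2 even though nothing changed, because
# selecting on an extreme year partly selects on a run of bad luck.
```

This is exactly why a control group matters: a randomized comparison separates the drop that would have happened anyway from any drop the program itself caused.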
We expressed our “Slow Medicine” concerns about the hotspotting hype in a 2016 blog piece. In that article, we noted that “upon closer inspection, careful analysis of the Camden program and others like it raises concerns that these apparent jaw-dropping improvements may be overstated.” At that time, we and many others called for a rigorous randomized trial testing the Camden approach. Only by comparing outcomes among patients in the program with a control population would it be possible to determine if the program actually worked.
To their credit, the Camden team heeded that advice and, in partnership with a research team from MIT and the National Bureau of Economic Research, initiated such a study, the results of which were published last week in the New England Journal of Medicine. Disappointingly, the study revealed that hospital readmission rates for patients in the Camden program were no different from those of patients receiving usual care.
What does this mean for complex patients who make frequent trips to the emergency room and hospital?
First, providing superb care to these patients remains an essential and worthy goal for our medical and social service systems. The most complex 5% of the population accounts for 50% of total health care costs, while the top 20% accounts for 80% of costs. We cannot let the disappointing findings from this recent study deter us from efforts to improve care for our sickest and most vulnerable patients.
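For a sense of scale, the arithmetic implied by those shares is straightforward: dividing each cost share by its population share gives the per-person spending multiple relative to the average. The short sketch below simply works that out from the figures quoted above; no new data are involved.

```python
# Back-of-the-envelope arithmetic for the cost concentration cited above:
# each cost share divided by its population share gives the per-person
# spending multiple relative to the population-wide average.
for share_patients, share_costs in [(0.05, 0.50), (0.20, 0.80)]:
    multiple = share_costs / share_patients
    print(f"Top {share_patients:.0%} of patients spend {multiple:.0f}x the average")
```

In other words, the costliest patients spend at four to ten times the average rate, which is precisely why they look like such promising targets for cost-saving programs.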
Second, even if hotspotting does not significantly improve the efficiency of health care or lower costs, these programs may still be beneficial. Although the patients in the treatment group did not have lower hospital utilization rates compared to controls, there were signs that the program may have led to better care for some. For example, enrolled patients were substantially more likely than control patients to receive supplemental nutrition assistance and other important social resources.
And while the Camden model was not effective in lowering hospital readmissions, it is possible that other models would be. In fact, there is some encouraging evidence that other programs designed to improve care for complex, high-needs patients may have modest but favorable effects on emergency room and hospital utilization rates. The most effective of these programs seem to “utilize coordinators who develop close relationships with patients and their primary care clinicians” and to focus on hospital-to-home care transitions.
In the short term, however, we believe the latest New England Journal of Medicine study should lead health care leaders to question whether their programs for managing high-needs, complex patients are working as well as they think. Many large health systems and health plans spend considerable resources on case management programs for complex patients that likely deliver little value. We believe those resources should be shifted to interventions with a stronger evidence base.