How The News did the Texas hospital patient safety analysis
To identify rates of potentially preventable medical harm, The Dallas Morning News analyzed nearly 9 million patient-level records from hospitals across Texas. The analysis was based on Patient Safety Indicator software created by researchers for the Agency for Healthcare Research and Quality, part of the U.S. Department of Health and Human Services.
The News used the newest version of the software and the three most recent and complete years for which data is publicly available: 2007, 2008 and 2009. The software tracks 17 indicators. For two of them, it simply counts the number of observed complications.
For the other 15, it calculates an "observed rate" of how many adverse events occurred per patient at each hospital. It also calculates an expected rate for each hospital, based on its mix of patients and on how similar patients fared at hospitals in a national sample taken in 2008.
To allow fair comparisons among hospitals, a "risk-adjusted" rate is derived from the observed and expected rates.
The risk-adjusted rates put hospitals with radically different patient populations on a similar statistical footing. An urban hospital with large numbers of poor, sick patients, for example, can be compared with a suburban hospital with healthier patients.
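The adjustment described above can be sketched in code. This is an illustrative example of indirect standardization, the general approach behind such adjustments, not a reproduction of the AHRQ software; the formula (observed-to-expected ratio scaled by a reference rate) and all of the numbers are hypothetical stand-ins.

```python
# Illustrative sketch of indirect standardization. The reference rate
# and hospital figures below are invented for demonstration; they are
# not drawn from the AHRQ software or The News' data.

def risk_adjusted_rate(observed_rate, expected_rate, reference_rate):
    """Scale the reference-population rate by the hospital's
    observed-to-expected ratio."""
    return (observed_rate / expected_rate) * reference_rate

# Hypothetical national reference rate: 2 adverse events per 1,000 patients.
reference = 0.002

# Urban hospital: sicker patients, so a higher expected rate.
urban = risk_adjusted_rate(
    observed_rate=0.0030, expected_rate=0.0030, reference_rate=reference)

# Suburban hospital: healthier patients, so a lower expected rate.
suburban = risk_adjusted_rate(
    observed_rate=0.0015, expected_rate=0.0010, reference_rate=reference)

# Despite a lower raw rate, the suburban hospital fares worse after
# adjustment, because it did worse relative to what was expected of it.
print(urban)     # roughly 0.002
print(suburban)  # roughly 0.003
```

In this invented example, the urban hospital performed exactly as expected, so its adjusted rate equals the reference rate, while the suburban hospital's lower raw rate still exceeds what its healthier patient mix predicted.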
The software uses eight of the risk-adjusted indicators to produce a composite score, intended to gauge a hospital's overall performance on patient safety. The composite is a weighted average, in which some indicators have more influence on the final score than others, based on their statistical reliability.
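A weighted average of this kind can be sketched as follows. The indicator names, weights, and rates here are hypothetical; the actual AHRQ composite uses its own eight indicators and its own reliability-based weights.

```python
# Hypothetical sketch of a reliability-weighted composite score.
# The indicators, weights, and rates are invented for illustration only.

def composite_score(rates, weights):
    """Weighted average of risk-adjusted indicator rates, where more
    statistically reliable indicators carry larger weights."""
    total_weight = sum(weights.values())
    return sum(rates[name] * weights[name] for name in rates) / total_weight

# Invented rates (adverse events per patient) for three indicators.
rates = {"pressure_ulcer": 0.004, "postop_dvt_pe": 0.006, "postop_sepsis": 0.010}

# Invented weights: the most reliable indicator counts most.
weights = {"pressure_ulcer": 0.5, "postop_dvt_pe": 0.3, "postop_sepsis": 0.2}

score = composite_score(rates, weights)
print(score)  # roughly 0.0058
```

The effect is that a hospital's score moves most when its performance changes on the indicators the weighting scheme trusts most.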
The software also generated state averages after analyzing all patients at large Texas hospitals as a single group.
The PSIs are based on administrative data, which are detailed records of inpatient hospital visits.
The Texas Department of State Health Services provides administrative data, which includes demographics, diagnoses, procedures, payment sources and charges, for nearly every hospital stay. The data omits information that could be used to identify patients.
Results for two of the indicators can vary significantly depending on whether a patient suffered from the problem when admitted to the hospital. Those indicators cover pressure ulcers as well as post-operative deep-vein thromboses and pulmonary embolisms.
Texas did not begin collecting "present on admission" information until this year, so publicly available data do not yet indicate whether a patient already had such problems when admitted.
When such data are absent, the software imputes, or infers, whether the condition probably was present. This imputation is not perfect, however, so The News examined the potential impact that erroneous imputation might have on its composite results by calculating the composite without one or both of the ulcer or circulatory indicators. Doing so worsened or left unchanged the standings of three of the four Dallas hospitals with the worst composite scores in 2009.
But the test runs improved the ranking of Methodist Charlton Medical Center, which rose from the second-worst score to the 12th-worst score. That hospital said it treats many nursing home residents, who can be particularly susceptible to pressure ulcers.
For its final analysis, The News used the composite as it was designed.
Several weeks ago, The News shared a detailed methodology with the 27 large hospitals from the D-FW area that were studied, along with their individual rankings on the PSI indicators and composite scores. It also shared the detailed methodology with the Texas Hospital Association.