Through a partnership and contract with Ramesh K. Shukla, Ph.D., of Virginia Commonwealth University's Department of Health Administration, VHI has worked to develop information that fairly compares hospitals' mortality and readmission outcomes. Dr. Shukla directed the scientific research that produced a reliable and valid methodology for the cardiac care information. Because some patients have more serious conditions or require more complicated treatment than others, it was very important to VHI to develop a methodology that adjusts for these differences in patient severity of illness and risk of mortality. Even after these adjustments are applied, some hospitals have lower rates of mortality or readmission than others.
VHI tested mortality rate outcomes at the APR-DRG level. Most commonly, an APR-DRG was automatically excluded because too few or no mortality cases were reported. Other APR-DRGs did not pass additional statistical tests. Overall, about 75% of APR-DRGs had sufficient cases and passed the statistical tests; those not meeting the criteria were excluded from mortality rate calculations. In some service lines, no APR-DRG passed the statistical tests, so those service lines are not reported.
Development of these reports began with Virginia hospital discharges in calendar year 2000. Detailed statistical analysis and review followed to help identify variables outside the control of hospitals that might affect mortality outcomes. In 2006 a cardiac care expert panel provided input on the adjustment methodology and recommended adopting version 20.0 of 3M's APR-DRGs and a methodology for identifying 30-day related readmissions.
The following additional adjustments were first applied to 2004 discharges for publication in September 2006:
VHI collected Cardiac Care data from all of the hospitals in the state of Virginia and then ran statistical tests to determine the validity of the 3M APR-DRG system for classifying and analyzing clinical risk. The tests were run using SPSS statistical analysis software under the direction of Ramesh K. Shukla, Ph.D., of the Williamson Institute. Tests included analysis of face validity, paired correlations, and quantitative analysis of confidence intervals. After these statistical techniques were applied across the three service lines, the methodology was adopted.
When a doctor evaluates a patient, a wide variety of information may be used to make a diagnosis and recommend needed care. These indicators of condition may include age, gender, current illnesses, family history, the results of blood and other tests, and other information the doctor obtains. The information VHI uses was derived in part from the findings of hospital care and includes a patient's age, gender, diagnoses, and the diagnostic and surgical procedures performed. All of these factors are considered when VHI compares hospitals based on their patients.
Four statistical tests were employed to determine whether an APR-DRG was statistically sound for inclusion in the population used to calculate expected mortality values under the 3M APR-DRG methodology. The tests were as follows:
The 3M APR-DRG method ranks patient risk of mortality in subclasses one (1) through four (4), with four having the highest risk of mortality. This test asks whether expected mortality rates increase as one moves up the subclass rank order. For example, suppose an APR-DRG had 400 cases in Virginia and 100 of them were rated as risk of mortality subclass 1. Ten percent of these cases were inpatient deaths, giving an expected mortality rate of 10%. The same APR-DRG had 100 cases categorized as risk of mortality subclass 2, and these 100 cases had a mortality rate of 20%. Continuing on, there were 100 cases each in subclasses 3 and 4, with mortality rates of 30% and 40% respectively. Of the six pairwise comparisons among the four subclasses, this APR-DRG passed all six: each lower subclass had a lower mortality rate than every higher risk of mortality subclass. If an APR-DRG failed two or more of these comparisons (for example, if subclass 3's expected mortality rate was less than both subclass 1's and subclass 2's), it would be excluded.
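The monotonicity check described above can be sketched in a few lines of Python. This is an illustration, not VHI's actual implementation; the function name and data layout are assumptions.

```python
from itertools import combinations

def monotonicity_test(subclass_rates):
    """Check that mortality rates rise with risk-of-mortality subclass.

    subclass_rates maps each subclass (1-4) to its mortality rate.
    Every lower subclass should have a lower rate than every higher
    subclass; failing two or more of the six pairwise comparisons
    excludes the APR-DRG.
    """
    failures = sum(
        1
        for lo, hi in combinations(sorted(subclass_rates), 2)
        if subclass_rates[lo] >= subclass_rates[hi]
    )
    return failures < 2  # True = passes the monotonicity test

# Worked example from the text: rates of 10%, 20%, 30%, 40% pass
print(monotonicity_test({1: 0.10, 2: 0.20, 3: 0.30, 4: 0.40}))  # True
# Subclass 3 below subclasses 1 and 2: two failed comparisons, excluded
print(monotonicity_test({1: 0.10, 2: 0.20, 3: 0.05, 4: 0.40}))  # False
```

Note that a single out-of-order comparison is tolerated; only two or more failures trigger exclusion, matching the rule in the text.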
Small populations can reduce the reliability of a prediction or conclusion. Therefore, a sample size of at least 11 cases was set as the minimum for each APR-DRG risk of mortality subclass. APR-DRGs not meeting this sample size for each subclass were excluded.
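The minimum-sample-size rule is a simple filter. A minimal sketch, with an assumed data layout of case counts keyed by subclass:

```python
MIN_CASES = 11  # minimum cases required in each risk-of-mortality subclass

def meets_sample_size(subclass_counts):
    """Return True only if every subclass (1-4) has at least MIN_CASES cases."""
    return all(subclass_counts.get(s, 0) >= MIN_CASES for s in range(1, 5))

print(meets_sample_size({1: 100, 2: 100, 3: 100, 4: 100}))  # True
print(meets_sample_size({1: 100, 2: 100, 3: 10, 4: 100}))   # False: subclass 3 too small
```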
Whether a patient lived or died at discharge is a binary outcome (two possibilities). The c-statistic evaluates how well a method predicts whether a given patient will experience the event of interest, in this case death. A c-statistic of 0.5 indicates no ability to discriminate which patients will die, while a c-statistic of 1.0 indicates the model predicts perfectly whether a patient will die. An APR-DRG was considered acceptable if its c-statistic was greater than 0.6.
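The c-statistic is equivalent to the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal sketch of that pairwise-concordance definition, with hypothetical risks and outcomes (not VHI data):

```python
def c_statistic(predicted_risk, died):
    """Concordance (c-statistic) for a binary outcome.

    Over all (death, survivor) pairs, counts how often the patient who
    died received the higher predicted risk; ties count as half.
    0.5 = no discrimination, 1.0 = perfect discrimination.
    """
    deaths = [p for p, d in zip(predicted_risk, died) if d]
    survivors = [p for p, d in zip(predicted_risk, died) if not d]
    concordant = 0.0
    for pd in deaths:
        for ps in survivors:
            if pd > ps:
                concordant += 1.0
            elif pd == ps:
                concordant += 0.5
    return concordant / (len(deaths) * len(survivors))

# Hypothetical expected mortality rates and outcomes (1 = inpatient death)
risks = [0.10, 0.20, 0.30, 0.40, 0.15, 0.35]
outcomes = [0, 0, 1, 1, 0, 1]
print(c_statistic(risks, outcomes) > 0.6)  # True: above the 0.6 threshold
```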
Discriminant analysis uses a calculation called canonical correlation (CR), the proportion of total variability explained by differences between groups, to measure the association between an APR-DRG's expected mortality rate and its actual mortality rate. If an APR-DRG had a CR value less than 0.2, it was excluded.
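One common way to express this quantity is as the square root of the between-group sum of squares over the total sum of squares, grouping patient outcomes by risk subclass. The sketch below uses that definition with made-up data; VHI's exact SPSS computation may differ.

```python
def canonical_correlation(groups):
    """Canonical correlation as sqrt(between-group SS / total SS).

    groups: one list of binary outcomes (1 = death) per risk-of-mortality
    subclass. A CR below 0.2 would exclude the APR-DRG.
    """
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand_mean) ** 2 for v in all_vals)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    return (ss_between / ss_total) ** 0.5

# Hypothetical subclasses of 10 cases each with mortality of 10-40%
groups = [[0] * (10 - d) + [1] * d for d in (1, 2, 3, 4)]
print(canonical_correlation(groups) >= 0.2)  # True: APR-DRG retained
```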
Using 1997 data, 354 of 357 APR-DRGs were tested (two APR-DRGs were ungroupable and one had zero cases). APR-DRGs first had to pass the test for monotonic properties; those that did went on to the second, third and fourth tests. An APR-DRG passing three of the four tests was recommended for inclusion. Seventy-four APR-DRGs did not pass the first test for monotonic properties. Of the remaining 280 APR-DRGs, 259 (73% of those tested) passed three of the four tests. All APR-DRGs used in the Cardiac Care study passed all four validity tests, further supporting the public release of cardiac care data.
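The overall screening sequence can be summarized in one small function. This is an illustrative sketch of the decision rule as described above, where the monotonicity test acts as a gatekeeper and inclusion requires passing at least three of the four tests:

```python
def include_apr_drg(monotonic, sample_size_ok, c_stat_ok, cr_ok):
    """Decide whether an APR-DRG is recommended for inclusion.

    The monotonicity test must pass before the others are considered;
    inclusion then requires passing at least three of the four tests.
    """
    if not monotonic:
        return False
    return monotonic + sample_size_ok + c_stat_ok + cr_ok >= 3

print(include_apr_drg(True, True, True, False))  # True: 3 of 4 passed
print(include_apr_drg(False, True, True, True))  # False: failed gatekeeper
```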
Click here to view a step-by-step example of how mortality is calculated.
Click here to view a step-by-step example of how related readmissions are calculated.
Click here to view the list of APR-DRGs included in determining related readmissions.
Updated on: 3/5/2019