
How Missing Data Affects Physicians’ P4P Bonuses

Pay-for-performance (P4P) programs often pay bonuses to (or impose penalties on) physicians, hospitals, and other providers based on the quality of care their patients receive. Measuring quality of care, however, is often difficult. For chronic conditions, for instance, many patients eligible for outcome measures may be lost to follow-up. This attrition can affect provider evaluations and bonus payments.

To examine the magnitude of the attrition problem for P4P programs, a paper by Ryan and Bao (2013) examines what would happen to a P4P program that measures patient remission rates once one accounts for the fact that patient follow-up data are not always complete. To do this, the authors use a randomized controlled trial (RCT) called IMPACT (Improving Mood-Promoting Access to Collaborative Treatment) to generate parameters for a simulation. The IMPACT data include both a clinical registry used by care managers in the trial to document exposure to the intervention and to track patient outcomes (“registry data”) and longitudinal research interviews that independently assessed patient outcomes at regular intervals (“research data”).

The authors use these data to examine whether patients with missing registry data differ in their likelihood of remission. They compare “the rate of remission for those with missing registry data (0.232) and those without missing registry data (0.262) at the patient level. The difference between these rates (−.030) provides an estimate of the association between data missingness and remission (i.e., the effect of systematically missing data on remission).”
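To make that patient-level comparison concrete, here is a minimal sketch in Python. The DataFrame, column names, and toy values are illustrative assumptions, not the IMPACT variables; the key point is that remission for patients with missing registry data can only be observed because the independent research interviews recorded their outcomes.

import pandas as pd

# Hypothetical patient-level data (illustrative values only).
# "registry_missing" flags patients whose registry data were incomplete;
# "remitted" is the remission outcome taken from the research interviews.
patients = pd.DataFrame({
    "registry_missing": [True, False, False, True, False, False],
    "remitted":         [0,    1,     0,     0,    1,     0],
})

# Remission rate among patients with and without missing registry data
rate_missing  = patients.loc[patients["registry_missing"], "remitted"].mean()
rate_observed = patients.loc[~patients["registry_missing"], "remitted"].mean()

# The difference estimates the association between missingness and remission;
# in the IMPACT data this was 0.232 - 0.262 = -0.030.
association = rate_missing - rate_observed
print(rate_missing, rate_observed, association)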

Using this and other parameters, the authors build a Monte Carlo simulation to examine how measured physician performance changes in the presence of missing data, where performance is measured on both a relative scale (whether a provider’s remission rate reaches the 80th percentile among all providers) and an absolute scale (whether a provider’s remission rate exceeds 30 percent). A stylized sketch of such a simulation appears after the quoted results below. Their results are as follows:

We found that, over a range of scenarios, relative profiling approaches had profiling error rates that were approximately 20 percent lower than absolute profiling approaches. Also, most of the profiling error in the simulations was a result of random sampling variation, not missing data: between 11 and 21 percent of total error was attributable to missing data for relative profiling, while between 16 and 33 percent of total error was attributable to missing data for absolute profiling. This finding, however, is based largely on the fact that the missing data were not strongly related to the remission outcome in the IMPACT data, and a stronger relationship would amplify the relationship between missing data and profiling error. We also found that absolute profiling approaches were much more sensitive to error from systematically missing data than relative profiling approaches. Finally, the risk of profiling error was extremely high, approximately 50 percent, for providers whose true quality was in the immediate proximity of incentive thresholds, but decreased sharply to approximately 10 percent for providers whose true quality was 5 percentage points from incentive thresholds, indicating that the risk of profiling error is disproportionately borne by providers whose true quality is close to incentive thresholds.
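To give a sense of how a simulation along these lines might be structured, below is a stylized Monte Carlo sketch in Python. The parameters (number of providers, patients per provider, missingness rate, the −0.03 missingness effect) and the exact profiling rules are illustrative stand-ins, not the authors’ IMPACT-calibrated values or actual code.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's calibrated values)
n_providers = 200
n_patients = 50            # patients per provider
p_missing = 0.30           # share of patients with missing registry data
missing_effect = -0.03     # remission-rate difference for patients with missing data
n_sims = 1000
abs_threshold = 0.30       # absolute standard: remission rate above 30 percent

# True provider quality (underlying remission probability)
true_quality = rng.uniform(0.15, 0.45, n_providers)

# "True" classifications, based on underlying quality
true_abs = true_quality >= abs_threshold
true_rel = true_quality >= np.quantile(true_quality, 0.80)

rel_errors = np.zeros(n_sims)
abs_errors = np.zeros(n_sims)

for s in range(n_sims):
    # Simulate patient outcomes; patients whose registry data will be missing
    # remit at a slightly lower rate, mimicking systematic missingness.
    missing = rng.random((n_providers, n_patients)) < p_missing
    p_remit = true_quality[:, None] + missing_effect * missing
    remit = rng.random((n_providers, n_patients)) < p_remit

    # Observed performance uses only patients with non-missing registry data.
    observed = np.where(missing, np.nan, remit.astype(float))
    obs_rate = np.nanmean(observed, axis=1)

    # Profile providers on the observed remission rates.
    profiled_abs = obs_rate >= abs_threshold
    profiled_rel = obs_rate >= np.quantile(obs_rate, 0.80)

    # Profiling error: share of providers classified differently than their
    # true quality would imply.
    abs_errors[s] = np.mean(profiled_abs != true_abs)
    rel_errors[s] = np.mean(profiled_rel != true_rel)

print("mean absolute-profiling error:", abs_errors.mean())
print("mean relative-profiling error:", rel_errors.mean())

In this toy setup, a decomposition in the spirit of the paper’s would rerun the simulation with missing_effect set to zero: the remaining error reflects random sampling variation, and the difference from the full run is the share attributable to systematically missing data.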


Source: Ryan and Bao (2013).
