Behind the CBC’s numbers

This week we’ve all heard and read a lot about the CBC’s new Rate My Hospital website. Most of the focus has been on letter grades, from A+ down to D. But behind the nice wrapping that a letter grade gives us, there’s a lot of math going on. And math is what Decision Support excels at. Our job is to give operational, clinical and information support to VCH decision makers through a variety of services.

The Decision Support team, in consultation with our peers at CIHI (the Canadian Institute for Health Information), has looked at the CBC’s statistical methodology. By looking at how they derived their numbers, we can better understand the letter grades.

Where the numbers come from

The main point about the CBC site is that most of its data set comes from CIHI. CIHI did not partner with the CBC on this initiative, but it did respond to many questions from the CBC about its publicly released indicators, analysis and methodology. CIHI did not review or approve any of the content produced by the CBC.

In assigning letter grades to over 200 acute care facilities, the CBC used up to five indicators derived from CIHI’s Canadian Hospital Reporting Project (CHRP). Since two of the five are really pairs (readmission and nursing-sensitive adverse events, each split into medical and surgical), only three underlying things are being measured. Why these indicators were used to the exclusion of others is not obvious:

  • Mortality after major surgery
  • Surgical readmission
  • Medical readmission
  • Nursing sensitive adverse events – medical
  • Nursing sensitive adverse events – surgical

The math is rather complicated and not worth debating at length, but here is how it looks to me. They gave each of the five metrics a score based on the number of standard deviations it sits from the peer group mean.

If you were an average hospital, you got a zero. One standard deviation better than the mean got you a -1; one standard deviation worse got you a +1. For each metric, a -1 earned you an A+ while a +1 got you a D. To produce the overall hospital grade, they then simply averaged the standard deviation scores across the five metrics.
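To make that reading concrete, here is a minimal sketch in Python. The CBC has not published its code, so this is only my interpretation of the method, and the grade cut-offs are assumptions on my part, not theirs.

```python
# A sketch of the scoring as I read it: each metric becomes a z-score
# (standard deviations from the peer-group mean, where positive = worse),
# and the overall score is a plain average of the per-metric z-scores.

def z_score(value, peer_mean, peer_sd):
    """Standard deviations from the peer-group mean (positive = worse)."""
    return (value - peer_mean) / peer_sd

def overall_score(z_scores):
    """The CBC appears to average the z-scores with equal weight."""
    return sum(z_scores) / len(z_scores)

def letter_grade(score):
    """Map an averaged z-score to a letter; these cut-offs are my guesses."""
    if score <= -1:
        return "A+"
    if score >= 1:
        return "D"
    return "B"  # roughly average performance
```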

This is where I have questions.

To give you an example, suppose you were really bad on the two Nursing Sensitive Adverse Events measures, as we are. In fact, suppose you were so bad you were 2.5 standard deviations out on both. (We were actually about 2 standard deviations high, but I’m simplifying the example.) And suppose you were completely average on the other three. That gives 2.5 + 2.5 + 0 + 0 + 0 = 5, and 5 divided by 5 metrics is an average score of +1, which gets you an overall D. In this scenario, three Bs and two Ds add up to an overall D. You would be called one of the worst hospitals in Canada all because you have a very real problem with urinary tract infections, in both medical and surgical areas, which drives the Nursing Sensitive Adverse Events measures.
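Running those hypothetical numbers through the sketch above shows how two bad metrics pull the whole average down to a D:

```python
# Hypothetical hospital: 2.5 SD worse on both nursing-sensitive
# adverse-event measures, exactly average on the other three metrics.
zs = [2.5, 2.5, 0.0, 0.0, 0.0]
avg = overall_score(zs)        # (2.5 + 2.5 + 0 + 0 + 0) / 5 = 1.0
print(avg, letter_grade(avg))  # 1.0 D
```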

One might have expected the five metrics to each carry equal weight, about 20%. On paper, averaging the scores does weight them equally, but because the standard deviation scores are unbounded, one or two extreme metrics can swamp the rest, so in practice the metrics do not contribute equally to the grade. For that matter, most rational people would not rate surgical mortality as equal in importance to surgical readmission; it is probably more important that you don’t die than that you are readmitted for further care. Still, while we disagree with the weighting used in the rankings, that is a minor quibble. Even experts in the field would have difficulty weighting these metrics properly, so we can’t expect the CBC to get it right either.

What it boils down to is that if you are an extreme outlier on one or two metrics, they will drive the overall result.

Comparisons are not always what they seem

It is tempting to think that it is always obvious whether higher values are better or worse. But that is not always true.

For example, there is little question that high rates of surgical mortality are bad, especially if you have adjusted for case mix. We can be 100% certain about surgical mortality. The patients are either still with us, or they are not.

However, readmission may look like a hard apples-to-apples comparison between facilities, but these numbers can easily be distorted by the number of beds in a facility and its corresponding ability to admit patients. In effect, with fewer beds relative to the population you admit only the sickest patients, who are more likely to need to come back, so your readmission rates will be higher.

Again, an example might be helpful. Suppose two hospitals admit exactly the same number of patients for the same things, and one in four, or 25%, of them come back. Then suppose the second hospital also admits 25% more comparatively light patients, none of whom come back. Its overall readmission rate drops to 20%. Most people, including CIHI and the CBC, would say the hospital with the 20% rate was doing better than the hospital with the 25% rate. In reality, both performed exactly the same on the first group of patients; the second hospital simply admitted a lot more ‘easy’ patients. So you could argue either that their performance is really the same, or that the first hospital actually had better overall results. The CBC rating says the second hospital is better, on two out of only five measures.
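A quick sketch of that arithmetic, using invented patient counts for the two hospitals:

```python
# Invented counts to illustrate the readmission distortion.
# Both hospitals treat 1,000 comparable patients; 250 (25%) are readmitted.
comparable_admissions = 1000
comparable_readmits = 250

# Hospital A admits only the comparable patients.
rate_a = comparable_readmits / comparable_admissions  # 0.25

# Hospital B also admits 250 lighter patients (25% more), none readmitted.
light_admissions = 250
rate_b = comparable_readmits / (comparable_admissions + light_admissions)  # 0.20

print(f"Hospital A: {rate_a:.0%}, Hospital B: {rate_b:.0%}")  # 25% vs 20%
```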

Comparing to our peers

So given all of this, one last question remains: how did we perform against our peers, Canada’s other large teaching hospitals? On the surface the picture isn’t great. VGH is the only teaching hospital in Canada to be rated a D. But again, when you look at the data, some things stand out.

On a few of the metrics, some of the hospitals reported zeros. In fact, the two A+ rated facilities reported zeros for mortality after major surgery, so a failure to report can earn you an A+ and skew the averages. Of the 21 that did report, VGH was 5th best. We do need to acknowledge and deal with the fact that VGH was last and second last of 23 on the two Nursing Sensitive Adverse Events measures. VGH was 15th of 23 on surgical readmissions and 22nd of 23 on medical readmissions, if you ignore the issue of how many patients each hospital tends to admit in the first place.

We know from more detailed reports that we have exceptional results on mortality measures, on how fast patients get to see a doctor in the ER, on how long patients stay in hospital when they need to move on to other care, and on many other things that the CBC chose not to include in its ratings.

The other issue with peer group comparisons is the way the CBC did its scoring. In its approach, every hospital is rated based only on how it compares against its Canadian peer group. The rating has nothing to do with international comparisons or with any standard of best practice evidence. By construction, for each of the measures and overall, most hospitals will get a B, and there will be a few As and a few Ds.*

What that means in practice is that, in theory, every hospital in Canada could be completely compliant with best practices and sit in the top group when compared internationally, yet under the CBC scoring the lowest of the Canadian hospitals would still get Ds. Equally, every hospital in Canada could be delivering substandard care, and the CBC method would still give the best of them As. Again, this runs against the intuitive interpretation, fostered by the CBC, that As show good care and Ds show bad care.*
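Here is a small demonstration of why grading on a curve says nothing about absolute quality. The numbers are invented; the point is only that shifting every hospital’s performance up or down by the same amount leaves the letter grades unchanged:

```python
# Invented illustration: grade hospitals only against their peer group,
# the way a curve works. Positive z = worse than the peer-group mean.
from statistics import mean, stdev

def curve_grades(values):
    """Assign letters purely from position within the peer group."""
    m, s = mean(values), stdev(values)
    return ["A+" if (v - m) / s <= -1 else
            "D" if (v - m) / s >= 1 else
            "B"
            for v in values]

uniformly_good = [2, 3, 4, 5, 6]                  # e.g. low event rates
uniformly_bad = [v + 50 for v in uniformly_good]  # everyone far worse

print(curve_grades(uniformly_good))  # ['A+', 'B', 'B', 'B', 'D']
print(curve_grades(uniformly_bad))   # identical grades
```

Because the curve is recomputed within each group, the grades come out the same whether the whole group is excellent or poor.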

In the end

As Patrick O’Connor stated in his blog post, the facts behind the numbers are still true. The areas shown in the CBC data are the areas we’re trying to address. But the math does show that the letter grades assigned to each facility are nothing to get too worried about. They make great news copy, but they don’t really tell you much about how your hospital is performing.

 


*This paragraph was updated on April 12th to include new information.