What does a “statistically significant difference in mortality rates” mean when you’re trying to decide where to send your kid for heart surgery?

Keith Turner writes:

I am not sure if you caught the big story in the New York Times last week about UNC’s pediatric heart surgery program, but part of the story made me interested to know if you had thoughts:

Doctors were told that the [mortality] rate had improved in recent years, but the program still had one star. The physicians were not given copies or summaries of the statistics, and were cautioned that the information was considered confidential by the Society of Thoracic Surgeons. In fact, surgeons at other hospitals often share such data with cardiologists from competing institutions.

While UNC said in a statement that it was “potentially reckless” to use the data to drive decision-making about where to refer patients, doctors across the country said it was simply one factor, among several, that should be considered.

In October 2017, three babies with complex conditions died after undergoing heart surgery at UNC. In a morbidity and mortality conference the next month, one cardiologist suggested that UNC temporarily stop handling some complex cases, according to a person who was in the room. Dr. Kibbe, the surgery department chairwoman, said in a recent interview that the hospital had never restricted surgeries.

In December, another child died after undergoing surgery a few months earlier for a complex condition.

The four deaths were confirmed by The Times, but are not among those disclosed by UNC. It has declined to publicly release mortality data from July 2017 through June 2018, saying that because the hospital had only one surgeon during most of that period, releasing the data would violate “peer review” protections.

Other information released by UNC shows that the hospital’s cardiac surgery mortality rate from July 2013 through June 2017 was 4.7 percent, higher than those of most of the 82 hospitals that publicly report similar information. UNC says that the difference between its rate and other hospitals’ is not statistically significant, but would not provide information supporting that claim. The hospital said the numbers of specific procedures are too low for the statistics to be a meaningful evaluation of a single institution.

It seems like a lot of these data for UNC are not going to be easy to get one’s hands on. But I wonder if there’s a story to be told with some of the publicly available data from peer institutions? And even in the absence of quantitative data from UNC’s program, I think there are a lot of interesting questions here (besides the ethical ones about hospitals at public institutions withholding mortality data): What does a “statistically significant difference in mortality rates” mean when you’re trying to decide where to send your kid for heart surgery?

My reply:

Good question. I think the answer has to be that there’s other information available. If the only data you had were the mortality rates, and you could choose any hospital, then you’d want to do an 8-schools-type analysis and then choose the hospital where surgery has the highest posterior probability of success. Statistical significance is irrelevant here: you have to make a decision either way. But you can’t really choose any hospital, and other information must be available.
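To make the 8-schools idea concrete, here is a minimal sketch of partial pooling via an empirical-Bayes beta-binomial model. All the counts are invented for illustration; a full hierarchical analysis would also put a prior on the pooling parameters rather than moment-matching them from the data.

```python
import numpy as np

# Hypothetical data: deaths and surgeries at five hospitals
# (invented numbers, not any hospital's actual figures).
deaths    = np.array([  6,  3,  10,  4,   8])
surgeries = np.array([120, 90, 250, 60, 200])

# Model each hospital's true mortality rate p_i as Beta(a, b), with
# deaths_i ~ Binomial(surgeries_i, p_i). Fix the prior by moment-matching
# the observed rates -- an empirical-Bayes shortcut.
rates = deaths / surgeries
m, v = rates.mean(), rates.var()
prior_n = m * (1 - m) / v - 1        # implied prior "sample size" a + b
a, b = m * prior_n, (1 - m) * prior_n

# Posterior mean mortality rate: the raw rate shrunk toward the overall
# mean, more strongly for hospitals with fewer cases.
post_mean = (a + deaths) / (a + b + surgeries)

best = int(np.argmin(post_mean))     # highest posterior probability of success
print("raw rates:      ", np.round(rates, 4))
print("posterior means:", np.round(post_mean, 4))
print("choose hospital:", best)
```

Note how the hospital with the best raw rate need not win after shrinkage: a low rate based on few cases gets pulled hard toward the group mean, which is exactly the point of the hierarchical analysis.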

In this case, I think the important aspects of decision making are not coming from the parents; rather, where this information is particularly relevant is for the hospitals’ decisions of how to run their programs and allocate resources, and for funders to decide what sorts of operations to subsidize. I’d think that “quality control” is the appropriate conceptual framework here.
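In the quality-control spirit, one standard tool is to compare each hospital’s rate against binomial control limits around the pooled rate (the funnel-plot idea), flagging outliers for further investigation rather than ranking everyone. A minimal sketch, again with invented numbers:

```python
import numpy as np

# Hypothetical quality-control check: invented deaths/surgeries at
# five hospitals, with one deliberately high outlier.
deaths    = np.array([  6,  3,  10,  4,  25])
surgeries = np.array([120, 90, 250, 60, 200])

p0 = deaths.sum() / surgeries.sum()       # pooled mortality rate
se = np.sqrt(p0 * (1 - p0) / surgeries)   # binomial standard error at each volume
z = (deaths / surgeries - p0) / se

# Flag hospitals outside roughly 3-standard-error limits. A flag is a
# trigger for a closer look (case mix, coding, staffing), not a verdict.
flagged = np.where(np.abs(z) > 3)[0]
print("pooled rate:", round(p0, 4))
print("z-scores:   ", np.round(z, 2))
print("flagged:    ", flagged)
```

The control limits widen as volume shrinks, which formalizes the hospital’s point that small procedure counts make raw rate comparisons unreliable, while still catching genuinely extreme performance.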

Tomorrow’s post: What happens when frauds are outed because of whistleblowing?