When I was working on the dashboard essays, I ended up with an extended tangent addressing my concerns with system pillars.  I cut it because it was, well, a tangent, but I have expanded on those concerns here.  It is never my intent to write two-part essays, but sometimes it just works out that way.  And because there is no more ubiquitous element in any hospital than a pillar, perhaps the subject deserves it.  The second part will focus on the data itself, but this essay will focus on how organizations misunderstand what pillars are saying.

Every hospital that I have ever worked with sets broad system markers or goals for their operation.  Some call them tenets or foundations or standards, but the word I most often see (and will use here) is pillars.  They usually involve some combination of Quality (best-practice medicine), People (retaining quality staff and effective hiring), Finance (are we being good stewards of our resources?), and Service (are we making patients happy?).  There may be a Growth or Community pillar in the mix as well.  They are generally aligned with the system mission, vision, or values.  Consider the mission as aspirational, the pillar as concrete.  The pillar measures then tend to be actual numbers documenting past performance: a balance sheet, a head count, a percentage of total cases.

Before I launch into my concerns over the philosophy and data of pillars, their presentation, and how they are used, I need to reaffirm my love of data.  I think successful organizations succeed in part by keeping a keen eye on past performance to determine whether the organization is headed in the right or wrong direction.  Further, most organizations are sufficiently complex that one cannot include EVERY useful number in a pillar display but instead needs to pick and choose the data that is most important.  So, this is NOT a rant about how horrible pillars are.  It is instead a rant about how, when an organization doesn’t dedicate time and attention to creating its pillars and doesn’t review them to assure that they continue to be useful, those pillars can end up doing more harm than you might expect.

Most of what follows has its origin in one key concern.  My biggest data pet peeve (and I have a lot of them) is when someone says, “The data speaks for itself.”  Data never speaks for itself.1  Even if you had a thousand cases and 100% of them pointed in one direction, that doesn’t necessarily say anything.  In fact, if I saw a large dataset with that level of unanimity, I would assume one of the following:

  • There is an error in the data
  • It is a crappy measure because it doesn’t contain variance
  • You are measuring something that doesn’t need measuring

Every number needs a narrative, because if you don’t provide a story for the data, people will make up a story for themselves.  Usually that story will support a previously held belief.  This is compounded by the fact that the audience for that data may not have the knowledge base to understand it, which is exactly why they fall back on those previously held beliefs.  For example, my knowledge of finance is limited.  Looking at a Finance pillar, I will assume that if the data shows a profit, we are good, but if it shows a loss, we are not.2  I will then credit the success to the things we are doing that I like, or blame the failure on the things we are doing that I do not like.

I am not saying that data points have no meaning or that any story can be concocted.  But like so many things, if you don’t define something, everyone will define it for themselves.  For example, by 2020, the national 75th percentile for Overall Hospital Rating on the HCAHPS survey had improved every year for seven years.  When COVID hit, though, all that progress vanished in eighteen months.  The score bottomed out, and it has climbed steadily over the past few years, but it is not quite where it was pre-pandemic.  Imagine your organization trending like the national 75th percentile.  Your current score is lower than it was five years ago, but up from where it was three years ago.  Is this a good story or a bad story?  Without a narrative, half of your audience will think it is good news, and half will think it is bad news.  The problem is that it is hard to establish a corporate focus when half the room is saying “stay the course” and half of the room is saying “tear it all down.”  The following points, then, will show how failing to provide a narrative, or even a context, for the data can lead to confusion, arguments, and bad decision-making.

What was, what is and what will be.  Unless your pillar engages in forecasting, what you are looking at is old news.  Important old news, but old news.  In fact, depending on the cycle for updating it, it could be very old news.  Take an often-used Quality metric: thirty-day readmission.  Let us assume that the pillar is published on the 10th of the month.  The December pillar cannot contain November readmission data because most of the patients discharged in November are still in the window for possible readmission.  This means that the best the December pillar can do is show October scores.  Even worse, the current window for participation in the HCAHPS survey is now 49 days, which means it is September patient experience data being reported in the December pillar.  Depending on the measurement windows, the efficiency of the data analytics department, and the organizational appetite for post-publication changes, the pillar’s data can be as much as sixty days old.  It is the best available, but if an organization cracks the whip on December 11th, it cannot expect to see the effects of that motivation until February or March.  The question is whether those shot-callers are aware that they should not expect to see changes on the January 10th pillar.
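To make the lag arithmetic concrete, here is a minimal Python sketch.  The 30-day readmission window and the 49-day HCAHPS window are the ones cited above; the publication date, function name, and month-end logic are mine, purely for illustration.

```python
from datetime import date, timedelta

# Illustrative maturation windows: days after discharge before the
# measure can be considered complete for that patient.
MATURATION_DAYS = {
    "30-day readmission": 30,
    "HCAHPS participation": 49,
}

def freshest_complete_month(publish: date, maturation_days: int) -> str:
    """Most recent full discharge month whose data has fully matured
    by the pillar's publication date."""
    cutoff = publish - timedelta(days=maturation_days)
    # A month is mature only if its LAST day is on or before the cutoff.
    if (cutoff + timedelta(days=1)).month != cutoff.month:
        last_mature_day = cutoff                      # cutoff ends a month
    else:
        last_mature_day = cutoff.replace(day=1) - timedelta(days=1)
    return last_mature_day.strftime("%B %Y")

publish = date(2025, 12, 10)  # a pillar published December 10th
for measure, days in MATURATION_DAYS.items():
    print(f"{measure}: freshest month is {freshest_complete_month(publish, days)}")
# -> 30-day readmission: freshest month is October 2025
# -> HCAHPS participation: freshest month is September 2025
```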

PAST performance is not indicative of future performance.  Most pillars do very little forecasting, either because the organization puts a high premium on FACT, or because no one wants to own a forecasted number that turns out wrong, or because no one in the organization really knows how to do forecasting.  But most of the data on a pillar has ebbs and flows, either because of the nature of the data or because of human behavior.

I was working with an organization, helping them manage some population health data.  Every year, they would be meeting their cancer screening metrics until the final quarter or two, when the bottom would drop out and they would miss their goal.  For those not familiar, a mammography measure will require a woman with normal risk in the age window to get a mammogram every two years.  Imagine that your fiscal year is the calendar year, so by December 31st you need 75% of all eligible women compliant with a mammography screening.  All year long the number is in the low 80s, but starting in October, the number starts dropping, and by the end of the year you are sitting at 70% compliance, missing the goal.  Why?  You did nothing different, but something in the data did change.  There was a group of women who got their mammograms in the final quarter of CY2023.  So, through most of CY2025, they were compliant, but starting in October of CY2025, their mammograms from the fall of 2023 aged out of the two-year window and they suddenly became non-compliant.  This hurt the organization twice.  First, they thought that all was good until it wasn’t.  Second, by the time they started to see the cracks, they did not have the time or the scheduling slots to screen the suddenly non-compliant patients.
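You can watch this cliff form with a toy simulation.  The panel below is entirely made up (a bolus screened in Q4 2023 plus a steady trickle through 2024), and I am approximating “every two years” as a 730-day lookback, but the shape of the output is exactly the pattern just described: flat all year, then a staircase down in the final quarter.

```python
from datetime import date

# Hypothetical panel: the date each eligible woman was last screened.
# A bolus screened in Q4 2023 plus women screened steadily through 2024.
last_screened = (
    [date(2023, m, 15) for m in (10, 11, 12) for _ in range(40)]    # Q4 2023 bolus
    + [date(2024, m, 15) for m in range(1, 13) for _ in range(10)]  # steady 2024
)

def compliance(as_of: date, window_days: int = 730) -> float:
    """Share of the panel screened within the two-year lookback window."""
    current = sum((as_of - d).days <= window_days for d in last_screened)
    return current / len(last_screened)

for month in range(1, 13):
    print(f"2025-{month:02d}: {compliance(date(2025, month, 28)):.0%}")
# Steady at 100% through September, then 83%, 67%, 50% as the
# Q4 2023 screenings age out of the window one month at a time.
```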

But why was there this bolus of patients backloaded into the fiscal year?  Because five years ago, when the organization thought it might miss its target, they pounded the phones and got women scheduled in the final three months of the year.  The next year, they did the same thing.  Now they have created their own vicious cycle, building a large group of women who won’t need their mammograms until the final months of a fiscal year.3  In trying to beat the measure, they bent the curve.  The only way to fix this Groundhog Day scenario is to bend the curve back, and that is much easier said than done, for a host of reasons.  In fact, the easiest way to bend the curve back is just to book the noncompliant women in the first quarter of the next year.  You blow a hole in this year’s measure but set next year up for success.4  These patterns exist in almost all population health data, but unless you look for them and address them, you will continue to feel helpless as your leaders look at the pillar and scream at you for dropping the ball in the final few months of the year, just like every year.

Quantity versus Quality.  The organization may have four or five pillars, but each comprises several measures, and the actual pillar performance is a summary of success or failure on each of these constituent measures.  For example, Quality may have five measures for inpatient, another seven for outpatient, and four more for the emergency department.  The expectation is not that an organization will meet all sixteen of these measures but instead that the Quality pillar will be green if the organization meets at least 75% of them.  This creates a cushion, but also a conflict between those who want fewer measures in a pillar and those who want more.
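As a worked example of that rollup, here is a quick sketch.  The pass/fail values and the 75% threshold are just the hypothetical from the paragraph above, not anyone’s standard.

```python
# Hypothetical Quality pillar: 16 measures across three settings.
measures_met = {
    "inpatient":  [True, True, False, True, True],
    "outpatient": [True, True, True, True, False, True, True],
    "emergency":  [True, False, True, True],
}

def pillar_status(measures: dict, threshold: float = 0.75) -> str:
    """Roll the constituent measures up into a single green/red pillar."""
    results = [met for group in measures.values() for met in group]
    share = sum(results) / len(results)
    color = "green" if share >= threshold else "red"
    return f"{color} ({sum(results)} of {len(results)} measures met, {share:.0%})"

print(pillar_status(measures_met))  # -> green (13 of 16 measures met, 81%)
```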

For the less-is-more crowd, the upside of having fewer measures is fewer action plans and a more concerted focus on a smaller set of more important variables.  The downside is a smaller margin for error, as just one hiccup may doom you to overall failure.  For example, a smaller hospital may be allowed only two falls-with-injury in a fiscal year to stay green.  If they get two in quick succession in the second month, they mathematically cannot recover.  It is hard to stay focused on something when you know you are toast with ten months to go.

For the more-is-more crowd, the upside of having a bunch of measures is that you have more opportunities to succeed and a greater margin of error, so you can fail at one measure without destroying your chances of overall success.  The downside is that, since most humans cannot juggle multiple initiatives at once, performance across the measures will be uneven.  Success in this case is often predicated more on luck than on consistent effort.  That luck may run out the following year, turning a high performer into a failure based at least in part on the luck of the draw.  So, like failure, your success will be less meaningful long-term.  When you have multiple leaders all jockeying for success, these transitory wins and losses can lead to what I call fat-shaming the data.

This is why a lot of organizations have highly variable performance on measures.  They will win on colorectal cancer screening one year.  They take that win as evidence that they can move on to some new measure the following year.  They then learn that their success was somewhat lucky, and their loss of focus cost them any gains in colorectal cancer screening the following year.  Fewer metrics are better.  Of course, this means picking the most important and most influential measures.  Sometimes organizations have a large suite of measures precisely because they don’t know how to differentiate between the useful and the superfluous.

In the end, if the organization cannot build a narrative around how to motivate using old data, why the data moves as it does, and whether the selected measures are really the most important ones, is it any wonder that it struggles for consistent success?

1I know data is a plural noun, but typing “data never speak for themselves” just feels wrong somehow.

2Hey, finance people, as you roll your eyes at my simplistic understanding, remember that I own my knowledge deficit.  Most in your organization don’t understand it any better than I do, and they probably don’t exhibit my level of self-awareness.

3There is an additional issue here: the fact that the nature of the data doesn’t forecast its own change.  I will talk about that next time.

4This of course may seem more like playing a game than addressing core needs of your community.  I will not argue with you on that point.
