A brief heads-up on this post.  I am going to talk about numbers, and in an effort to not get bogged down in a lot of definitions that would detract from the essay, I am putting some definitions in the glossary blog post and simply highlighting the terms here.  If I miss one that you would like to have defined, please call it out in the comments.

We are between Halloween and Thanksgiving.  In Wisconsin gun-deer1 season starts on Friday.  I just noticed a thin layer of ice on part of the lake, so soon the parade of eager ice-fishermen will begin their daily commute to the boat landing to see if the ice is thick enough to drag their sleds and five-gallon buckets out, drill a hole, and try to catch a fish without catching pneumonia.  But in this house, all this really means is that Katie is slowly and surreptitiously unpacking Christmas decorations.  For example, as I type this, I notice that, without fanfare, two Christmas trees now sit on a shelf.

All of this has me thinking of organizations and their dashboards, pillar metrics, huddle boards, and every other display of data.  Organizations love their fiscal year targets, leading indicators, and key performance indicators (KPIs) with their inscrutable definitions and microscopic font, festooned with red and green dots.  Someone somewhere thinks that these displays have value, but all I see when I look at them are the world’s worst Christmas decorations.  And for those of you who feel superior because your dashboards have YELLOW dots as well, I must regretfully inform you that to corporate eyes yellow dots are just red dots.  Sorry.

Let me preface this essay on the pointlessness of most dashboards by saying that I am a data nerd.  I think all those data points can tell us useful things about operations.  I am a firm believer that we can translate behaviors and perceptions into useful numbers.  I think that a lot can be gleaned from thoughtful analysis.  My problem is that these displays generally boil the data down to the point that they have the interest, appeal, and value of overcooked green beans. 

Further, before I hold forth on what my vision is for a data-literate organization, I also want to call out a reality that those of you in the PX space are very familiar with.  While you may be experts in patient experience in your organization, this subject matter expertise often does not extend to having control over what your senior leaders look at or demand to see.  I know that you cannot unilaterally change a data display without suffering slings and arrows from everyone whose cheese was unceremoniously moved.2  As you read this, keep the serenity prayer in the back of your mind: accept the things in your data displays that you cannot change, change the visuals you can, and have the wisdom to know the difference.  Once I finish my cathartic rant, I will provide things you can do to help nudge your organization in the right direction.

Here are a few things that absolutely drive me crazy with these data displays.  All of them can be boiled down to one overarching concern: because the data is reported in a uniform way (usually a percentage followed by a green or red dot), all of these data are treated equivalently, even though their provenance is divergent.  All issues of source accuracy or potential human error are papered over, and the data appears to have uniform weight and value.  Therefore, while these differences often affect the ability to understand and address the data, they are often overlooked, or not fully understood, by the viewer of the data.

Size of the percentage.  Most of this data is a score, framed as a percentage, being compared to a goal, framed as another percentage.  A percentage is the simplest way of standardizing a measurement, as it frames an observed difference relative to the universe it sits in.  So, missing a target by 1.0% on a finance measure, a service measure, or a quality measure is seen as equivalent.  But there are two issues with this assumption.

  • The first is the kind of percentage you are looking at.  Some percentages are open-ended, and some have a ceiling.  For example, you can have 125% more costs than expected, or 200% more surgical site infections than desired.  But other numbers have a limit.  For example, the percentage of patients who gave you the desired answer has a ceiling, as it cannot be greater than 100%.  When you have a ceiling, it changes what that number and that gap mean, especially if you are close to the ceiling to start with.  Imagine that you need 92% of your patients to give their clinic provider a 9 or 10 to meet goal.  You currently sit at 87%, so everyone sees your gap to goal as 5%.  But in real-world terms, that 5% has to come out of the (100%-87%) 13% who are not currently giving you the desired response.  This means that to close this 5% gap you need to convert (5/13 = 0.385) 38.5% of the remaining patients; the short sketch after this list walks through the arithmetic.  This obviously reframes the conversation.
  • The second is that the same percentage represents different realities that affect the conversation about how to address any gaps.  That 1.0% gap in finance may represent three hundred million dollars.  At a hospital, that 1.0% probably represents less than one adverse drug event.  In patient experience data, that 1.0% gap sits inside the margin of error, so the score is essentially the same as the goal.  The actions associated with closing these gaps are, again, very different.
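
To make the ceiling math concrete, here is a minimal sketch in Python.  The figures are the ones from the example above; the function name is mine and purely illustrative.

```python
def conversion_needed(current_pct: float, goal_pct: float) -> float:
    """Share of current holdouts who must be converted to reach a
    goal that lives under a 100% ceiling."""
    gap = goal_pct - current_pct       # the "5%" everyone sees on the dashboard
    headroom = 100.0 - current_pct     # patients not yet giving the desired answer
    return gap / headroom              # the real size of the task

# The example above: a goal of 92%, currently sitting at 87%.
print(f"{conversion_needed(87.0, 92.0):.1%}")  # -> 38.5%
```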

The complexity of the measure.  All of these percentages are about compliance with a financial, service, or quality behavior.  But the nature of that compliance can vary greatly.  Being compliant with a colonoscopy means simply getting a colonoscopy,3 but being compliant with optimal diabetes management means collecting a bunch of merit badges (having two A1c measures below target, having an eye exam, having a kidney function test, not smoking, taking a statin if prescribed, etc.).  Compliance here is complex, and there is no partial credit for having almost all of those boxes checked: you are either compliant or non-compliant.  In PX, books have been written trying to explain the difference between an 8 and a 9.  Heck, libraries are full of books explaining why a 9 is good and an 8 is bad.  Missing a goal because too many patients gave you an 8 is a much different problem than missing because too many patients gave you a 5.  But in the eyes of the dashboard, both come up as simply non-compliant.
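
The all-or-nothing nature of a composite measure is easy to show in code.  A minimal sketch; the component names loosely paraphrase the merit badges above and are illustrative, not a real measure specification.

```python
# Hypothetical patient record; the keys are stand-ins for the "merit
# badges" in an optimal-diabetes-management composite.
patient = {
    "two_a1c_below_target": True,
    "eye_exam": True,
    "kidney_function_test": True,
    "non_smoker": True,
    "statin_if_prescribed": False,   # one missing badge...
}

# No partial credit: the composite is the AND of every component,
# so 4 out of 5 still lands on the dashboard as non-compliant.
print("compliant" if all(patient.values()) else "non-compliant")
```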

The precision of the target, relative to the real world.  If you work at a large hospital with plenty of patients and lots of opportunities to do the right thing versus the wrong thing, percentage goals can be more forgiving than if you work at a critical-access hospital (CAH), where you don’t have many opportunities to execute appropriately.  So, one surgical site infection when you perform five thousand procedures in a year is no big deal (except for the patient who got it), but if you only perform five hundred, that fallout can crush your entire year’s goal.  This means that for many CAHs, their volumes give them a smaller margin for error and require them to be perfect, even as they often have fewer staff and resources than the larger hospitals.  Now, small hospitals need to be as focused on best practice and safety as a large hospital.  But if your goal is to be 95% compliant with a measure, and you only have seventeen cases in the denominator, you CANNOT be 95% compliant.  You can only be 100% compliant, or 94.1% compliant with one fallout.  Again, these are real problems befalling real patients, but when you see four CAHs fat-shaming a fifth even though they all had one fallout (the other four just had twenty cases in their denominators), you realize that people don’t understand what that percentage really means.  This also affects PX data, as a CAH may only get five or six HCAHPS surveys back in a month; if their goal is to be at 87% top-box, their only option is to be perfect every month.
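
The arithmetic of small denominators is worth a quick sketch.  With n cases, the only scores you can post are k/n, so at seventeen cases a 95% goal quietly becomes a demand for perfection (illustrative code, not a real report):

```python
# With n cases in the denominator, the only postable scores are k/n.
n = 17
achievable = [round(100 * k / n, 1) for k in range(n, n - 4, -1)]
print(achievable)  # -> [100.0, 94.1, 88.2, 82.4]

# Nothing exists between 94.1% and 100%, so a 95% goal at this
# volume is really a 100% goal with extra steps.
```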

I could go on, but in the interest of getting to some suggested solutions, we can just call this dead horse sufficiently beaten and call the pathologist.

A dashboard captures a snapshot in time.  This is fine if the only question is “Are we meeting or not meeting our goal?”  But if you are not meeting your goal, it doesn’t give you any deeper understanding.  Even if it contains leading indicators or KPIs, these often make things more confusing.  When I had to manage tactics (which was the organization’s language for KPIs), it seemed I spent the entire week following the dashboard release explaining why we were meeting a KPI but not the pillar goal, or, even worse, meeting the pillar goal but not the KPI.  If staff don’t see a correlation with their own eyes (and their eyes always lie), they stop working on the KPI.  Even dashboards that try to include some broad sense of trend (previous quarter, or fiscal-year-to-date) don’t really capture a useful trend.  Overall, dashboards do not contain context.  When everything is green, context might not matter (though sometimes it does), but when a dashboard is red, it doesn’t provide direction, and an organization is likely to head off in eight different directions, or get stuck in analysis paralysis and do nothing.

So, for those who feel like they have no control over the content of the data they are called upon to present, here are a couple of things you can do to work within the confines of what is demanded of you.  Consider these either as asterisks you apply to the data in a PowerPoint or document, or as talking points when you are asked to present the data.  The point is that you are delivering on what is asked but providing a bit of embroidery.

The dashboard data has many sources, and the timeframes of those sources can vary, due to how long it takes for the data to be finalized or how long it takes a human being to process the data into a useful display.  So, the observer can be confused about the what and the when of what they are looking at.  The November dashboard is probably composed of October data, which may in some cases go back to September.  For example, given the sample window for HCAHPS as set by CMS, a patient discharged on September 30th would have until the middle of November to have their survey completed and included.  Depending on the rules of your dashboard, then, November’s data is probably actually September’s data.  Just because it is the most recent dashboard doesn’t mean it is the most recent data.  To manage a leader’s response, make sure you label and call out the time period for the reported data.  This will help leaders determine how long ago the milk was spilled, and you can explain what you have been doing to clean it up.
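
A back-of-the-envelope version of that lag, assuming the roughly 42-day collection window the HCAHPS example above implies (check the CMS protocol for the exact rules):

```python
from datetime import date, timedelta

# A patient discharged on September 30th, with about 42 days for the
# survey to come back and be counted (an assumption for illustration).
discharge = date(2024, 9, 30)
last_counted = discharge + timedelta(days=42)
print(last_counted)  # -> 2024-11-11, i.e., the middle of November
```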

Likewise, the data exists as a point in time.  A dashboard may contain current-month and year-to-date scores.  It may offer other comparisons, but it generally doesn’t present useful trends in the data.  Leaders who don’t live in this data will overreact to any single outlier, be it good or bad.  Putting that horrible or awesome outlier in context will save you and them some stress.  More importantly, call out changes you can identify or control, even when they don’t conform to the traditional fiscal year.  For example, comparing the first eight months of this fiscal year to the same period last year (sketched in code below), or simply extending a graph of scores beyond the current fiscal year, can be helpful.  Look to your work or other potential reasons for change and highlight them:

  • “We rolled out a training refresh six weeks ago and the early results show…”
  • “Since the change in ED leadership five months ago, we see the data…”
  • “When the Med/Surg unit implemented their action plan at the beginning of the fiscal year…”

This helps reframe your leaders’ understanding of what is important, what is not, and what you are already working on.  Even if the retraining has had no effect, or the change in ED leadership has led to a decline in scores, this context will provide a more robust and useful conversation.
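
To make that year-over-year framing concrete, here is a small sketch; the monthly top-box percentages are invented purely for illustration.

```python
# First eight months of this fiscal year vs. the same months last year.
this_fy = [82, 84, 83, 85, 86, 85, 87, 88]   # monthly top-box %, made up
last_fy = [80, 81, 83, 82, 83, 84, 84, 85]

# Month-by-month change tells a steadier story than a trend line
# that gets chopped off at the fiscal-year boundary.
deltas = [t - l for t, l in zip(this_fy, last_fy)]
print(deltas)  # -> [2, 3, 0, 3, 3, 1, 3, 3]
```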

The other thing that dashboards lack is a narrative.  While people will say that the data speaks for itself, that is never true.  Too often, the data tells different stories to different people.  It is the siren song for those who feel like better times are just around the corner, and it is ominous foreboding for those who perennially wait for the other shoe to drop.  When the optimist and the pessimist meet to discuss the data, the fireworks can be impressive.  So, tell the story that the data is suggesting.  Building a narrative won’t prevent a storm, but it is always handy to have an umbrella. 

Telling a story may seem obvious, but it does not necessarily come naturally to PX people.  Even those of you who love a good anecdote about a patient conversation sometimes struggle to tell an audience what matters in the data.  This is more than saying “this month is better than last month” or “this year is worse than last year.”  It is not simply talking about directionality but about meaning.  Many of you fear the role of storyteller because there is always that one difficult doctor or nurse manager or CFO who relishes throwing darts.  This can easily deflate you if you don’t spend time outlining the story you want to tell.  It doesn’t need to be a far-ranging epic; it just needs to be more than a black box.

For those who have the opportunity or audiences where you can craft the message a bit more, I also want to talk about the next level of dashboard management.  But I have blown way past my self-established blog length, so I will have to leave you with a “To Be Continued” cliffhanger.  More to follow…

1If you think that the term “gun-deer” is obvious or redundant, you are not aware of the myriad ways to kill a deer in Wisconsin, as we have muzzle-loader deer season, bow deer season, the antlerless hunt, youth hunt, gun deer hunt for hunters with disabilities…

2I love a good mixed metaphor, so sue me.

3To be clear, this measure can be a bit fuzzy as well, depending on whether you can count a Cologuard® as compliance, or if age or a concerning issue in a previous colonoscopy changes the timeframe for the test.  Still, the measure is simply sticking a camera up a butt.
