OK, so when last we saw our hero, he was clinging to a sheer rock face, looking down at his crushed Aston Martin in flames, thinking, “Well, there goes my security deposit…”
As I mentioned in Part 1, people sit at varying levels of their organizations and are therefore differently able to shape an organization’s approach to data management. This section is dedicated to those who have a bit more control over what they display. But most PX people likely present in front of other audiences, like doctors, nurses, or other interested staff. Though ‘interested’ is, at times, a stretch. The trick, again, is to look for ways to share this information at the margins. Often, once you prime the pump, your leaders are likely to come back for more. Whether you WANT them to come back for more is a conversation best had between you and your mirror.
I will, again, preface this with an acknowledgement that not everyone reading this is a numbers nerd. But most of this can be adjusted to fit your comfort level, and if you need help, it is always good to make friends with someone who has better Excel skills than you do or has access to data that you don’t. Plus, work with your PX vendor. All the ones I have worked with have teams who can help you maximize your use of the data. Don’t be afraid to ask. Vendors spend time internally discussing how to build better relationships with their clients, so leverage that.
In the end, remember this: while hospitals are awash in data, most have virtually no ability to harvest or analyze it. Something you might think is simple or obvious is likely to be surprising and valuable to your audience because they aren’t connected to it the way you are. Even hospitals with analytics departments are often so busy processing data and generating simple reports that they never get to anything more complex. So you will be scratching an itch, and if you have an analytics department, you have a built-in group of people who, as mentioned in the previous paragraph, can become best friends.
One thing that is both illuminating and available from your PX portal is the frequency distribution for a question: not just the top-box score, but where all patients fall along the scale. It is easy for staff to live in a black-and-white world, thinking that if the patients don’t love us, they must hate us. But when you show an audience that a patient who is not giving a top-box response is likely giving a second-box response (if it isn’t a 9 or 10, it is likely a 7 or 8; if it isn’t an ALWAYS, it is probably a USUALLY), it softens the audience. You are not there to tell them that they stink; you are there to tell them that by shifting a few folks from ‘like’ to ‘love’ we can meet goal. I start every presentation with a simple percentage breakdown of all responses to the targeted Likelihood to Recommend or Overall Rating question because it almost always reduces defensiveness and opens people to a solution-oriented conversation.
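If your portal only hands you raw responses, building that distribution yourself is a few lines of work. Here is a minimal sketch using made-up Likelihood to Recommend responses on the 0–10 scale; the data and the 9–10 / 7–8 box cutoffs are illustrative, not pulled from any real survey:

```python
from collections import Counter

# Hypothetical Likelihood to Recommend responses on the 0-10 scale
responses = [10, 9, 10, 8, 7, 10, 9, 8, 10, 6, 9, 10, 8, 9, 10, 7]

counts = Counter(responses)
total = len(responses)

# Share of responses at each point on the scale, highest first
for score in sorted(counts, reverse=True):
    pct = 100 * counts[score] / total
    print(f"{score:>2}: {pct:5.1f}%  {'#' * counts[score]}")

# Group into boxes: top-box (9-10), second-box (7-8), everything else
top_box = sum(v for k, v in counts.items() if k >= 9) / total
second_box = sum(v for k, v in counts.items() if 7 <= k <= 8) / total
print(f"Top-box: {top_box:.0%}, Second-box: {second_box:.0%}")
```

Shown side by side like this, the point usually makes itself: nearly everyone who isn’t a 9 or 10 is sitting at 7 or 8, not at the bottom of the scale.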
Another thing I have found surprising is that leaders in a hospital or system may know their overall performance score but have almost no understanding of its constituent pieces. When the score is RED, they assume the problem is everywhere, even though it is usually targeted. Do your leaders know the score for each of their discharging units? Do nursing leaders know which units have the highest and lowest nursing scores? Most vendors have displays that break performance out by unit, but even if leaders see them, they may not process what it all really means. They may see everyone as RED even though there are distinct differences in the data; one unit may be improving and very close to goal while another is a dumpster fire. Obviously, the approach to helping each unit is decidedly different. Providing additional granularity can be very powerful in focusing attention where it is really needed and avoiding the needless suffering and confusion of high-performing units getting thrown out with the bathwater.
It can be further valuable to provide the number of surveys (or N-size) that comprise a score. Leaders can squander their credibility by screaming down staff1 without realizing that their 50% performance is based on two surveys. Moreover, providing this across all nursing units can also help focus attention where it is really needed. Knowing that your cardiac telemetry unit receives four times the surveys of your pediatric unit means the overall hospital score can be improved more quickly and effectively by focusing on cardiac telemetry, even if its score is already higher, because it makes up a larger percentage of the total surveys. Yes, I know, all patients and their experiences are important, but in an environment of limited resources and limited attention, prioritizing makes sense.
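The leverage of the bigger unit falls straight out of the survey-weighted average. A quick sketch with two hypothetical units (the names, N-sizes, and rates are invented for illustration) shows that the same five-point gain moves the overall number very differently depending on where it happens:

```python
# Hypothetical units: (name, n_surveys, top_box_rate)
units = [("Cardiac Telemetry", 200, 0.80), ("Pediatrics", 50, 0.72)]

def overall(units):
    """Hospital-level top-box rate: survey-weighted average of unit rates."""
    total = sum(n for _, n, _ in units)
    return sum(n * rate for _, n, rate in units) / total

base = overall(units)

# Same 5-point improvement, applied to the big unit vs. the small one
big = overall([("Cardiac Telemetry", 200, 0.85), ("Pediatrics", 50, 0.72)])
small = overall([("Cardiac Telemetry", 200, 0.80), ("Pediatrics", 50, 0.77)])

print(f"Baseline {base:.1%}; big-unit gain -> {big:.1%}; "
      f"small-unit gain -> {small:.1%}")
```

With these numbers, improving the 200-survey unit moves the hospital four points, while the same improvement on the 50-survey unit moves it one. That is the whole argument for prioritizing, in one weighted average.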
The N-size can have an interesting side-benefit as well. Once, while I was presenting hospital-level data, the group noticed that the percentage of OB discharges in the survey data did not match the percentage of OB discharges from the hospital. In other words, based on discharge volumes, they should have been getting at least twice as many surveys from OB as they were.2 This started a side-conversation about how to improve response rates, especially with new moms. These conversations are about more than simply improving scores; they are also about problem-solving broader issues like participation and engagement.
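Spotting that kind of mismatch is just a comparison of two percentage mixes. A minimal sketch, with invented unit names and shares, might flag any unit whose survey share lags its discharge share by more than an arbitrary five-point threshold:

```python
# Hypothetical mix: share of hospital discharges vs. share of returned surveys
discharges = {"Med/Surg": 0.55, "OB": 0.20, "Ortho": 0.25}
surveys    = {"Med/Surg": 0.62, "OB": 0.08, "Ortho": 0.30}

# Flag units whose survey share trails their discharge share by > 5 points
for unit in discharges:
    gap = surveys[unit] - discharges[unit]
    flag = "  <-- under-represented" if gap < -0.05 else ""
    print(f"{unit:<9} discharges {discharges[unit]:.0%}, "
          f"surveys {surveys[unit]:.0%}{flag}")
```

In this made-up example, OB jumps off the page the same way it did in that meeting: a fifth of the discharges but well under a tenth of the surveys.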
Another way to increase focused attention is to identify useful patterns. In the last essay, I spoke about identifying trends. Most trends are just patterns in the data over time. The discussion of granularity, likewise, was about identifying patterns in the data by discharge unit. But time can be used in ways other than simply graphing data from January 2021 to the present. In working with Emergency Departments, for example, I have broken the data out in several ways to answer questions or identify trends:
- For a hospital with seasonal tourism, you can break the ED data out by month or season: grouping all summers from 2021 to present and comparing them to all the winters from 2021 to present.
- EDs often think that their data is driven by day of week. Scores are great on Tuesday, but crater on Friday and Saturday. You can group all the Fridays together and compare them to all the Tuesdays.3
- Likewise, some EDs think that mornings and evenings are problematic while afternoons are less so. Breaking the data out by daypart can help foster a good conversation.
- If you want to get super cool (and the data will support it), you can build a heat-map using daypart and day-of-week. I have done this and have been able to help EDs develop strategies to learn from the hot-spots and address the cold-spots.
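The heat-map in that last bullet is one pivot table once each visit is tagged with a day and a daypart. Here is a minimal sketch using pandas and a handful of invented ED visits (the column names and the 1/0 top-box flag are my own assumptions, not any vendor’s layout):

```python
import pandas as pd

# Hypothetical ED visits: day of week, daypart, and top-box flag (1/0)
data = pd.DataFrame({
    "day":     ["Tue", "Tue", "Fri", "Fri", "Fri", "Sat", "Sat", "Tue"],
    "daypart": ["AM",  "PM",  "AM",  "PM",  "PM",  "PM",  "AM",  "PM"],
    "top_box": [1,     1,     0,     0,     1,     0,     1,     1],
})

# Mean top-box rate for each day/daypart cell; NaN where no surveys landed
heat = data.pivot_table(index="daypart", columns="day",
                        values="top_box", aggfunc="mean")
print(heat.round(2))
```

Drop that table into Excel’s conditional formatting (or any plotting library) and the hot-spots and cold-spots color themselves.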
There is often a conversation about how hospitalists, locums, or Nurse Practitioners perform relative to other doctors. The same underlying logic works here as well, if you classify each doctor in the data. You may notice that some of this demands the ability to KNOW what a provider is, or at least to link the data to a list of classifications. This could be a great place to lean on the aforementioned friends. In one place, the head of the hospitalists was so interested in this that he used his clout to open doors and get me the help to do the work.
Another version of this is to use the data to build a proof-of-concept. If you have a variable that addresses a key behavior, you can crosstab that question with every other question to see how it relates to patient perceptions. For example, in a previous blog post I discussed leader-rounding on patients (LRP). Since there was a question on the survey, we could show the difference in scores between those who recalled a leader round and those who did not: not just on the Overall Rating question, but on all of the questions. This demonstrated importance and helped motivate leaders to leave their offices and hit the floors. It also pointed out opportunities to target LRP conversations. When CMS added a Restfulness section to HCAHPS, we could use LRP to focus attention on rest and recovery and then look to the data to see its impact. The same logic can be used to display the importance of Staff Response, Doctor Knowledge of Medical History, or any other question on the survey. Anything people are focused on can be used to illustrate collateral impact on patient perception.
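That crosstab-everything move is a one-line groupby once responses are coded 1/0 for top-box. A minimal sketch with invented column names and made-up rows (none of this is real HCAHPS data):

```python
import pandas as pd

# Hypothetical survey extract: 1 = top-box response, 0 = anything else
df = pd.DataFrame({
    "recalled_leader_round": [1, 1, 1, 0, 0, 0, 1, 0],
    "overall_rating":        [1, 1, 0, 0, 1, 0, 1, 0],
    "nurse_communication":   [1, 1, 1, 0, 0, 1, 1, 0],
    "restfulness":           [1, 0, 1, 0, 0, 0, 1, 0],
})

# Top-box rate for every question, split by whether a leader round was recalled
comparison = df.groupby("recalled_leader_round").mean()
comparison.index = ["No round recalled", "Round recalled"]
print(comparison.round(2))
```

One table, every question as a column, two rows to compare: that is usually all it takes to get leaders out of their offices.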
When leaders see data, they generally have two questions that boil down to (a) what happened, and (b) what is going to happen. Everything discussed to this point has focused on helping them understand what happened. At some point, though, leaders want to know what impact the past will have on the future. Here, it is valuable to provide forecasting: predicting what will happen. This can run the gamut from very simple to complex, but it is another situation where you should not let the perfect become the enemy of the good. Even if you don’t have the statistical chops to perform complicated modeling, you can provide some simple insights with a couple of assumptions and an Excel sheet.
The simplest forecasting is predicting the likelihood of meeting the target based upon current performance. So, imagine that your hospital has a goal to be at 83% top-box on Likelihood to Recommend by the end of the year. You are currently six months in and sitting at 79% top-box with 100 surveys collected. What needs to happen in order to get to goal? The obvious answer is “improve by 4%,” but what does that really mean? Is it even possible?
- Let us assume that since we have received 100 surveys so far, we are likely going to get another 100 surveys in the final six months, for a total of 200 surveys for the year.
- By the end of the year, 83% of those 200 surveys must give us a top-box score. This means that (200 * 83% = 166) we need 166 of those patients to give us a top-box score to meet goal.
- We currently have (100*79%=79) 79 patients giving us the top-box response.
- So, we still need (166-79=87) 87 patients to give us a top-box response by the end of the year.
- Therefore, to meet goal, we need 87 top-box responses out of the expected 100 surveys in the next six months. This means that we need to perform at (87/100=87%) 87% for the remainder of the year.
- So, to “improve by 4%” by the end of the year, we really need to improve by (87%-79%=8%) 8% over the final six months.
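The steps above fold into one small formula you can reuse for any unit or question. A sketch of it in Python, using the same numbers as the worked example (the function name is mine):

```python
def required_rate(goal, current_rate, surveys_so_far, expected_remaining):
    """Top-box rate needed on the remaining surveys to finish the year at `goal`."""
    total = surveys_so_far + expected_remaining
    needed_total = goal * total             # top-box responses needed all year
    have = current_rate * surveys_so_far    # top-box responses already in hand
    return (needed_total - have) / expected_remaining

# The worked example: 83% goal, 79% at mid-year on 100 surveys,
# assuming another 100 surveys arrive in the back half
rate = required_rate(goal=0.83, current_rate=0.79,
                     surveys_so_far=100, expected_remaining=100)
print(f"Needed for the final six months: {rate:.0%}")  # 87%
```

Change `expected_remaining` and you can immediately see how sensitive the answer is to that one assumption about future survey volume.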
Now, the only assumption we made here was that the future number of surveys would resemble the past. If we get twice as many surveys in the final six months (or half as many), it will throw off our calculation, but we have a decent estimate to use. Next, we ask the question “what are the chances that we will have and sustain an 8% increase starting today and lasting for six months?” From here, you start looking at the data.
- Have we ever had a monthly score of 87% before? Have we ever had a score of 87% for six straight months?
- Is the data trending up over the past six months and does it appear to be improving quickly enough to meet that goal?
- Are the initiatives implemented likely to provide a quick improvement?
- Can they even be fully implemented with enough time to realize any improvement?
At this point, you can start ascribing probability, from CERTAIN to LIKELY to POSSIBLE to UNLIKELY. You may also crunch the numbers and see that it is MATHEMATICALLY IMPOSSIBLE to meet goal or, in some cases close to the end of the fiscal year, MATHEMATICALLY CERTAIN to meet goal. (For example, if you do the same calculations above but set the hospital’s current performance at 63% instead of 79%, you would discover that you need 103 more patients to give you a top-box, which is impossible given that the expectation is only for 100 more surveys.)
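That impossible/certain check is the same arithmetic with two boundary conditions bolted on. A minimal sketch (function name and labels are mine; the thresholds come straight from the logic in the text):

```python
def feasibility(goal, current_rate, surveys_so_far, expected_remaining):
    """Classify whether the year-end goal is still reachable."""
    total = surveys_so_far + expected_remaining
    # Top-box responses still needed from the remaining surveys
    needed = goal * total - current_rate * surveys_so_far
    if needed > expected_remaining:
        return "MATHEMATICALLY IMPOSSIBLE"   # more top-boxes needed than surveys left
    if needed <= 0:
        return "MATHEMATICALLY CERTAIN"      # goal met even if every remaining survey is low
    return f"need {needed / expected_remaining:.0%} of remaining surveys"

# The 63% case from the text: 103 top-boxes needed, only ~100 surveys expected
print(feasibility(0.83, 0.63, 100, 100))
```

Running the original 79% scenario through the same function lands in the middle branch, which is exactly where the CERTAIN-to-UNLIKELY judgment calls live.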
When I report performance, starting with the first month of a fiscal year, I include the adjusted target as well as the fiscal-year target to help people familiarize themselves with the impact that past work has on future work. This also helps manage expectations when there is an especially good or bad month. It limits the popping of champagne and the lopping of heads.
There are several other ways to insert forecasting into your data. In fact, as you start looking at increased granularity, you will probably see other places where you could predict outcomes based upon past performance.
- Will improvement at that one unit be enough to move the hospital to green?
- Would the plan to use more Nurse Practitioners to increase speed-of-patient-access help or hurt a clinic’s score?
- Would adding an additional triage nurse during high-volume times in the Emergency Department help scores?
The only limit is your creativity and resources.
I will end with one simple thing you can bring to the data regardless of your data skills. If you have ever been unlucky enough to be in a leader conversation where the data sparks a debate about whether the doctors are jerks, the nurses are grumpy, or the patients are just insane,4 you know that emotions can short-circuit reasonable conversations. Given that these conversations can generate more heat than light, sometimes it is hard to get anything productive done after that first insult is lobbed. To that end, even if you have no answers, it is incredibly powerful to simply ask a good question. You don’t need an answer to your question; it is designed to get people focused in more constructive ways.
I was presenting data at a hospital where the scores were pretty good, except for two units whose nursing scores were clearly lower than the rest. I did not know why this was but asked the group why it might be. When the CNO saw it, she said, “That doesn’t surprise me at all. Those units have the greatest nursing turnover and use of agency staff.” This, then, allowed for a thoughtful conversation about how to quickly and effectively instill the organization’s core values and processes in a staff that turned over routinely. One good question kept the meeting out of the “Why do our nurses stink?” ditch and on the road to “Let us address these clear gaps” solutions.
Now, you might say, “But how did you know that was the real reason?” I didn’t, and it doesn’t really matter. I wanted the senior leaders to own the data so they could see their role in moving it. My primary goal was to move them off “Woe is me/you/us” powerlessness and in-fighting to a place where they could collaboratively work toward a solution. Plus, does anyone think it is a BAD IDEA to make sure agency staff live the hospital values? Solve that problem and you have likely accidentally solved a few others. And even if neither you nor the senior leaders know the answer, they will likely get you the resources to find it.
These past two essays have illustrated ways to improve everyone’s understanding of the data. If you see this as a way to add value to you and your position, great! If you see it as a way to stop spinning your service wheels by picking concrete targets and making tangible progress, even better! But my Machiavellian plan was to, in some small way, help you push your organization to be more data-literate. Making them better consumers of data makes them better prepared to pursue the quadruple aim: patient experience, clinical quality, and employee well-being, all while being good fiscal stewards.
1 By the way, screaming down staff on service is never the right approach, but more on that some other time.
2 This opens the entire topic of sampling. While it is important, I don’t want to get sidetracked here, so I will discuss that another time.
3 It should be noted that some of this work requires knowing WHEN a patient was in the Emergency Department. You send that information to your vendor as part of the data used to survey your patients, so they should be able to get it back to you. NRC even has a utility on their website that allows you to download the survey data with those demographics already attached. Other vendors may as well. Worst case scenario is that you can simply get the data by month or day and group it yourself.
4 Or lucky enough, depending on your interest in seeing a car crash into a dumpster fire.