I have written a lot about how survey data is used in organizations, but I have not spent much time talking about the impetus or act of surveying.  To that end, I will start an occasional series on various aspects of surveying and survey construction.  There is no better place to start than at the beginning.  Every survey starts with someone deciding that a survey is the best answer to their research question. 

But the road to hell is paved with the best of surveying intentions.  I have seen a lot of data collected for no good purpose.  I have seen data collected that did not address the need.  I have seen data collected that was valuable for a window of time, but whose collection now continues on pure inertia.  One of the biggest drags on PX resources, audience attention, and patient goodwill is the zombie survey and its zombie report, trudging slowly onward in search of brains.  The best way to prevent this is to never start.  Once a survey is launched, either internally or with a vendor, it will likely generate just enough momentum and stakeholders to keep it going.  This essay, then, will address some things to weigh before you commit to running a survey.  Let us work through the five traditional journalistic questions.

WHY

Why do you want to run a survey?  This seems like one of the most basic questions, and yet it is one that often goes unasked.  Someone wakes up one morning and decides that they need to survey their patients/stakeholders/employees for some reason, and it meets little opposition because we have been properly indoctrinated that measuring people’s opinions is an essential good that needs no justification.  To be clear, I think that every industry would be better if it spent more time listening to its employees and customers.  Too often, though, the data is not meant to help understand the audience but serves some other purpose.  Reasons like: “I need a number” or “I need to fill a box” or “I need to validate a process.”  Usually, the leader woke up to an email from their boss demanding some metric to demonstrate productivity or compliance with an initiative.  Or the accrediting body wants to see some demonstration of responsiveness to the needs of key stakeholders.  While I certainly understand that this motivation generally comes from outside the department, and so is difficult to resist, I will also say that it is one of the worst reasons to conduct a survey.  It is a hammer looking for a nail.  Surveys like this often get stood up quickly with little review, and in the best-case scenario they will give a number to put in a box and allow someone to check a box, but they will not be even a pebble in the effort to change the course of a river.1

My response to people in this situation is that there are plenty of numbers out there.  You could gather qualitative feedback.  [More on that later.]  You could gather data using other methodologies.  [More on that later as well.]  Even if you push back and say, “I need a QUANTITATIVE number.  Ideally, something with decimal places!” I will point out that there are plenty of numbers to choose from.  The vast majority of number-filled boxes in any hospital are filled with numbers pulled from the electronic health record, incoming call dispositions, accounts payable records, or any of a dozen other measures.  Most departments spend their whole corporate lives feeding numbers into the beast without having to photocopy a single survey.

Notice that most of the reasons listed here are self-centered.  You are looking to validate YOUR new process.  You don’t care if anyone LIKES your new process.  Be honest: even if the survey is about a new billing process and one question is “Do you find this easier/better than the last process?” you don’t care what people say, because you have already implemented the new process.  The best test here is this: if your survey asks for an opinion on the service provided, and everyone says, “IT SUCKS!” are you going to change anything?  If you say yes, then go ahead with the survey.  But if you say no, then why are you bothering people for suggestions you won’t implement?

I have seen quite a few hospitals change the way they schedule patient encounters in the clinic, whether by centralizing the system, introducing more phone trees, or even switching completely to online scheduling.  All this work gets done to streamline or cut costs for efficiency.  After it is all done, someone says, “Hey, let’s ask our patients and our doctors if they like it!”  This should be a red flag, since time and treasure went into building the new way and there is no appetite for additional changes.  If you cannot change it, don’t bother asking about it.

“Oh, but Joe, we might be able to make a few tweaks if it is not popular.”  No, you can’t.  Ask the powers that be who built the new system if changes can be made after the fact.  Go ahead.  I’ll wait.  My guess is that they said, “Well, if the changes are feasible, maybe, but the architecture is already built.  Put a ticket in and maybe we can look at it in the next fiscal year,” which is corporate-speak for NO.  The only thing you can do after the fact is a better job of educating patients about the change, which, frankly, you should be doing already.  Do you really need a survey to tell you that change should be managed?  That is why there are people dedicated to Change Management.

If the desire for surveying is to find out the target’s feelings so you can manage behaviors, processes, or perceptions, then survey away!  But if it is just to document that you did something, there MUST be a better, simpler, more cost-effective way to meet that goal.

WHO

This question is not about the target audience as much as the owners of the survey.  Yes, you should be thoughtful about the target of the survey.  But that is something you have already thought of.2 What you probably haven’t thought about is how this survey is going to be designed and executed.  I will put a pin in the actual survey construction—the questions, the response scale, etc.—as that deserves more attention at another time.  Here I am focused on ownership of the execution.  This includes:

  • Who is responsible for converting questions into a mail-out or electronic survey?
  • Who is managing survey returns (i.e., who is doing the data entry on all the surveys that come back)?
  • Who is validating the outreach program?
  • Who owns the discussion of survey questions or execution changes?
  • How are results distributed?

People often don’t realize that this work needs to be done, or they think that it won’t take much time.  I remember an outpatient lab system that wanted to do surveys, and they opted for mail-back surveys, given their concern over the technology divide of their elderly, rural patient population.  Survey returns were sporadic, since they did not establish a delivery protocol beyond “everyone distribute to your patients.”  They still got about 50-100 back every week, and each took about a minute to data-enter (a bit more when there were comments attached, and there were ALWAYS comments attached).  While this does not seem particularly arduous, it still meant that someone had to spend at least an hour or two on data entry every week.  Since they did not establish this process at the beginning, finding someone to manage this monotonous task every week was only slightly harder than finding someone to switch shifts with you so you can have Christmas Eve and Christmas Day off.

This, though, often pales in comparison to managing the results.  Again, people think that with an online survey there will be no need for any additional summarizing or analysis.  But with the lab survey, the online host certainly summarized the data, but did not allow any crosstabs, so while the team could see the overall scores, they could not easily see their individual outpatient lab scores without downloading the data and doing that work themselves.  Building pivot tables in Excel from the data may not take a lot of time, but it takes time.  Since no one on that team had a dedicated resource, it took a champion to step up and do the work.  As the survey continued, adding trends became another layer of complexity that, again, is not hard in itself, but requires some level of data skills as well as time.  Without the largesse of that one volunteer, all this time would need to be accounted for, literally as well as figuratively.
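To make that "download the data and crosstab it yourself" work concrete, here is a minimal sketch in Python with pandas.  Everything in it is illustrative: the column names (lab_site, survey_date, overall_score) and the scores are invented, not from any real export.

```python
# A sketch of the crosstab/trend work described above, assuming a CSV
# export with hypothetical columns: lab_site, survey_date, overall_score.
import io
import pandas as pd

csv_export = io.StringIO(
    "lab_site,survey_date,overall_score\n"
    "North,2024-01-08,5\n"
    "North,2024-01-09,4\n"
    "South,2024-01-08,3\n"
    "South,2024-02-12,5\n"
)
df = pd.read_csv(csv_export, parse_dates=["survey_date"])

# Per-site averages -- the breakdown the online host would not provide.
site_scores = df.groupby("lab_site")["overall_score"].mean()

# The trend layer: average score per site per month.
trend = df.pivot_table(
    index=df["survey_date"].dt.to_period("M"),
    columns="lab_site",
    values="overall_score",
    aggfunc="mean",
)
print(site_scores)
print(trend)
```

None of this is hard, which is exactly the point in the text: it is a recurring chore that needs an owner, not a one-off script.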

The reality is that this work puts home-brewed surveys in a no-man’s-land: not enough work (and not sufficiently connected to the department’s traditional book-of-work) to justify an FTE, but more work than can easily be done by someone working on the margins of their normal job.  Again, not a deal-breaker, but something that needs to be addressed in the formation process.


WHAT

“Do a survey” seems to be the default answer to the need for any data.  As referenced in WHY, just because you want data to validate process does not mean that you have to execute a survey.  There are plenty of ways to get information, even information from humans (aka “employees” and “patients”) other than sending them a link or a piece of paper. 

In fact, for many encounters, a survey is the worst way to gather data.  It lacks some key elements that may be useful for your research question.

  • It is impersonal.  While some surveys may benefit from anonymity,3 if you are concerned about building a relationship with the patient, letting them feel valued and important, you can do a lot better than an 8-question survey emailed to them.
  • It lacks detail and texture.  While many surveys have a “give us your thoughts” verbatim question, the numbers are the things that matter.  While some surveys may generate more verbatim responses than others, they are likely to be perfunctory “The care was great!” “Nurse Katie was so sweet!” etc.  If you want real personal feedback, you won’t get it from a survey.
  • It has no “choose your own adventure” ability.  The survey determines what variables are important and what items are rated.  While a well-designed survey should account for most important topics, there is always competition between those who want a SHORT survey and those who want a VARIETY of questions.  We have all seen responses where someone will give top-box ratings to every question but give a ‘7’ on the one question that matters.  Or you will see a patient give top scores to two nursing questions but give a bottom-box score to the third.  You want to know what happened, but you will never know.  Surveys will find what you want to know but are not good at finding the stuff you didn’t know you needed to know. 
  • They are delayed responses.  Whether the survey responses come 14 days after discharge or 48 hours after discharge, you are officially too late to address any concerns referenced.  The reason some people are so hot for in-the-moment surveys (an iPad survey at discharge) is that they want to know what is wrong while there is still time to fix it.  The dirty little truth, though, is that EVERY survey is too late to address concerns.  A survey evaluates an experience that has ended.4 It is a rearview mirror.  Even if the experience ended only a few seconds ago, it is still over.  The only real value in survey data is in altering approaches for the next encounter.
  • It requires effort.  The activation energy to click a link and answer a few questions is still effort.  You can reduce the cost of taking a survey, but you can never make it ZERO.  This is why you are lucky if you get a 25% response rate.  Even at 35%, which is fantastic, you still have two-thirds of your patient population that could not be bothered.  If you are surveying a thousand patients, the volume makes up for the response rate.  But if you are surveying a small subset of patients, or even your physicians, that small response rate can mean you are judging performance on, maybe, 25-45 responses.  If you squint, that might be acceptable, but once you want to break the data out by time period, clinic, etc., the data is meaningless.  My experience is that doctors have zero compunction about telling you that, to your face, in the most unflattering way possible.
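The arithmetic behind that last bullet is worth seeing once.  A back-of-the-envelope sketch, where the population sizes (a thousand patients, 150 physicians, four clinics) are illustrative numbers, not from any real survey:

```python
# Back-of-the-envelope response counts at a 25% response rate.
# All population sizes below are illustrative examples.
RESPONSE_RATE = 0.25

def expected_responses(population: int, rate: float = RESPONSE_RATE) -> float:
    """Expected number of completed surveys from a given population."""
    return population * rate

# A thousand patients: volume covers for the response rate.
print(expected_responses(1000))  # plenty to work with

# A physician group of 150: already thin.
docs = expected_responses(150)
print(docs)

# Split those responses across, say, four clinics and each cell is
# far too small to support any conclusion.
print(docs / 4)
```

A thousand patients yields about 250 responses; 150 physicians yields about 37, and cut four ways that is roughly nine per clinic, which is why the per-clinic breakdowns become meaningless.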

For many feedback needs, a simple conversation can work well.  For example, if your food service leader is one of the people who do leader rounding in your hospital, they have a readymade opportunity to ask the question, “So how was the food?”  “Was the server polite?” etc.  It has the value of letting the patient know you care.  It has the value of allowing follow-up questions.  It has the value of taking the conversation where the patient wants to go—be it about food taste, food options, timeliness, courtesy, etc.  It really has the value of addressing any concerns or questions before the care is over.  Yes, it is retail and not wholesale—you must talk to patients one at a time—so it is less attractive as a strategy.  But, if you really want to know how your patients are experiencing things and if there is an information gap that you can close immediately to improve experiences, it is far superior to a survey.

All health care organizations were required to have a Patient Family Advisory Council by the end of 2025.  These are also great places to get feedback on various questions you have.  Getting agenda time on the next PFAC and burning an afternoon with them will likely be more fruitful than spending 10x the time trying to build and conduct a survey.  Even if you want to conduct a survey, talking with the PFAC will tell you what sorts of questions you should be asking.  So, before you automatically have the knee-jerk response of “SURVEY!” consider if this is really the mode that will get you the information you need.

WHERE

There are two questions here.  First, where are you going to execute the survey: at the point of care, or as a mail-out/email/QR code?  Building on what I have said before, your manner of execution can drive results.  The further respondents get from an event, the less clear it becomes, so having them take a survey even a day or two after an event will change how they evaluate it.  Personally, I have been livid at a consumer experience only to have that anger fade as I got further from it.  Conversely, I have had experiences that, only upon reflection, made me angry.  I have had positive experiences where I wanted to take the survey at the bottom of my receipt, but it got stuffed in my wallet or pocket, and when I pulled it out at the end of the day, I threw the wadded-up paper in the trash, not even remembering that there was a survey I wanted to take.  Maybe I am the only one who has these experiences, but I suspect this explains a lot of people’s behavior, given that I am not the only one who doesn’t take all the surveys they are offered.


The second question is where the survey data is housed.  This has less to do with security than with access.  There are online vendors who will host your survey, but their free tiers may limit who can access the results.  Results that are downloaded and stored in a public folder for people to access are prone to accidental or purposeful edits or deletion.  It is important to have some sense of custody for survey data, or even well-intentioned people will look at the wrong data, or at the right data through the wrong lens.

WHEN

This question, especially when combined with elements of the previous issues, can be the most detrimental to useful data, next to crappy survey questions.  Many home-brew surveys, like those for lab or food service, touch so many people, often the same person multiple times, that their owners want an approach that doesn’t ask EVERYONE to fill out the survey EVERY TIME.  To that end, they need to create triggers for when the survey should be distributed.  Useful triggers have two qualities:

They need to be clear.  For food service, the trigger might be “after a patient’s second meal.”  For lab, it might be “all lab events on Monday, from 11am to 1pm.”  Any trigger that is not clear tends to be overlooked or misused.  I worked with a lab doing an internal survey, and their request was that every lab distribute 100 surveys a week.  This was vague and had no trigger.  Or, rather, the trigger was on Wednesday afternoon, when the director sent out an email asking everyone how many surveys had been distributed so far.  This meant that the vast majority of surveys were distributed on Thursday and Friday.
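The test of a clear trigger is that anyone (or any script) evaluates it the same way every time.  A minimal sketch of the two example triggers above; the function names and encounter fields are hypothetical:

```python
# A clear trigger is a rule that can be evaluated identically by any
# staff member or system. Both rules mirror the examples in the text;
# the function names and inputs are illustrative.
from datetime import datetime

def food_service_trigger(meals_delivered: int) -> bool:
    """Survey after a patient's second meal, and only then."""
    return meals_delivered == 2

def lab_trigger(event_time: datetime) -> bool:
    """Survey all lab events on Monday between 11am and 1pm."""
    return event_time.weekday() == 0 and 11 <= event_time.hour < 13

print(food_service_trigger(2))                    # second meal: survey
print(lab_trigger(datetime(2024, 1, 8, 11, 30)))  # Monday 11:30am: survey
print(lab_trigger(datetime(2024, 1, 9, 11, 30)))  # Tuesday: no survey
```

Contrast this with “every lab distributes 100 surveys a week”: there is no input you can feed into a rule like these to decide whether this patient, right now, gets a survey.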

They need to be enforced.  We cannot force a patient to take a survey, but we need to be clear about the expectations for survey delivery.  Survey vendors are very attentive to survey execution; I have never worked for or with a vendor who could not provide me with an audit trail for all of my patients.  I would not expect that level of attention to detail for an internal survey, but too often I have seen people fail to honor the triggers for a survey and suffer no consequences.  This is not a call for “sackcloth and ashes” consequences, but a leader who continually fails to meet their triggers and expectations should be asked WHY they are unable to do what everyone else does.  If NO ONE is meeting the expectations for execution, then, well, one should consider changing those expectations.  Either way, by not measuring the process, you are missing out on important information.

Without triggers, we also open the door to bias.  I have mentioned in a previous essay that it is easy to ‘forget’ to give the survey to a patient who is perceived to be angry and unlikely to give us our desired scores.  I have a lot to say about efforts to sculpt the data, but I will save them for another day.  I will simply say that these efforts often fail because you don’t know what is in the heart of a patient, and not giving patients a chance to speak, even vent, only means that you will force them to more public spaces for this expression, like Facebook or Google.

It bears repeating that I love data and I love surveying.  But surveying is a tool, and tools that are perfect for some needs are suboptimal for others and completely useless for still others.  My favorite coffee mug is perfect for hot beverages, adequate for tenderizing meat, and useless for straining pasta.  Using a tool not because it is the best one for the job but because it is available or easy may not set you up for failure, but it will certainly make the task more complicated and less likely to be completed.  Worst of all, you will be bothering people for no real value.

1This assumes that the information you receive shows that you missed the mark with your work and need to course-correct.  This data will rarely be used to change the course of a river.

2Um, right?!?

3Most surveys are less anonymous than you might think.  This isn’t garden-variety paranoia: every effort is made to link your data back to you.  Not in a punitive way, but to bring in additional data, like your gender, age, and zip code, as well as any other useful information.  Call me crazy, but I would wager that the drug store that puts a code at the bottom of your receipt is more interested in the opinions of people who bought pharmaceuticals or spent a lot of money than in the opinions of a person who bought a candy bar and an energy drink.

4Unless your plan is to review their responses while they are sitting there and quizzing them on their answers.  That seems, as the kids would say, very cringe.
