When I started the essay on home-brewed surveys, I honestly did not think I would break it out into a three-and-counting series of essays. But here we are. In this essay, I will tackle the topic of questions and their response scales. As I get deeper into these essays, I continue to acknowledge that there are college classes, books, videos, and even other blogs that will spend far more time and attention on these various elements of surveying than I do. Again, I stress that my objective here is less a step-by-step guide on how to build questions than a catalog of pitfalls. Since I am writing for an audience whose “real job” is NOT creating and running surveys, I focus on the things you might not think are important but absolutely are. In this case, everyone knows a survey needs questions, but not everyone recognizes what makes a question good or bad, useful or useless.
The first thing to call out here is that my focus will be on questions designed to rate things.
Quantitative versus Qualitative
Setting aside questions used to define a respondent, like zip code, insurance company, gender, race or income, there are two basic types of questions you can ask.
- Quantitative: Did something happen? Did the nurse introduce themselves? Did the doctor use hand sanitizer before touching you? Did the lab tech label your sample in your presence? They are still based upon a respondent’s ability to recall, but they are concerned exclusively with whether something happened or did not happen. They often have scales of Yes/No/Don’t Know or, if they cover a lot of staff (Did everyone use hand sanitizer before touching you?), may be Always/Usually/Sometimes/Never.
- Qualitative: How did that make you feel? Was the nurse courteous? Was the doctor caring? Was the care team sensitive to your needs? These questions are not focused on whether something specific happened, so much as they are focused on the emotional perceptions of that event. They tend to have scales like Excellent/Very Good/Good/Fair/Poor or Very Good/Good/Poor/Very Poor. Notice that they do not focus on if it happened, but how well it connected with the respondent.
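To make the distinction concrete, here is a minimal sketch of how the two question types from the bullets above might be modeled. The class name, field names, and scale labels are my own illustrative choices, not anything prescribed by the essay.

```python
from dataclasses import dataclass

# Hypothetical scales matching the two question types described above.
QUANT_SCALE = ["Yes", "No", "Don't Know"]
QUAL_SCALE = ["Excellent", "Very Good", "Good", "Fair", "Poor"]

@dataclass
class Question:
    text: str
    kind: str          # "quantitative" or "qualitative"
    scale: list

# A quantitative question documents whether an event happened...
q1 = Question("Did the nurse introduce themselves?", "quantitative", QUANT_SCALE)

# ...while a qualitative question evaluates how the event landed with the patient.
q2 = Question("How would you rate the courtesy of the nurse?", "qualitative", QUAL_SCALE)
```

The point of pairing each question with its scale up front is that the scale is part of the question's meaning: swapping a Yes/No scale onto a "how courteous" question (or vice versa) changes what you are actually measuring.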
Both questions have value, but both also assume a perspective. The former treats the action as the important element and treats all execution as equivalent. So, a nurse who says, “Hi! My name is Katie and I will be your nurse today!” and a nurse who mumbles “ImNurseJoeandIhavebeenassignedtoyoutoday…” both would get a YES to the quantitative question “Did the nurse introduce themselves?”, even though we can see that the two are not equally engaging. On the other hand, the qualitative question generally frames the experience solely through the patients’ eyes. It is less a simple documentation of an event than an evaluation of it. After all, “courteous” or “caring” or “providing useful explanations” are clearly in the eyes of the beholder. As I have discussed before, one person’s “professional” is another person’s “rude.”
I tend to think of these questions as framing topics through my eyes versus your eyes. When I ask quantitative questions, I am defining what the important elements of an experience are and simply asking if we delivered them. It is important that the staff introduce themselves, verify your name and birthdate, and use proper hand-hygiene, so I simply want to know if they did these things. Qualitative questions swing the pendulum to the other side, by allowing the patient complete control over framing what courtesy looks like. Here, a patient may give higher marks to a nurse who did not introduce themselves, but did everything else to reduce anxiety and build a relationship.
Which of these types is more important to you is driven by what you want to do with the data once it is collected. If you want to check a box—verify for an external agency that you are doing what you are supposed to be doing or verify that an action plan is being operationalized—then quantitative questions are much better. If you want to use the data to build a better experience, then you want to understand if what we are doing is connecting with patients. My doctors may be caring, but are my patients seeing that effort? This is the value of qualitative questions.
Broad versus detailed questions
One of the biggest challenges with asking questions, especially in an environment where there is no opportunity to engage in a back-and-forth with the respondent, is creating a question that asks what you want it to ask. On one hand, you can ask a broad question, casting the net wide and allowing the respondent to define how they want to answer it. For example, you might ask a patient, “Overall, how would you rate the safety at Clinic X?” It is simple and direct. It also leaves the definition of “safety” up to the respondent. It might mean:
- Use of clinical best-practices
- Use of hand sanitizer and masks
- Current state of the physical space
- Protection of personal health information
- The social distance available in the waiting room
- How dangerous the neighborhood surrounding the clinic is
- As well as any other possible definition.
The advantage to asking a broad question is that it is simple, easy to understand, and provides an overall snapshot of the topic. It gives the patient control over how they want to prioritize their definition of “safety.” But, Joe, you say, letting them define it however they want means that I have no idea what they are thinking of when they answer the question. That is true. If a majority of patients give you low marks on safety, then you will have to figure out why they felt unsafe.1 If you want ready-made action-plan targets from this data, well, this highlights the main disadvantage of asking a broad question. It will highlight an area of opportunity, but it will not necessarily give you concrete direction.
To avoid this, some will want to create detailed questions. Instead of an overall question (or, perhaps in addition to an overall question), you could ask:
- Please rate your feelings on safety associated with these elements
- Use of clinical best-practices
- Use of hand sanitizer and masks
- Current state of the physical space
- Protection of personal health information (PHI)
- The social distance available in the waiting room
- How dangerous the neighborhood surrounding the clinic is
You have now created a question that specifically asks about concrete concepts. The advantage is that if a question reveals an opportunity, you have a clearer direction to move. These specific questions come with three distinct disadvantages, though. First, once you start down this rabbit hole, it is hard to stop. If patients give you poor scores on PHI, your first question might be, “But why?” or “Where did this happen?” or “Who is responsible?” Even a question with more specificity can leave you with additional concerns or follow-up areas.
Second, to ask specific questions, but keep the survey short, there is a tendency to combine various elements into a single question. Asking “Did you find the parking lot and office space safe?” is actually asking two questions: one about the parking lot and one about the clinic itself. What if the patient thought that the clinic was fine, but the parking lot was sketchy? They are left in a bind, and will either provide a rating for one element, or try to create an answer which is an amalgam of both. These double-barreled questions confuse patients and make the results difficult to parse.
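One crude way to catch double-barreled questions before a survey goes out is to flag any question that joins subjects with a conjunction. This is only a heuristic of my own devising, not a rule from the essay: it will flag innocent fixed phrases too, so treat a flag as a prompt for human review, not a verdict.

```python
import re

def looks_double_barreled(question: str) -> bool:
    """Flag questions that may be asking about two things at once.

    Heuristic: a standalone 'and' or 'or' often joins two subjects
    that each deserve their own rating. False positives are expected
    (e.g. 'name and birthdate'), so review flagged questions by hand.
    """
    return bool(re.search(r"\b(and|or)\b", question, flags=re.IGNORECASE))

print(looks_double_barreled("Did you find the parking lot and office space safe?"))  # True
print(looks_double_barreled("Was the waiting room clean?"))                          # False
```

The essay's example question gets flagged because the patient is being asked to rate the parking lot and the office space with one answer; splitting it into two questions removes the ambiguity.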
Third, when you start writing very specific questions, you limit the survey to only topics YOU think are important. By asking more specific questions about patient safety, you are defining the ways in which safety can be an issue. This is fine, provided you include a question for the specific element that your patients are also focused on. Otherwise, your preconception will miss useful information. You may focus on a bunch of safety concerns inside the clinic and miss the fact that patients don’t feel safe in the neighborhood where the clinic is located.2
The one thing you may notice with the descent into specificity is that you are ballooning your survey’s question count. Here is where those who want clarity and those who want a short survey will butt heads. My recommendation is that with qualitative questions, one should lean towards generality and with quantitative questions, one should lean towards specificity. If you want to validate proper procedure, a series of Yes/No questions that walk through key elements of that procedure is better. If you want to gauge how well patients view the service, then broad qualitative questions are better.
Response Categories
Equally important to the questions are the responses you provide to answer them. All response scales should have two qualities to be effective: they must be complete and unique. Being complete means having an answer for every possible option. If a survey ran a series of qualitative questions where the scale was (a) Perfect, or (b) Horrible, it would force patients to pick an extreme when their actual preference might have been GOOD, or GREAT, or BELOW AVERAGE. Some will try to account for this by adding an OTHER category as a catch-all. While this does provide an option for all answers, remember that you will have to process all those responses separately.
On the other extreme, the scale might be Terrific/Awesome/Top-Notch/Pretty Good/Above Average, etc. Here, multiple responses are perceived as equivalent by the respondent. Without each response possessing unique qualities that separate it from the others, similar sentiments will be diluted because they will fall into multiple responses. There will always be a respondent who still feels the need to give a ‘7.5’ on a 0-to-10 scale, so no set of responses will be perfectly complete and unique. But as I have said before, we cannot let the absence of perfection prevent us from striving for the best possible.5
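The two scale qualities above can be sketched as a simple lint check. This is a rough illustration under my own assumptions: literal duplicate labels stand in for "not unique," and a scale with only two options stands in for "not complete." Real semantic overlap (Terrific vs. Awesome) still needs a human eye.

```python
def check_scale(scale):
    """Return a list of problems with a response scale.

    Checks the two qualities discussed above:
    - unique: no label appears twice (after case/whitespace folding)
    - complete: more than just two extremes are offered
    """
    problems = []
    normalized = [label.strip().lower() for label in scale]
    if len(set(normalized)) != len(normalized):
        problems.append("duplicate labels (not unique)")
    if len(scale) < 3:
        problems.append("only extremes offered (likely not complete)")
    return problems

print(check_scale(["Perfect", "Horrible"]))            # flags completeness
print(check_scale(["Good", "Fair", "good", "Poor"]))   # flags uniqueness
```

A scale like Excellent/Very Good/Good/Fair/Poor passes both checks, which is one reason the standard vendor scales mentioned below are a safe default.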
For the moment, my suggestion is that you not try to reinvent the wheel and instead go with the same qualitative scales you see in any vendor-driven survey.
In the end, the question that needs to be forefront in your mind as you build out your survey questions is this: Who is your survey for? If it is for you, because you want to track specific data points across time or to check a box for your accreditation, then structure the survey to address those specific needs. There is no reason to include questions that you don’t care about or that don’t feed your primary goal for this survey. If it is for your patients, then you need to keep the questions varied and broad, so that you can capture the things that are important to your patients, even if they are not what you think is important to them or immediately under your control.
1This is related to how to use survey data, which I will address in another essay.
2You might be thinking, “But I cannot control THAT!” That is an essay on its own, but the short answer is YES, YOU CAN. You can focus on the perception of safety. You could put a security guard in the parking lot, put up cameras, or improve the lighting in public spaces.
3For those who need clarification here. Imagine you want to know how your nursing scores impact your overall hospital scores. If half of your patients give the nurses a top-box score and half do not, you can pull the data apart and look at how those who love the nurses differ from those who do not. But if your nursing scores are at 100% top-box you cannot look at its impact, because you have nothing to compare those 100% with. Without variation, you cannot have analysis.
4Though in truth, the fact that there is no variation probably indicates that you have a crappy question or a crappy response list.
5Now you can avoid this, by making your survey comprised of all open-ended questions where patients can write whatever they want. But you have just created other problems when it comes to analyzing the data, which we will discuss another time.