There is no more disturbing four-letter word for many people than MATH. While people have varying opinions on the subjects they took in high school and college, few create a clearer dichotomy: either you loved math, or you tolerated, survived, or despised it, and sometimes it feels like there is no other side at all. Most math nerds kept their nerdiness well-hidden, either because they did not want to suffer the slings and arrows, or because they didn’t want to help all their friends with their algebra homework.
This has created a belief that math nerds are different and separate from the rest of humanity. The nerds “just get it” and the rest “just don’t.” This antipathy is less prevalent in other school subjects. People usually couch their dislike of history or English with a personal preference. “I hated World History, but I really liked the class I took on the Civil War…” “I couldn’t finish Jane Eyre, but I liked Little Women and some of the Shakespeare plays…” But no one in human history has uttered the sentence “I hated all my math classes, but that quadratic equation kept me on the edge of my seat!” Perhaps my brothers and sisters in philosophy can relate, as that subject can also draw a lot of yawns and eye-rolls.1 It seems that beyond a lack of interest in math, claiming to know nothing about it is a badge of honor for some. So much so that I feel I must put a caveat on this essay, telling the audience that it is not an essay about math so much as an essay about how math can foster miscommunication between clinicians and patients.
Math is one of those in-or-out topics, predicated upon a belief that it is a special skill you either have or you don’t. I don’t often hear someone say, “I don’t have a mind for words,” but I often hear people say, “I don’t have a mind for numbers.” Clearly society accepts and promulgates this belief. The trick, though, is that while there are child prodigies in many fields, including math, most people who are good at math are good at it because they worked hard to get good at it. It didn’t come easy for them; they had to go get it, usually by force. I used to say that I loved math, but math did not love me. Now I just say that math and I have an uneasy alliance.
Perhaps it is a product of how math is taught, but this shaky grasp of core tenets leaves many people uncomfortable in any math conversation, even one based upon broad concepts like percentages or likelihoods.2 This chasm between those who get it and those who don’t can cause significant problems in life and in healthcare. I have delivered several presentations at grand rounds on how to discuss math with patients. This is not because doctors don’t understand statistics,3 but because they worked hard at processing math concepts in their academic studies and in journal articles, and they sometimes assume a foundation that many patients just don’t have.
Simply put, clinicians often don’t know how a patient doesn’t understand something, so they cannot present the math in a way that a patient can grasp. Certainly, there are concepts that are complicated. But most of the gap lies in how people define things in ‘real life’ versus how they are defined in science. There are also concepts that simply don’t make intuitive sense. This already feels like a series of essays, so here I will focus on how percentages confuse patients and cause them to overreact, either in a rush to do something or in resistance to doing something.
False Positives
Not much can inspire a patient to panic and rush to treatment more than a positive test result, and this is understandable. People are taught that timely intervention is the best course for beating a disease. But those who read my essay on Type 1 and Type 2 error know that every test carries with it some amount of error. What most patients don’t know is that this applies to medical testing as well. Tests designed to detect the presence of a specific disease are never going to be 100% perfect. They will have false positive results and false negative results. That is, there are no tests that are never wrong. OK, genetic tests looking for chromosomal abnormalities or single-gene disorders can come close to 100%, but for the most part genetic testing identifies a trait that is linked to an increased likelihood of a disease or is associated with its manifestation. Genetic tests that speak to the possibility of cancer are not likely to say anything with 100% confidence.
For patients, though, these subtle differences between presence, possible presence, or likelihood of a disease are lost. Since these tests generate anxiety, many sleepless nights before and after, and lead to life-changing decisions, the implications are more dramatic in a patient’s mind than the margin of error associated with survey results. If the test says YES, the patient assumes that they have the disease. Given the infrequency of many diseases, though, a positive test is far more likely a false positive than real evidence of disease. I know this seems counterintuitive, so let us examine an example.
Here is a simple example. Imagine that there is a disease that strikes 1 in 1,000 people. Imagine there is a test for this disease that has a 10% false positive rate (it says the disease is present when it is not) and a 5% false negative rate (it says the disease is not present when it really is).5 We test 100,000 people. Here are the likely results of this testing.
- We would expect 100 of these people to actually have the disease, since the disease strikes 1 in 1,000 people.
- Since the test has a 5% false negative rate, we would expect 5 of the 100 people who have the disease to receive news that they do not have it. The remaining 95 people would test positive.
- Since there is a 10% false positive rate, we would expect that of the 99,900 people who do not have the disease, 9,990 of them would receive news that they tested positive.
- So, a total of (95 + 9,990) 10,085 people tested positive. Of these, (9,990 / 10,085) 99.06% do not actually have the disease.
- Put in terms of odds, the chance of a patient having the disease is 1-in-1000. If they test positive, their chance of having the disease increases to 1-in-106. So, their chances increase, to be sure, but there is still a 99% chance that the positive test result is simply an error.
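For readers who would rather see the arithmetic laid out, the whole example can be checked with a short script. The prevalence and error rates are the invented numbers from the example above, not real clinical figures:

```python
# Invented numbers from the example above, not real clinical figures.
population = 100_000
prevalence = 1 / 1_000        # the disease strikes 1 in 1,000 people
false_positive_rate = 0.10    # healthy people wrongly flagged
false_negative_rate = 0.05    # sick people wrongly cleared

sick = population * prevalence                       # 100 people
healthy = population - sick                          # 99,900 people

true_positives = sick * (1 - false_negative_rate)    # 95 correct positives
false_positives = healthy * false_positive_rate      # 9,990 erroneous positives
total_positives = true_positives + false_positives   # 10,085 positives in all

print(f"{false_positives / total_positives:.2%} of positives are errors")  # 99.06%
print(f"1 in {total_positives / true_positives:.0f} positives is real")    # 1 in 106
```

Change the two error rates and rerun it: the striking thing is how little they move the result compared to the prevalence.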
Now clinicians will quickly point out that these tests are not just randomly given out. A patient’s medical history or symptoms lead to a test being ordered. This is true. But a test’s positive predictive value is driven more by the prevalence of a disease in the population than by the test’s accuracy. That is, the rarer the disease, the more likely a positive result is an error, even if the test is highly accurate.6
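The prevalence effect is easy to demonstrate. A hypothetical sketch, reusing the example’s invented 10% false positive and 5% false negative rates, computes the positive predictive value (the share of positive results that are correct) at a few prevalence levels:

```python
def positive_predictive_value(prevalence, fp_rate=0.10, fn_rate=0.05):
    """Share of positive test results that reflect actual disease."""
    true_pos = prevalence * (1 - fn_rate)       # sick and correctly flagged
    false_pos = (1 - prevalence) * fp_rate      # healthy but flagged anyway
    return true_pos / (true_pos + false_pos)

for prev in (1 / 10, 1 / 100, 1 / 1_000, 1 / 10_000):
    ppv = positive_predictive_value(prev)
    print(f"prevalence 1 in {round(1 / prev):>6}: PPV = {ppv:.1%}")
```

With these rates, the PPV falls from roughly 51% at 1-in-10 prevalence to under 1% at 1-in-1,000, even though the test itself never changed.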
Moreover, from the patient’s perspective, they do not see themselves as one of 100,000 people taking this test. They see themselves as an N of 1. Percentages don’t matter to patients. They do.7 If you have no great concern over the short-term anxiety of a patient with a false positive, perhaps you will have some for the five patients walking around quite relieved that they tested negative, even though they are ticking time bombs.

I know, I know, you were led to believe there would be no math. The point of this example is NOT to say tests are unreliable. It is to illustrate how a clinician looks at a test result compared to how a patient looks at it. While a doctor can manage that conversation by trying to explain what a test result means (to varying degrees of success), in modern healthcare a patient can see that result in their patient portal and have already rewritten their will and said their goodbyes before the doctor can even review it.
For a doctor, a positive designation is not a certainty. It is the start of a conversation, with some other testing, work-ups, or monitoring. One out-of-range prostate-specific antigen (PSA) test does not sound the alarm bells for treatment. It raises a flag for further testing and monitoring. But in the patient’s eyes, without a proper understanding of the math behind the test, it can lead to a series of suboptimal decisions.
Side Effects
On the other side of the percentage conversation is how patients process the likelihood of side effects from medications. There are many who blithely skip over their doctor’s description of potential side effects. After all, the doctor wouldn’t prescribe it if they thought it would cause more harm than good. But as with many things in healthcare, patients are inundated with side effects from TV ads, the sheets printed in two-point font attached to their paper bag of medicines, or the click-through acknowledgements in their patient portal. Here are three ways that patients can get confused about the likelihood of side effects.
- Patients tend to group side effects, assuming that if they get ONE, they will get them all. Instead of understanding that each is a possibility in isolation, they see them as uninvited family members all coming over for Thanksgiving. If a patient thinks it is an all-or-nothing proposition, they may be hesitant to take a medication, since one benefit against a dozen side effects does not seem like a good deal.
- Along with this, patients will often think that a side effect is likely to happen, not simply possible. Some side effects are more likely than others, and some carry greater downsides than others. But, especially with TV advertisements, patients hear the long list of possible side effects and do not know which of them are worth worrying about. The FDA requires the reporting of major risks regardless of how frequently they occur. So, dizziness, blurry vision, and infection of the perineum are all listed, even though they are not all likely to occur, let alone equally likely to occur. This can be especially confusing when these risks are mixed in with more likely, if less disturbing, side effects like nausea.
- Patients can get confused when they hear the probability of side effects. They are told that a drug they take daily carries with it a 30% chance of weight gain. But they don’t know if that is a one-time 30% chance (as in, 30% of the people who took the drug experienced weight gain) or a cumulative percentage (every day I take the drug, I run a 30% chance of gaining weight, meaning over three days, I am definitely gaining weight). This may seem silly, but it can lead to serious consequences. A study found that many men in Australia did not take their SSRIs consistently because they misunderstood the 30% chance of sexual dysfunction. Instead of thinking that 30% of men who take the medication experienced sexual dysfunction, they thought it was a sexual lottery, where on any given day there was a 30% chance that they could not perform. This meant that they would not take their pills on Friday, Saturday, or any day that they wanted to maximize their ability to close the deal. Anyone who knows how SSRIs are supposed to work would see this as a very dangerous way to take the medication.
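The “daily lottery” misreading in that last bullet compounds quickly, which is exactly what makes it so alarming to a patient. Here is a sketch of the mistaken math; 30% is the figure from the example above, not a claim about any particular drug:

```python
p_daily = 0.30  # the misread "each day I take it" chance from the example

# Under the correct reading, roughly 30% of patients ever experience the
# side effect: a one-time figure. Under the mistaken reading, each day is
# an independent draw, so the odds of at least one bad day pile up fast.
for days in (1, 3, 7, 30):
    p_at_least_once = 1 - (1 - p_daily) ** days
    print(f"after {days:>2} days: {p_at_least_once:.1%} chance of at least one bad day")
```

In the mistaken model, the chance of at least one bad day passes 65% by day three and is a near-certainty within a month. No wonder the Friday dose got skipped.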
So false positives can make people overreact in one direction, and a misunderstanding of the probability of side effects can make them overreact in the other. While I am sure no one reading this would ever make such mistakes, it is clear how many people could, and how critical it is to have a trusted clinician who can explain away these concerns.
1I graduated from undergrad with a major in political science and co-majors in philosophy and computer science. I often joked that if I could have gotten a co-major in diagramming sentences, I could have cornered the market on everything everyone hated.
2I am not talking about how to calculate them, simply how to understand them, like what it means to be an 80% free-throw shooter in basketball, or a 90% chance of rain tomorrow.
3Of course some doctors DON’T understand math either or will confuse the statistics associated with biomedical research with the statistics associated with social science research.
4OK, to be clear, genetic testing can be spot-on when looking for a single-gene disorder or chromosomal abnormalities. I can never say never to anything, except, apparently, I can say always to the sentiment that I can never say never.
5I am choosing numbers here to help make the math in the example simple. You can search the internet, though, and find that these numbers are reasonable, if not perhaps a bit conservative.
6Forgive me a brief example of this. If a disease is present in only 1-in-a-million people, and the test is 99% accurate, we would still have roughly 10,000 false positives and 1 actual positive in a million tests.
7 This is a concept we will return to many times in this series of essays.