The Election Lesson Learned is to be Healthily Skeptical of Analytics
By Mr. HIStalk
It was a divisive, ugly election more appropriate to a third-world country than the US, but maybe we can all have a Kumbaya-singing moment of unity in agreeing on just one thing – the highly paid and highly regarded pollsters and pundits had no idea what they were talking about. They weren’t any smarter than your brother-in-law whose political beliefs get simpler and louder after one beer too many. The analytics emperors, as we now know, had no clothes.
The experts told us that Donald Trump was not only going to get blown out, but he also would drag the down-ballot candidates with him and most likely destroy the Republican party. Hillary Clinton’s team of quant geeks had it all figured out, telling her to skip campaigning in sure-win states like Wisconsin and instead focus her energy on the swing states. The TV talking heads simultaneously parroted that Clinton had a zillion “pathways to 270” while Trump had just one, an impossible long shot. The actual voting results would be anticlimactic, no more necessary to watch than a football game involving a 28-point underdog.
The (previously) respected poll site 538 pegged Trump’s chances at 28 percent as the polls began to close. Within a handful of hours, they gave him an 84 percent chance of winning. Presumably by Wednesday morning their finely tuned analytics apparatus took into account that Clinton had conceded and raised his chances a bit more, plus or minus their sampling error.
This morning, President-Elect Trump is packing up for the White House and the Republicans still control the Senate. Meanwhile, political pollsters and statisticians are anxiously expunging their election-related activities from their resumes. They had one job to do and they failed spectacularly. Or perhaps more accurately, their faulty analytics were misinterpreted as reality by people who should have known better.
Apparently we didn’t learn anything from the Scottish referendum or Brexit voting. Toddling off to bed early in a statistics-comforted slumber can cause a rude next-day awakening. Those darned humans keep messing up otherwise impressive statistics-powered predictions.
We talk a lot in healthcare about analytics. Being scientists, we’re confident that we can predict and maybe even control the behavior of humans (patients, plan members, and providers) with medical history questionnaires, clinical studies, satisfaction surveys, and carefully constructed insurance risk pools. But the election provides some lessons learned about analytics-powered assumptions.
- It’s risky to apply even rigorous statistical methods to the inherently unpredictable behavior of free-will humans.
- Analytics can reduce a maddeningly complex situation to something more understandable even when it’s dead wrong.
- Surveyors and statisticians are often encouraged to deliver conclusions that are loftier than the available data supports. We humans like to please people, especially those paying us, and sometimes that means not speaking up even when we should. “I don’t know” is not only a valid conclusion, but often the correct one.
- Be wary of smoke-blowing pundits who suggest that they possess extra-special insight and expertise that allow them to draw lofty conclusions from a limited set of data that was assembled quickly and inexpensively.
- Sometimes going with your gut works better than developing a numbers-focused strategy, like it did for Donald Trump and for doctors who treat the patient rather than their ICD-10 code or lab result.
- Confirmation bias is inevitable in research, where new evidence can be seen as proving what the researcher already believes. The most dangerous bias is the subconscious one since it can’t be statistically weeded out.
- A study’s design and its definition of a representative sample already contain some degree of uncertainty and bias.
- Sampling errors have a tremendous impact. We don’t know how many “hidden voters” the pollsters missed. We don’t know how well they selected their tiny sample of Americans, each of whom represented thousands of us who weren’t surveyed. Not very well, apparently. (The first sketch after this list shows how little of that risk a reported margin of error actually captures.)
- Response rates and method of outreach matter. Choosing respondents by landline, cell phone, email, or regular mail, and even choosing when to contact them, will skew the results in unknown ways. Most importantly, a majority of people refuse to participate at all, which likely leaves whatever cohort they belong to unrepresented in the results.
- You can’t necessarily believe what poll respondents or patients tell you since they often subconsciously say what they think the pollster or society wants to hear. The people who vowed that they were voting for Clinton might also claim that they only watch PBS and on their doctor’s social history questionnaire declare their unfamiliarity with alcohol, drugs, domestic violence, and risky sexual behaviors.
- Not everybody who is surveyed shows up, and not everybody who shows up was surveyed. It’s the same problem as waiting to see who actually visits a medical practice or ED. Delivering good medical services does not necessarily mean effectively managing a population.
- Prediction is best compared with performance in fine-tuning assumptions. The experts saw a few states go against their predictions early Tuesday evening and, at that moment but too late to matter, applied that newfound knowledge to create better predictions. Real-time analytics deliver better results, and even an incompetent meteorologist can predict a hurricane’s landfall right before it hits. (The second sketch after this list plays out that last-minute revision with made-up numbers.)
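To make the sampling-error and response-rate bullets concrete, here is a minimal Python sketch. Every number in it is hypothetical, invented purely for illustration. The point: a poll’s reported margin of error accounts only for random sampling noise, so a modest systematic bias from hidden voters, refusals, or untruthful answers can push the truth outside the interval without any statistical warning.

```python
# Minimal sketch of poll margin of error vs. systematic bias.
# The sample size, poll result, and bias figure are all hypothetical.
import math

n = 10_000   # hypothetical respondents pooled across many polls
p = 0.48     # hypothetical share saying they will vote for candidate A

# Standard error of a proportion and the usual 95% margin of error:
# this is the only uncertainty the headline number admits to.
se = math.sqrt(p * (1 - p) / n)
moe = 1.96 * se
print(f"Reported: {p:.0%} +/- {moe:.1%}")        # 48% +/- 1.0%

# A modest systematic error (hidden voters, refusals, untruthful
# answers) sits entirely outside that interval, and nothing in the
# sampling math warns you it's there.
hidden_voter_bias = 0.02
print(f"Plausible truth: {p + hidden_voter_bias:.0%}")   # 50%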
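And for the prediction-versus-performance bullet, a toy beta-binomial sketch of how a poll-based win probability gets revised once real returns arrive, which is exactly when the revision stops being useful. Again, every number here is made up; this is not anyone’s actual forecasting model.

```python
# Toy sketch: updating a pre-election win probability as returns arrive.
# The prior pseudo-counts and the vote tallies are hypothetical.
from statistics import NormalDist

def win_prob(a, b):
    """P(true vote share > 50%) under a normal approximation to Beta(a, b)."""
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return 1 - NormalDist(mean, var ** 0.5).cdf(0.5)

# Prior: polls put the underdog at 48% with roughly a 1.6-point
# standard error, encoded as Beta(480, 520) pseudo-counts.
alpha, beta = 480.0, 520.0
print(f"Pre-results win probability: {win_prob(alpha, beta):.0%}")   # ~10%

# Hypothetical early returns: 1,000 real votes counted, 540 for the
# underdog. Conjugate beta-binomial update of the prior.
alpha += 540
beta += 460
print(f"After early returns: {win_prob(alpha, beta):.0%}")           # ~81%
```

A few hundred real votes swing the hypothetical forecast from 10 percent to 81 percent, which is roughly the shape of 538’s election-night lurch from 28 percent to 84 percent: the model was never wrong about the votes it had seen, only useless about the ones it hadn’t.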
It’s tempting to hang our healthcare hat on piles of computers running analytics, artificial intelligence, and other binary systems that attempt to dispassionately impose comforting order on the cacophony of human behavior. It’s not so much that it can’t work; it’s that we shouldn’t become complacent about the accuracy and validity of what the computers and their handlers are telling us. We are often individually and collectively as predictable as the analytics experts tell us, but sometimes we’re not.