Former Republican congressman-turned-TV pundit Joe Scarborough doesn’t buy Nate Silver’s numbers. For Scarborough, they just don’t add up.
Speaking on Morning Joe on Oct. 29, when Silver’s FiveThirtyEight blog put Obama’s chances at reelection somewhere around 75%, Scarborough declared: “Both sides understand [the presidential election] is close, it could go either way, and anybody that thinks this race is anything but a tossup right now is such an ideologue they should be kept away from typewriters, computers, laptops and microphones for the next ten days because they’re jokes.”
Let’s set aside the First Amendment implications of this statement for now and give the congressman a pass for political hyperbole. The real problem for Scarborough and others who say Silver’s (supposedly) liberal politics are the real force behind his so-called “prediction” is that Silver isn’t making a prediction at all.
Rather, Silver is placing odds on the outcome of the presidential race based on extensive analysis of multiple data sources. When Silver’s blog declares that Governor Romney has a 13.7% chance of winning tomorrow’s election (as it does as of this writing), that doesn’t translate into Silver saying President Obama’s reelection is a lock. It simply means that based on Silver’s predictive model, if this election were held 100 times, President Obama would win, on average, 86 times to Governor Romney’s 14.
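That frequency interpretation of the odds is easy to see with a quick simulation. The sketch below is purely illustrative (it is not Silver’s model, and the 86.3% probability and function name are assumptions for the example): it treats each hypothetical election as a coin flip weighted by the model’s probability and shows that the long-run win share simply converges on that probability.

```python
import random

def simulate_elections(p_obama=0.863, n_elections=100_000, seed=42):
    """Simulate repeated elections where Obama wins each one
    independently with probability p_obama; return his win share."""
    rng = random.Random(seed)
    obama_wins = sum(1 for _ in range(n_elections) if rng.random() < p_obama)
    return obama_wins / n_elections

# The share of simulated elections Obama wins tracks the model's odds --
# roughly 86 wins out of every 100 elections, not a guaranteed outcome.
print(f"Obama wins {simulate_elections():.1%} of simulated elections")
```

The point of the exercise: a 13.7% chance is not zero. Romney winning would no more "disprove" the model than rolling a six disproves the odds on a die.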
It’s also important to remember that Silver himself doesn’t claim his algorithms are infallible, and I have no doubt that if Governor Romney wins tomorrow’s election, Silver will be the first to investigate which influencing factors his analysis may have missed.
Get Ready for Resistance
The Scarborough/Silver flap illustrates two important lessons for business analysts and LOB managers as they seek to spread the gospel of Big Data and predictive analytics in the enterprise. First, be prepared to meet fierce resistance from colleagues who don’t understand, or feel threatened by, data-driven decision-making. Many will point out instances where your “predictions” were wrong in the past, and some, like Scarborough to Silver, will claim you’re biased (and maybe even call you names).
To overcome this resistance, you’ll need to do some explaining about what predictive analytics is and isn’t. Despite the name, predictive analytics isn’t about predictions. It’s about making better business decisions based on the likely outcomes of those decisions as influenced by past and current market conditions. Nobody, or at least nobody who understands the discipline, claims that predictive analysts are all-seeing, infallible fortune tellers.
Engage and Show Your Work
Which leads us to lesson two. Always be prepared to back up your data-driven analysis by showing your work (to borrow a phrase from my school days), and become your own biggest critic. After putting in days or weeks of number crunching to answer a particularly vexing question, you’ll be tempted to show off your new insight as soon as possible. Resist the urge. Instead, question everything. Question the assumptions you made in building your models. Question the quality of the data sources you performed your analysis on. Think like your critical colleagues, tearing apart your analysis at every possible loose edge until you are confident in your conclusions.
Then don’t just tolerate colleagues who question your analysis; encourage them. Ask them to poke as many holes in it as possible. There’s a good chance they’ll point out flaws in your models and assumptions that you missed, and correcting those flaws will make your analysis even stronger. Disarm critics by making them part of the process.
By explaining the true purpose of Big Data and predictive analytics and making internal critics part of the process, you have a much better chance of successfully incorporating data-driven decision-making into your enterprise, and you may even stop all the name-calling.