## Too Much Signal

# The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t

By Nate Silver

The Penguin Press, 2012

“This is probably a good time to link to my book,” tweeted Nate Silver at 12:13 a.m. on election night, one hour after Ohio (and hence the election) was called for Barack Obama. As is generally the case with the evidence-minded statistician, he was completely right: *The Signal and the Noise* rocketed to No. 2 on Amazon shortly thereafter. Having called all 50 states and nearly all of the Senate races correctly – and endured withering ad hominem attacks from Mitt Romney supporters in the weeks before the election – he can perhaps be forgiven this rare demonstration of modest braggadocio.

Silver’s notoriety with the right wing, and his subsequent vindication and beatification by the left, have bestowed a peculiar rock-star status on someone whose field – statistics – has few rock stars. Such is his current level of fame that he has even acquired his own genre of internet meme, “drunk Nate Silver,” built on the Chuck Norris model of one-liners (*e.g.*, drunk Nate Silver is riding the subway, telling all of the passengers the date of their death). Now that Silver is famous and revered, and with some justification, one key question remains: how is that book?

The book is much like its author: thoughtful, very intelligent, but also deeply nerdy and so thorough as to raise the question of when that virtue becomes a fault. The theme, as one could glean from the subtitle, is the business of predictions – who makes predictions for a living, how do they do it, and which professions are better at it than others? Silver examines these questions across a wide variety of fields, including the financial crisis, political punditry, baseball, weather, earthquakes, economic growth, epidemics, chess, poker, Google, and the stock market. These are not subjects mentioned in passing – nearly every one gets its own chapter.

Silver knows (or was told) that he tends to get down into the weeds, and so he gives us this warning in the introduction: “There is no denying that this is a detailed book – in part because that is often where the devil lies, and in part because my view is that a certain amount of immersion in a topic will provide disproportionately more insight than an executive summary.” It turns out that the warning is insufficient to the danger at hand.

I am a chess-playing baseball fan who watches politics like it’s his job and took a course on earthquakes in college. I *like* these subjects, and after reading an early description of the book I was salivating over its arrival. And yet, now that I’ve read it, it’s surprisingly not what I wanted. Silver out-nerds me in a way that I did not think was possible, turning each chapter into a slice of undergraduate textbook. Here he is talking about some of the complexity involved with reporting economic data:

Large errors [that cause post-hoc revisions of national economic data] have been fairly common. Between 1965 and 2009, the government’s initial estimates of quarterly GDP were eventually revised, on average by 1.7 points. That is the average change; the range of possible changes in each quarterly GDP is higher still, and the margin of error on the initial quarterly GDP estimate is plus or minus 4.3 percent. That means there’s a chance the economy will turn out to have been in recession even if the government had initially reported above-average growth, or vice-versa. The government first reported that the economy had grown 4.2 percent in the fourth quarter of 1977, for instance, but that figure was later revised to negative 0.1 percent.

I grant that this is an important point, and it is difficult to convey the book’s preponderance of detail in a single excerpt, but after a while examples from the fourth quarter of 1977 begin to lose their charm. I am undeniably better off for learning about the difficulty of pinning down GDP estimates, but I confess to having not enjoyed the journey.

The chapters delve so deeply into their subjects that at some point while reading each one you could believe the entire book was devoted to that subject alone. That said, the book does not read like a series of independent essays rounded up to make a volume – Silver has an overriding point, though it is not easily encapsulated. Well, that’s not totally fair. Here it is in its encapsulated form:

xy / (xy + z(1 – x))

Yes, as you can plainly see, Silver is a proponent of Bayes’s theorem. Oh? What is Bayes’s theorem? Well, I’m sorry you’ve asked. Bayes’s theorem, formulated in the mid-18th century, is one side of a raging debate among statisticians about how to judge the veracity of statistical models. The other side of the debate is occupied by the English statistician Ronald Fisher who, writing in the early 20th century, championed a “frequentist” set of statistical methods. How should you relate to this cutthroat feud? Probably the way you do to the xkcd comic embedded in this review.

Silver’s first illustration is the familiar mammogram example: a woman in her forties has roughly a 1.4% chance of having breast cancer, and mammograms produce their share of false positives. Run the numbers through the theorem and a positive result means your odds of *actually* having cancer have only gone up from 1.4% to 10%. Your intuition failed to account for just how low a rate 1.4% really is, and didn’t weigh those false positives heavily enough. So, thank you, Thomas Bayes.

Where Bayes’s theorem really gets going is in analyzing the probability that something is true based on some salient fact. Silver provides this example: if you found a strange pair of underwear in your significant other’s underwear drawer, what is the probability that you are being cheated on? To solve this problem with Bayes’s theorem you need to estimate three variables, the most important of which is the probability you would have assigned to the possibility you were being cheated on *before* you found the underwear. Let’s say that probability (x) was low, around 4%. Next, what is the probability that the strange underwear would be there even if you are *not* being cheated on? Let’s make that low too and put it (z) at 5%. Now, what is the probability that, if you are being cheated on, the underwear would turn up in the drawer? That’s a little more likely, so let’s put it (y) at 50%. Bayes’s theorem can now tell us that, in light of discovering the underwear, our new probability of being cheated on has gone up from 4% to 29%. That might seem a little low, but like the 1.4% base rate in the mammogram example, our “prior probabilities” are very influential.
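Since the review leans on this arithmetic twice, it may help to see it run. Here is a minimal Python sketch of the formula as written above (the function name `posterior` and the code structure are mine; the input numbers are the underwear example’s):

```python
def posterior(x: float, y: float, z: float) -> float:
    """Bayes's theorem in the form the review gives: xy / (xy + z(1 - x)).

    x: prior probability the hypothesis (cheating) is true, before the evidence
    y: probability of the evidence (the underwear) appearing if the hypothesis is true
    z: probability of the evidence appearing if the hypothesis is false
    """
    return (x * y) / (x * y + z * (1 - x))

# The review's numbers: prior x = 4%, y = 50%, z = 5%
p = posterior(x=0.04, y=0.50, z=0.05)
print(f"{p:.0%}")  # prints "29%"
```

Note how the answer is dragged down by the 4% prior: the same evidence against a 50% prior would be far more damning.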

Frequentism doesn’t eschew Bayes’s theorem outright, but it has serious problems working with prior probabilities. Frequentists want to eliminate researcher bias from statistical analysis, and their focus is on sampling error. If you want to know how voters in New Hampshire will vote, a frequentist would say the best way to find out is to put the question to every single resident of New Hampshire. The more data you have, the closer you must be to “the truth”; the smaller your sample size, the more likely it is that bias has crept into your efforts. Although the position is designed to remove bias, it has the unfortunate effect of steering researchers toward statistically significant correlations and away from real-world context.

The bigger problem…is that the frequentist methods – in striving for immaculate statistical procedures that can’t be contaminated by the researcher’s bias – keep him hermetically sealed off from the real world. These methods discourage the researcher from considering the underlying context or plausibility of his hypothesis, something that the Bayesian method demands in the form of a prior probability. Thus, you will see apparently serious papers published on how toads can predict earthquakes, or how big box stores like Target beget racial hate groups, which apply frequentist tests to produce ‘statistically significant’ (but manifestly ridiculous) findings.

Bayes’s theorem is such a consistent winner in *The Signal and the Noise* that it’s hard to justify how much time Silver spends establishing the dichotomy. It would have been an improvement for Silver to simply ignore frequentism and stick to expounding the usefulness of Bayes’s theorem – but then, I didn’t realize the relevant literature was populated with earthquake-predicting toads. Given Silver’s straw-man presentation of the opposition, it’s hard to imagine he needs to fight the good fight here. Maybe he does, but it feels out of place to mount his oh-so-detailed arguments in a book for the general public.

Nate Silver has held down some interesting jobs in his career, and his first-person accounts of them are very good. He broke out of cubicle-based drudgery by developing a formula for quantifying the value of baseball players just as Michael Lewis was writing *Moneyball*. As the demand for stat-heads rose in baseball, Silver was there to ride the wave. In 2006 the projection system Silver developed for Baseball Prospectus predicted that future Boston Red Sox second baseman Dustin Pedroia would become one of the best players in baseball. Traditional scouts had dismissed him as “not physically gifted.” After Pedroia won Rookie of the Year in 2007 and the American League MVP in 2008, Silver obtained a press credential to fill out his understanding of “what made Pedroia tick” – and learned his answer the hard way:

Pedroia walked past me as I stood on the first-base side of the infield, just a couple of yards from the Red Sox’s dugout. The scouts were right about his stature: Pedroia is officially listed at five feet nine – my height if you’re rounding up – but I had a good two inches on him. They are also right about his decidedly non-athletic appearance. Balding at age twenty-five, Pedroia had as much hair on his chin as on his head, and a little paunch showed through his home whites. If you saw him on the street you might take him for a video rental clerk.

Pedroia turned to enter the dugout, where he sat all by himself. This seemed like the perfect time to catch him, so I mustered up my courage.

“Hey, Dustin, ya got a minute?”

Pedroia stared at me suspiciously for a couple of seconds, and then declared – in as condescending a manner as possible, every syllable spaced out for emphasis: “No. I don’t. I’m trying to get ready for the big-league-baseball-game.”

I hung around the field for a few minutes trying to recover my dignity before ambling up to the press box to watch the game.

Unfortunately, for every one page of amusing run-in with Dustin Pedroia, there are two pages painfully recapitulating how logarithms work.

Silver also spent a few years wiping the floor with amateur (non-stat-minded) online poker players in the early period of the online poker boom, winning about $400,000 over three years. Once Congress shut down online poker, he turned his attention to politics, developing his blog FiveThirtyEight and a proprietary method of aggregating polling results. After wiping the floor with professional (but still non-stat-minded) political pundits with his 2008 election predictions, he “relaunched” the blog under a NYTimes.com domain and the rest is history.

Silver now wears the mantle of Math God the way Eddie Van Halen wore Guitar God in the 1980s; *The Signal and the Noise* is a triumph of nerdism. He is leading the van of nerds, though surprisingly he is a little too far out in front for my tastes. But that is just as well. Thanks to earthquake-predicting toads and political pundits untethered by “facts,” it is unfortunately necessary for someone as serious and thorough as Nate Silver to keep them at bay. In the week before the 2012 election, Silver’s prediction of Obama’s chances to win reelection was 91.1%. If Mitt Romney had won, we certainly would have seen the chattering class declare the result the end of science, math, and Nate Silver. But that’s not the way it works. I predict there is a 90% chance that this review will end on a bad prediction-based pun, but that’s the thing about predictions: if that 10% chance comes to pass it doesn’t mean you were wrong.

____

**Jeffrey Eaton** is a fundraiser, amateur photographer, and *Open Letters Monthly* editor-at-large. He lives in Washington, D.C.