Ian Ayres in The Financial Times
How can it be that an incredibly stripped-down statistical model outpredicted legal experts with access to detailed information about the cases? Is this result just some statistical anomaly? Does it have to do with idiosyncrasies or the arrogance of the legal profession? The short answer is that Ruger’s test is representative of a much wider phenomenon. Since the 1950s, social scientists have been comparing the predictive accuracies of number crunchers and traditional experts – and finding that statistical models consistently outpredict experts. But now that revelation has become a revolution in which companies, investors and policymakers use analysis of huge datasets to discover empirical correlations between seemingly unrelated things. Want to hedge a large purchase of euros? Turns out you should sell a carefully balanced portfolio of 26 other stocks and commodities that might include some shares in Wal-Mart.
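The hedging example above can be sketched in a few lines. The following is an illustrative toy, not the method the article describes: it assumes a small synthetic basket of five assets (the article mentions 26) and uses ordinary least-squares regression to find the portfolio weights that best replicate the euro's returns, so that selling that basket offsets most of the risk.

```python
# Illustrative sketch (synthetic data; asset names, basket size, and the
# use of OLS are assumptions for the demo, not details from the article).
import numpy as np

rng = np.random.default_rng(0)

n_days, n_assets = 1000, 5
# Daily returns of the candidate hedging assets (stocks, commodities, ...).
basket = rng.normal(0.0, 0.01, size=(n_days, n_assets))
true_weights = np.array([0.6, -0.3, 0.2, 0.1, -0.4])
noise = rng.normal(0.0, 0.002, size=n_days)
# Target asset whose returns are partly driven by the basket.
euro = basket @ true_weights + noise

# Ordinary least squares: the basket weights that best track the euro.
weights, *_ = np.linalg.lstsq(basket, euro, rcond=None)

# Residual risk after selling the weighted basket against the euro position.
hedged = euro - basket @ weights
print(euro.std(), hedged.std())  # hedged volatility is far lower
```

The point of the sketch is the one the article makes: the weights come from empirical correlations in the data, not from an expert's intuition about which assets "belong" in a euro hedge.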
In Freakonomics, Steven D. Levitt and Stephen J. Dubner showed dozens of examples of how statistical analysis of databases can reveal the secret levers of causation. Yet Freakonomics didn’t talk much about the extent to which quick quantitative analysis of massive datasets – call it “super crunching” – is affecting real-world decisions. In fact, decision-makers in business and government are using statistical analysis to drive a wide variety of choices – and shunning the advice of traditional experts along the way.
Instead of simply throwing away the know-how of experts, wouldn’t it be better to combine super crunching and experiential knowledge? Can’t the two types of knowledge peacefully coexist? There is some evidence to support this possibility. Indeed, traditional experts have been shown to make better decisions when they are provided with the results of statistical prediction.
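One simple way the two kinds of knowledge can coexist is to treat the expert's forecast and the model's forecast as two noisy estimates and blend them. The sketch below is an assumption for illustration only (the article does not specify a method): it weights each source by the inverse of its historical error variance, a standard way to combine independent estimates.

```python
# Illustrative sketch (synthetic data; inverse-variance weighting is an
# assumed combination rule, not one taken from the article).
import numpy as np

rng = np.random.default_rng(1)

truth = rng.normal(50.0, 10.0, size=500)           # outcomes being predicted
model_pred = truth + rng.normal(0.0, 3.0, 500)     # statistical model: smaller errors
expert_pred = truth + rng.normal(0.0, 6.0, 500)    # expert judgment: larger errors

# Weight each source by the inverse of its historical error variance.
w_model = 1.0 / np.var(model_pred - truth)
w_expert = 1.0 / np.var(expert_pred - truth)
combined = (w_model * model_pred + w_expert * expert_pred) / (w_model + w_expert)

rmse = lambda p: np.sqrt(np.mean((p - truth) ** 2))
print(rmse(model_pred), rmse(expert_pred), rmse(combined))
```

When the two error sources are roughly independent, the blend is more accurate than either input alone, which is the coexistence the passage points toward: the expert is not discarded, just weighted.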