In Wired, Felix Salmon has written an excellent article on the limits of applying statistics, mathematics, and computer algorithms to real life. The financial industry, sports betting, the assembly of sports teams, marketing targeting, espionage, ... even Barack Obama's 2012 election campaign were driven by computer algorithms that hunt for patterns in large volumes of data ("big data") and generate winning strategies on that basis. Computer algorithms look like the triumph of statistics over everyday human behavior. The only problem is that this is not true.
The beauty, or elegance, of mathematical models (for instance Li's Gaussian copula function; see also the excellent description of how it worked in practice in another Salmon piece, "The Formula That Killed Wall Street," from 2009) rests on the assumption that past behavior (or past correlations between the behavior of different phenomena) determines future behavior. On top of that, these models assume that events are normally distributed. But extreme events involving human activity and behavior are not normally distributed (as, after all, the current Nobel laureate Eugene Fama established back in 1965). That is why, for example, the Basel I rules for calculating the risk of bank portfolios, and the rating agencies' formulas for calculating the risk of individual structured financial products, failed so badly. This is one of the main reasons we fell so "scientifically" into such a deep financial mess (I will write about this in more detail some other time).
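A minimal back-of-the-envelope sketch (mine, not from Salmon's article) of how badly the normality assumption fails: if daily market returns really were Gaussian, a 5-sigma day would be a once-in-many-millennia event, and a crash on the scale of Black Monday 1987 (commonly cited as roughly a 20-sigma move in the S&P 500) would be effectively impossible. The trading-days-per-year figure and the 20-sigma characterization are conventional approximations, not data from the article.

```python
import math

def years_between_events(sigma_move: float, trading_days: int = 252) -> float:
    """Expected number of years between one-sided daily moves beyond
    `sigma_move` standard deviations, assuming normally distributed
    returns. Uses erfc() directly so the tiny tail probability does
    not underflow to zero the way 1 - cdf(x) would."""
    p = 0.5 * math.erfc(sigma_move / math.sqrt(2))  # P(one day exceeds the move)
    return 1.0 / (p * trading_days)

# Under normality, a 5-sigma day "should" happen about once every
# ~14,000 years; a 20-sigma day has a waiting time beyond the age
# of the universe. Markets nevertheless produce such days within
# single human careers, which is exactly Fama's point about fat tails.
print(f"5-sigma day:  once every {years_between_events(5):,.0f} years")
print(f"20-sigma day: once every {years_between_events(20):.1e} years")
```

The mismatch between these waiting times and observed market history is the empirical core of the argument: a model calibrated on a Gaussian world will price the very events that matter most as if they could never occur.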
There are many more problems, but the key point, as Salmon nicely puts it, is that statistically generated strategies cannot be trusted and that, without subjective judgment, they are extremely dangerous. That is why in meteorology, for instance, all computer-simulated weather forecasts are subjectively corrected on the basis of past experience and common sense, which improves them by about 25 percent. It is similar elsewhere, even in chess, where the number of possible combinations and strategies is finite. Computer algorithms are useful aids, but they cannot be the sole authority.
I find it hard to imagine any of us entrusting an ordinary surgical operation to a computerized robot, even if it had first been "loaded" with the world's entire trove of data on all similar operations to date and with a detailed scan of the human body. Why, then, would we hand finance and economics over to mathematical engineers, or to mathematically drilled financiers and economists? The fact that they know how to "calculate" something (to estimate it, to approximate it under one set of assumptions or another) does not mean they also know what it really means, or what it might bring if everyone else behaved according to the same algorithm. Let alone what happens if the fundamentals approximated by those assumptions suddenly change.
The reason the quants win is that they’re almost always right—at least at first. They find numerical patterns or invent ingenious algorithms that increase profits or solve problems in ways that no amount of subjective experience can match. But what happens after the quants win is not always the data-driven paradise that they and their boosters expected. The more a field is run by a system, the more that system creates incentives for everyone (employees, customers, competitors) to change their behavior in perverse ways—providing more of whatever the system is designed to measure and produce, whether that actually creates any value or not. It’s a problem that can’t be solved until the quants learn a little bit from the old-fashioned ways of thinking they’ve displaced.
…
After disruption, though, there comes at least some version of stage three: overshoot. The most common problem is that all these new systems—metrics, algorithms, automated decisionmaking processes—result in humans gaming the system in rational but often unpredictable ways. Sociologist Donald T. Campbell noted this dynamic back in the ’70s, when he articulated what’s come to be known as Campbell’s law: “The more any quantitative social indicator is used for social decision-making,” he wrote, “the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
On a managerial level, once the quants come into an industry and disrupt it, they often don’t know when to stop. They tend not to have decades of institutional knowledge about the field in which they have found themselves. And once they’re empowered, quants tend to create systems that favor something pretty close to cheating. As soon as managers pick a numerical metric as a way to measure whether they’re achieving their desired outcome, everybody starts maximizing that metric rather than doing the rest of their job—just as Campbell’s law predicts.
…
It’s increasingly clear that for smart organizations, living by numbers alone simply won’t work. That’s why they arrive at stage four: synthesis—the practice of marrying quantitative insights with old-fashioned subjective experience. Nate Silver himself has written thoughtfully about examples of this in his book, The Signal and the Noise. He cites baseball, which in the post-Moneyball era adopted a “fusion approach” that leans on both statistics and scouting. Silver credits it with delivering the Boston Red Sox’s first World Series title in 86 years. Or consider weather forecasting: The National Weather Service employs meteorologists who, understanding the dynamics of weather systems, can improve forecasts by as much as 25 percent compared with computers alone. A similar synthesis holds in economic forecasting: Adding human judgment to statistical methods makes results roughly 15 percent more accurate. And it’s even true in chess: While the best computers can now easily beat the best humans, they can in turn be beaten by humans aided by computers.
In finance too we’re starting to see at least the outlines of a synthesis. In September 2010, the Basel Committee came out with Basel III, and while it doesn’t fully dismantle Basel II, it does add layers of common sense on top of all the rocket science. As well as raising the required capital ratio, it sets a leverage ratio (effectively a maximum size that a bank can grow to, given the amount of capital it has) and liquidity requirements that experienced bankers know create a cushion for the whole system. Essentially, while the algorithms were given free rein under Basel II, there’s a host of overrides in Basel III that put power back where it belongs, in the hands of experienced regulators. Basel III isn’t perfect, but no international system of bank regulation could ever hope to be. In a few years’ time, if and when it gets fully implemented, it’s going to be a vast improvement on what preceded it.
Source: Felix Salmon, Wired