Perpetual puzzles in macroeconomics, or merely confusion in the minds of theorists?

Narayana Kocherlakota, former president of the Federal Reserve Bank of Minneapolis, hit the nail on the head when he took on the so-called "puzzles" in macroeconomics. Macroeconomists keep reporting new "puzzles" – the best known are the six major puzzles of Obstfeld & Rogoff (2000); today the two biggest are why QE has triggered neither a recovery nor hyperinflation, and why productivity growth has stalled despite the IT revolution. In each case, what gets declared a "puzzle" is an empirical finding that contradicts the prevailing theory.

Kocherlakota wittily concedes that these "puzzles" are really a sign of confusion, that is, of a flawed design of macroeconomic theory. Thanks to Lucas's neoclassical counter-revolution of the mid-1970s, macroeconomists saddled themselves with an incredibly restrictive framework of thought. Macroeconomic models must be micro-founded, i.e. built on entirely arbitrary assumptions (rational expectations) and on (very restrictively specified) optimization by households and firms – assumptions grounded not in actual empirical relationships but chosen subjectively and ideologically. And when researchers' empirical analyses then yield findings diametrically opposed to what the theory implies, the new findings are declared "puzzles" instead of prompting doubts about the prevailing theory.

The problem lies in the inane, extremely recursive post-Lucas theory, which for a good 40 years has forced macroeconomists to explain the real world with a picture book – to navigate the complex configuration of the modern world with a child's drawing. And for these picture books (written up, admittedly, as mathematical formulas) Nobel Prizes have been handed out over the past 20 years as if on an assembly line. It is high time we moved from kindergarten theory back to the data, back to the real world.

Academic macroeconomics is about solving a seemingly never-ending series of puzzles, as model after model fails to provide a good approximation to the data. And these are often only limited slices of the available data. Very few macroeconomists try to look at, say, inflation swap options to make sure that the model is providing a good guide to market participants’ assessments of inflation tail risks.

But the prevalence of these puzzles is actually more than a little puzzling. Here’s what I mean.

To an outsider or newcomer, macroeconomics would seem like a field that is haunted by its lack of data, especially good clean experimental data. In the absence of that data, it would seem like we would be hard put to distinguish among a host of theories with distinct policy recommendations. So, to the novice, it would seem like macroeconomists should be plagued by underidentification or partial identification.

But, in fact, expert macroeconomists know that the field is actually plagued by failures to fit the data – that is, by overidentification.
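(An editorial gloss, not part of Kocherlakota's text.) In GMM terms, identification is a matter of counting moment conditions against parameters:

\[
\mathbb{E}\big[g(x_t,\theta)\big] = 0, \qquad g:\ \mathcal{X}\times\Theta \to \mathbb{R}^{m}, \quad \theta \in \mathbb{R}^{k}.
\]

With \(m < k\) the parameters are under-identified (a whole set of \(\theta\) fits the data), with \(m = k\) they are exactly identified, and with \(m > k\) the model is over-identified: the surplus moment conditions make it falsifiable, e.g. via Hansen's \(J\)-test, and that is where the rejections read as "puzzles" show up.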

Why is the novice so wrong?

The answer is the role of a priori restrictions in macroeconomic theory. Macroeconomists use a body of theory that imposes a number of a priori parametric restrictions on households and businesses. Households maximize expected discounted utility flows, with utility functions that are required to lie in a narrowly specified class. Businesses maximize profits and engage in monopolistic competition (or perfect competition). Everyone updates their beliefs according to a priori specified rules of some kind (usually Bayesian updating).
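To see how tight these restrictions are, here is a textbook sketch of the canonical household problem (an editorial illustration with CRRA utility, not a specification taken from the post):

\[
\max_{\{c_t\}_{t\ge 0}} \ \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t), \qquad u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \ \gamma > 0, \ 0 < \beta < 1,
\]

subject to a budget constraint such as \(a_{t+1} = (1+r_t)\,a_t + y_t - c_t\), with the expectation \(\mathbb{E}_0\) formed under rational expectations and beliefs updated by Bayes' rule. The functional form, the discounting, and the updating rule are all fixed before the data are consulted.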

The mistake that the novice made is to think that the macroeconomist would rely on data alone to build up his/her theory or model. The expert knows how to build up theory from a priori restrictions that are accepted by a large number of scholars. (Indeed, in the academe, that's exactly what it means to be an expert macroeconomist.) Those restrictions are what give the models their empirical content. As it turns out, the resulting models actually end up with too much content – hence, the seemingly never-ending parade of puzzles.

My tone is more than a little mocking – but I actually think that this approach to macro made a lot of sense in the mid-1970s or early 1980s. Data and estimation were both much more expensive than they are today. It was (I think) reasonable to substitute relatively cheap theory-driven restrictions for relatively expensive data-driven information. (Although it’s a little disturbing how little empirical work underlies some of those agreed-upon theory-driven restrictions – see p. 711 of Lucas (JMCB, 1980) for a highly influential example of what I mean.)

But it’s 2016, not 1976. I think that it’s time for us to rely a lot less on theory and a lot more on data when we come to addressing questions. If we make that switch, we won’t be confronting puzzle after puzzle anymore. Instead, we’ll have to learn to live with a different problem: partial (as opposed to exact or over) identification of parameters that will translate into uncertainty about answers to questions of interest. (But so what? Chuck Manski taught our micro colleagues how to deal with partial identification many years ago.) I admit that I’m not at all sure yet what such a change would mean in a practical sense – but I’m looking forward to finding out.

Source: Narayana Kocherlakota

One response

  1. What exactly was the problem in the prevailing economic thought, or in the economic conditions of the time, or in economists themselves, that made the theory of rational expectations – now, after the crisis, denounced as a primitively misguided picture-book theory fit for kindergarten – catch on so thoroughly? That question is even more interesting than spitting on a theory that until recently so many economists, and especially young economists, wherever they were, embraced so eagerly.

    Regards, Igor
