Blanchard (IMF) on flawed macroeconomic models and the real dangers out there

Olivier Blanchard is the chief economist of the IMF (and a professor of economics at MIT) who two years ago admitted that the IMF had recommended the wrong austerity policies because its recommendations rested on incorrect estimates of fiscal multipliers. The IMF consequently withdrew its controversial recommendations on rapid fiscal consolidation and deleveraging. The IMF has since pleasantly surprised us a few more times, for instance by acknowledging that public debt does not reduce economic growth, that inequality reduces economic growth while redistribution does not, and so on.

In the September issue of Finance & Development, Blanchard examined American macroeconomists and their benign attitude toward crises, which he attributes to two things: (1) flawed macro models built on rational expectations, and the resulting flawed habit of thinking about crises as short-run oscillations, and (2) a 25-year stretch of relatively calm economic dynamics in the advanced economies, which lulled us all. Blanchard therefore recommends diversifying models, taking nonlinearities into account, and incorporating finance into macro models. The second part of his piece concerns the "dark corners" out there that we failed to watch for, and the macroeconomic policies appropriate for climbing out of those corners into safer waters.

Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment.

The benign view reflected both factors internal to economics and an external economic environment that for years seemed indeed increasingly benign.

Start with internal factors. The techniques we use affect our thinking in deep and not always conscious ways. This was very much the case in macroeconomics in the decades preceding the crisis. The techniques were best suited to a worldview in which economic fluctuations occurred but were regular, and essentially self-correcting. The problem is that we came to believe that this was indeed the way the world worked.

To understand how that view emerged, one has to go back to the so-called rational expectations revolution of the 1970s. The core idea—that the behavior of people and firms depends not only on current economic conditions but on what they expect will happen in the future—was not new. What was new was the development of techniques to solve models under the assumption that people and firms did the best they could in assessing the future. (A glimpse into why this was technically hard: current decisions by people and firms depend on their whole expected future. But their whole expected future itself depends in part on current decisions.)
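The circularity described in parentheses, where current decisions depend on the expected future, which itself depends on how current decisions are made, means that solving a rational-expectations model is a fixed-point problem. A minimal sketch (my illustration, not from Blanchard's article, using a hypothetical constant-dividend asset-pricing rule) makes the idea concrete: we search for a price that is consistent with the forecast it implies.

```python
# Rational expectations as a fixed point (toy example, not from the article).
# Pricing rule: p = beta * (E[p'] + d).  With a constant dividend d and a
# stationary solution, the expected future price E[p'] equals p itself, so
# we iterate until the price implied by the forecast equals the forecast.

def solve_rational_expectations(beta=0.95, dividend=1.0, tol=1e-10):
    """Iterate p -> beta * (p + d) until the mapping reaches its fixed point,
    i.e. a price consistent with its own rationally expected future."""
    p = 0.0
    while True:
        p_next = beta * (p + dividend)  # price implied by expecting p tomorrow
        if abs(p_next - p) < tol:
            return p_next
        p = p_next

p = solve_rational_expectations()
print(round(p, 6))  # converges to beta*d/(1-beta) = 19.0
```

Because beta < 1 the mapping is a contraction, so the iteration converges; the technical difficulty Blanchard alludes to arises when the model has many state variables and shocks, not in this scalar case.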

These techniques however made sense only under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms (and the econometricians who apply statistics to economics) could understand their nature and form expectations of the future, and simple enough so that small shocks had small effects and a shock twice as big as another had twice the effect on economic activity. The reason for this assumption, called linearity, was technical: models with nonlinearities—those in which a small shock, such as a decrease in housing prices, can sometimes have large effects, or in which the effect of a shock depends on the rest of the economic environment—were difficult, if not impossible, to solve under rational expectations.
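Linearity, as defined here, can be shown in a toy simulation (my own illustration, with made-up parameters, not Blanchard's): in a linear model the response to a shock scales exactly with its size, while adding one nonlinearity, a "dark corner" regime in which downturns become far more persistent, breaks that proportionality.

```python
# Toy illustration of linearity vs. a "dark corner" nonlinearity
# (hypothetical parameters; not from the article).

def linear_path(shock, rho=0.9, periods=20):
    """Linear AR(1): activity decays geometrically back to steady state,
    so doubling the shock exactly doubles the whole path."""
    x, path = shock, []
    for _ in range(periods):
        path.append(x)
        x = rho * x
    return path

def nonlinear_path(shock, rho=0.9, crisis_level=-1.5,
                   crisis_rho=0.99, periods=20):
    """Same model, except that below a threshold the downturn becomes
    far more persistent -- the economy is stuck in a dark corner."""
    x, path = shock, []
    for _ in range(periods):
        path.append(x)
        x = (crisis_rho if x < crisis_level else rho) * x
    return path

small, big = linear_path(-1.0), linear_path(-2.0)
# Linearity: the big-shock path is exactly twice the small-shock path.
assert all(abs(b - 2 * s) < 1e-12 for s, b in zip(small, big))

# Nonlinearity: the bigger shock crosses the threshold, so cumulative
# losses grow more than proportionally with the shock.
ratio = sum(nonlinear_path(-2.0)) / sum(nonlinear_path(-1.0))
print(ratio > 2)  # True: doubling the shock more than doubles the loss
```

The point of the sketch is Blanchard's: models with this kind of regime-dependence were difficult, if not impossible, to solve under rational expectations, which is why linearity was assumed.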

Thinking about macroeconomics was largely shaped by those assumptions. We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time. Instead of talking about fluctuations, we increasingly used the term “business cycle.” Even when we later developed techniques to deal with nonlinearities, this generally benign view of fluctuations remained dominant.

This state of affairs, however, would not have developed (or at least not lasted for so long) without external factors playing a role. The state of the world, at least the economic world, provided little impetus for macroeconomists to question their worldview.

From the early 1980s on, most advanced economies experienced what has been dubbed the “Great Moderation,” a steady decrease in the variability of output and its major components—such as consumption and investment. There were, and are still, disagreements about what caused this moderation. Central banks would like to take the credit for it, and it is indeed likely that some of the decline was due to better monetary policy, which resulted in lower and less variable inflation. Others have argued that luck, unusually small shocks hitting the economy, explained much of the decrease. Whatever caused the Great Moderation, for a quarter century the benign, linear view of fluctuations looked fine. (This was the mainstream view. Some researchers did not accept that premise. The late Frank Hahn, a well-known economist who taught at Cambridge University, kept reminding me of his detestation of linear models, including mine, which he called “Mickey Mouse” models.)

The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners themselves were even darker than we had imagined.

The Great Moderation had fooled not only macroeconomists. Financial institutions and regulators also underestimated risks. The result was a financial structure that was increasingly exposed to potential shocks. In other words, the global economy operated closer and closer to the dark corners without economists, policymakers, and financial institutions realizing it.

Turning from policy to research, the message should be to let a hundred flowers bloom. Now that we are more aware of nonlinearities and the dangers they pose, we should explore them further theoretically and empirically—and in all sorts of models. This is happening already, and to judge from the flow of working papers since the beginning of the crisis, it is happening on a large scale. Finance and macroeconomics in particular are becoming much better integrated, which is very good news.

But this answer skirts a harder question: How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models that we use, for example, at the IMF to think about alternative scenarios and to quantify the effects of policy decisions? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?

Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.

Source: Olivier Blanchard, Where Danger Lurks
