
Wednesday, January 29, 2014

The best justification for IS-LM?

IS-LM models have always left me puzzled. To me, they are the equivalent of a reduced-form regression with omitted variables and endogeneity issues. Through a lot of hand-waving, you can have any model fit the data. But what I find most bizarre is this strange obsession with justifying IS-LM models from micro-foundations. Somehow, IS-LM is taken as an ultimate truth, and one needs to reverse-engineer it to find what can explain it. The ultimate truth is the data, not the model.

Pascal Michaillat and Emmanuel Saez bring us yet another paper that tries to explain the IS-LM model from some set of micro-foundations. The main ones this time are money in the utility function and wealth in the utility function (plus matching frictions on the labor market, which are not objectionable). I find it very hard to believe that by now anybody would consider this a valid starting point. Rarely does anybody enjoy simply having money; people like having money because they can buy things with it, things that are already in the utility function, or because money facilitates transactions, something that is easy to model. The same applies to wealth. True, some people may be obsessed with getting richer just for the sake of being rich, but everyone else likes wealth for what it brings in future consumption for themselves and their heirs, and for the security it provides in the face of future shocks. All of this is easily modelled in standard models.

It seems to me this paper is a serious step back. Macroeconomists try to understand why there are frictions on markets, so that one can better determine the impact of policy on those markets. Simply sweeping everything into the utility function, where in addition one has a lot of freedom in choosing its properties, does not help us in any way. And it is wrong, because it is again some sort of reduced form that is not immune to policy changes. Suppose the economic environment becomes more uncertain. Are we now supposed to say that households suddenly like wealth more? They could also like wealth more because of changes in estate taxation or because of longer lifetimes, and these imply very different policy responses in better fleshed-out models.

I just do not get it. Maybe some IS-LM fanboys can enlighten me.

Wednesday, November 27, 2013

Policy and academic macro

Academics are in their ivory tower and have little real-world or policy impact. That is the view often conveyed by those who do not know what those academics are up to. It is also a common justification by policymakers for ignoring any advice coming from academia. I have lamented many times that politicians routinely disregard advice from scientists (including economists), particularly by focusing law-making on the means instead of on the goals. That said, I recently mentioned work arguing that Keynesian policies will always appeal more to policymakers than Hayekian ones, because they give them a reason to do something in times of crisis.

Michel De Vroey instead compares Lucas to Keynes. Lucasian macroeconomics relies a lot on internal consistency. This disciplines the theory, but it also acts as a straitjacket that is unappealing to policymakers. Keynesian theory has a lot more hand-waving regarding consistency but seems to have an answer for everything because it can cut corners (even if the answers may turn out to be wrong). The fact that it appears to be so flexible and know-it-all (like those "economists" who are willing to answer any question journalists may have, I would add) makes Keynesian theory a magnet for policymakers, especially in times of crisis. And this is why, with the last recession, macroeconomics has been declared to be in crisis: it listened to Lucas and not Keynes for three decades and did not always immediately have answers.

This is an argument written by someone in the ivory tower. Contrast this with someone involved in policymaking. A very recent interview with James Bullard seems to say the exact opposite. Policymakers, at least monetary policymakers, are very much looking to Lucasian theory for help. In his words, "there is still no substitute for heavy technical analysis to get to the bottom of these issues" (speaking of the financial crisis), and that is happening with structural, internally consistent modeling. Hand-waving does not cut it. And I agree.

Tuesday, November 5, 2013

The experimental macroeconomics of monetary policy

One important characteristic of Economics is that it is very difficult to conduct a clean experiment. While one may run little laboratory experiments with a few chosen subjects, there is always uncertainty about whether the experiment generalizes. The randomized experiments typically used in development economics are subject to the same limitations, even if their scope is larger. And in all those experiments, applicability is limited to microeconomic questions.

Oleksiy Kryvtsov and Luba Petersen venture into experiments directly applicable to macroeconomic policy, and more precisely monetary policy. Monetary policy has bite when there are some frictions, among them expectation formation. Their idea is thus to see how people form inflation expectations in a laboratory setting, within the context of a standard new-Keynesian model. In that model, with economic agents having rational expectations, monetary policy can reduce macroeconomic volatility by at least two-thirds. With the bit of irrationality exhibited by participants in the experiment, the reduction is still about half, and thus important. The model is a Woodford-style economy where participants have to provide updates on inflation and output-gap expectations, which the observer can compare against rational-expectations ones. People learn about changes to fundamentals and can draw on past history. In other words, it is as if they lived in the Matrix: they are fed information and are supposed to behave within the confines of a virtual world.
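
To give a concrete, if much simplified, sense of what is being compared, here is a minimal sketch assuming a textbook three-equation new-Keynesian model and a standard Taylor rule with illustrative parameter values, nothing taken from the paper itself: the same economy is simulated once with model-consistent (rational) expectations and once with naive backward-looking forecasts of the kind experimental subjects often produce, and the resulting volatilities are compared.

# Minimal sketch (not the authors' code): output-gap and inflation volatility in a
# textbook New Keynesian economy under a Taylor rule, comparing rational
# (model-consistent) expectations with naive backward-looking forecasts.
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
beta, sigma, kappa = 0.99, 1.0, 0.3      # discounting, risk aversion, AS slope
phi_pi, phi_x = 1.5, 0.5                 # Taylor rule coefficients
u = rng.normal(0, 1, T)                  # demand shocks
e = rng.normal(0, 1, T)                  # cost-push shocks

def simulate(naive):
    x = np.zeros(T); pi = np.zeros(T)
    for t in range(1, T):
        if naive:   # expectations are simply last period's outcomes
            Ex, Epi = x[t-1], pi[t-1]
        else:       # with iid shocks, rational expectations of t+1 are zero
            Ex, Epi = 0.0, 0.0
        # Solve the static system for x_t and pi_t given expectations and the rule
        denom = 1 + phi_x / sigma + phi_pi * kappa / sigma
        x[t] = (Ex + Epi * (1 - phi_pi * beta) / sigma
                + u[t] - phi_pi * e[t] / sigma) / denom
        pi[t] = beta * Epi + kappa * x[t] + e[t]
    return x.std(), pi.std()

for label, naive in [("rational", False), ("naive", True)]:
    sx, spi = simulate(naive)
    print(f"{label:8s}  sd(output gap)={sx:.2f}  sd(inflation)={spi:.2f}")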

This is very interesting and innovative stuff. I must concede though that I still have not bought into the Woodford model. I cannot understand how one can talk about monetary policy in a model with supposed fundamentals when there is no money in it.

Tuesday, October 15, 2013

Memorable goods

In macroeconomics, one distinguishes between non-durable and durable consumption goods. This distinction is important, as the cyclical behavior of the two is very different. Durables are very volatile, as households like to postpone their acquisition in recessions. Non-durables, however, are extremely smooth. The latter are what most models have in mind when thinking about consumption, while the former are more like investment goods, but at the household level.

Rong Hai, Dirk Krueger and Andrew Postlewaite think we should add a third category: memorable goods. These are goods that may not last long physically, but we keep good memories of them and thus they continue to provide utility in the future. In essence they are also durable goods, but they are not counted as such in national accounting. One example the authors provide is Christmas gifts, whose memories last through the year. The same applies to vacations, going out, clothes, and jewelry. Using the Consumer Expenditure Survey, the authors find that memorable goods lie somewhere between durables and non-durables in terms of cyclical properties. As they account for about 14% of outlays, their presence matters quantitatively. In fact, they can fully explain some observed deviations from the permanent income hypothesis. A paper to remember and cherish for a long time.
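
The mechanism is easy to illustrate. Here is a toy sketch, with made-up numbers and not the authors' specification, of why a memorable good behaves like a durable even though the physical good is gone: spending builds a stock of memories that decays slowly and keeps entering utility.

# Illustrative sketch (not the authors' exact specification): a memorable good
# is bought once but, like a durable, keeps yielding utility afterwards through
# a stock of memories that depreciates slowly.
import numpy as np

T, delta = 12, 0.3                      # months, monthly decay of the memory stock
spend = np.zeros(T); spend[2] = 1.0     # one vacation bought in month 2
memories = np.zeros(T)
for t in range(T):
    carry = memories[t-1] if t > 0 else 0.0
    memories[t] = (1 - delta) * carry + spend[t]

# A pure non-durable only matters in the month it is consumed;
# the memorable good still contributes utility months later.
print(np.round(memories, 2))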

Thursday, September 26, 2013

The Lucas Critique, DSGE models and the Phillips Curve

The Lucas Critique of 1976 has been a major motivation behind the building of RBC models, the follow-up DSGE models, as well as the structural estimation of these models. The idea was that reduced-form elasticities were not immune to policy variations, yet those elasticities were being estimated to determine the impact of policy in the first place. The resulting bias reduced the trust in the Phillips curve. The structural models, however, had and still have at their core supposedly invariant parameters that describe some fundamentals of the economy. But it turns out that some of those are not invariant over time. I recently discussed the case of the labor income share (here). And misspecification can be problematic in the estimation of such structural models, with possibly important consequences.

Samuel Hurtado tries to sort that out by including parameter shifts in the estimation of a standard DSGE model, but misspecifying it in such a way that it ignores this shift. Using data from the 1970s, he then shows that the policy responses from his model look surprisingly close to those of a reduced-form Phillips curve. In other words, it seems the DSGE model without the parameter shift is just as misspecified as the old Phillips curve. What this means is that one has to either include ad hoc parameter shifts or go even deeper into the fundamentals to understand why and how these parameter shifts are occurring. The latter gives an even stronger meaning to the Lucas Critique.

Tuesday, May 14, 2013

Reduced form welfare

It is not uncommon to find theory papers that assume quadratic utility or loss functions. They are the most tractable functions that allow one to find an optimum, yet there is no reason to believe they have anything to do with reality. If you are designing an optimal policy where trade-offs are important, the results hinge quite a bit on the functional forms you choose.

Jasper Lukkezen and Coen Teulings look at optimal fiscal policy and go a step further. They attach a VAR (vector autoregression) to a quadratic welfare function. Not only do they assume an analytically tractable but very likely unrealistic welfare function, they also assume the rest of the economy is entirely linear, with relationships that are policy invariant (it is a VAR). For their application, welfare is determined over GDP and the unemployment rate, which may be fine for the loss function of a policymaker but has nothing to do with the well-being of economic agents. Those care about risk, uncertainty, consumption and time off work, all of which are absent from the model. Hence I do not really understand what the results mean, especially as the optimal policy rules are all over the place. A very confusing paper.
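
For concreteness, here is a bare-bones sketch of the kind of structure being criticized, with entirely made-up coefficients rather than the authors' estimates: a linear, policy-invariant law of motion for GDP growth and unemployment, a fiscal instrument, and a quadratic loss over the two variables, evaluated under simple feedback rules.

# Bare-bones sketch of the structure being criticized (made-up coefficients,
# not the authors' estimates): a VAR-type linear law of motion for GDP growth
# and unemployment, a fiscal instrument g, and a quadratic loss over the two.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, -0.3],          # persistence of [GDP growth, unemployment]
              [-0.2, 0.8]])
B = np.array([0.4, -0.2])           # impact of the fiscal instrument g
beta = 0.99                         # discount factor in the loss function

def expected_loss(k, T=2000):
    # Quadratic loss under the countercyclical rule g_t = -k * (last GDP growth)
    y = np.zeros(2)
    loss = 0.0
    for t in range(T):
        g = -k * y[0]
        y = A @ y + B * g + rng.normal(0, 0.5, 2)
        loss += beta**t * (y[0]**2 + y[1]**2)
    return loss

for k in (0.0, 0.5, 1.0):
    print(f"k = {k}: discounted quadratic loss = {expected_loss(k):.1f}")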

Thursday, March 28, 2013

Is money a factor of production?

An easy trick question to ask students about factors of production is whether money is one. Of course it is not, unless you consider burning it to fuel an oven. A factor of production is an input to the production process, such as capital, labor, raw materials, energy, etc. Money is only a facilitator in the acquisition of those goods. And if money or credit are constraining production, this belongs in a separate constraint, not in the production function.

Why do I mention this? Because money is occasionally put in a production function, and Jonathan Benchimol even makes it the focus and title of his paper. Why does he do that? He wants to estimate a New-Keynesian model and see whether money would matter in such a way. It does not. But who could really blame him for trying, as these models either have money in the utility function (few people enjoy money per se; most people enjoy what you can do with it, and that is already in the utility function) or no money at all (and still manage to draw lessons for monetary policy). In the kingdom of the blind, the one-eyed man is king.

Saturday, February 9, 2013

And what if the Fed were to make a loss?

Whether or not you think the Fed's actions have been successful in pulling the US out of a deeper recession, you have to admit that the gigantic increase in its balance sheet has been hugely profitable: US$88.9 billion last year, US$79.3 billion the year before. Despite what conspiracy theorists want to believe, this money is not going into the pockets of private bankers, but to the US Treasury, which is coming to rely on it in these trying budgetary times.

But these profits are not going to last forever. When the economy does better, interest rates will have to be brought back to saner, normal levels. This will happen by selling the assets the Fed has accumulated, and it will happen at a loss, a substantial loss. Who is going to pay for it? Indeed, the Fed is not provisioning for losses, first because it has never made a loss, and second possibly because the law may prevent it from doing so. So if it makes a loss, what is going to happen? It could just print money to cover it, but that would run counter to the very policy it is trying to implement. Or the US Treasury could cover the losses. I am not quite sure that it stands ready to do so. And in such a circumstance, conspiracy theorists would have a field day.

I have not seen anybody mention anything about the exit strategy of the Fed, so this is all personal conjecture. Am I missing something? The only positive aspect I see is that this seems to be an interesting revenue-smoothing mechanism for the government. Or, once more, the Fed doing fiscal policy instead of the government.

Wednesday, January 23, 2013

Reconciling macro and micro estimate of the Frisch labor supply elasticity

If you manage to get a labor economist and a macroeconomist to talk to each other, invariably the conversation will turn to the fact that macroeconomists use a Frisch elasticity of labor supply that is much higher than what microeconomic labor studies seem to reveal. And both will absolutely stand their ground and treat the other with disdain. This mutual frustration has gone on for a long time, as it has been extremely difficult to reconcile micro and macro estimates. One solution is to accept that they are going to be different, even when estimated with the same dataset (see previous post). But that does not seem to have settled the debate.

William Peterman points out that microeconomic estimates are usually drawn from a sample of white married male heads of household. These are probably the least flexible in the labor force, so you should not be surprised that their working hours are little affected by wage changes. It is a different story for females and dependents, and once you include them in the sample, voilà, you have the typical macro estimate of the elasticity, especially when the extensive margin (working or not working) is taken into account. In other words, macro and micro labor people are not talking about the same elasticity, so no wonder they are not getting the same estimates.
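
The arithmetic behind the reconciliation is simple. Here is a back-of-the-envelope sketch with hypothetical numbers, not Peterman's estimates: total hours are employment times hours per worker, so the aggregate elasticity adds an extensive-margin (participation) response that is invisible in samples of continuously employed married men.

# Back-of-the-envelope sketch with hypothetical numbers (not Peterman's estimates).
intensive = 0.3    # % change in hours per worker after a 1% temporary wage rise
extensive = 0.9    # % change in employment, mostly secondary earners (assumed)

# ln(total hours) = ln(employment) + ln(hours per worker), so the two add up:
aggregate = intensive + extensive
print(f"micro-style elasticity: {intensive}, macro-style elasticity: {aggregate}")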

Thursday, January 17, 2013

Large GDP shocks are permanent

With fiscal policy in complete disarray and unpredictable in the United States, economic policy is currently limited to monetary policy. But even there, it is not blindingly obvious what the Federal Reserve should do. If economic activity is below potential (and is forecast to remain so beyond the "long and variable lags" it takes for monetary action to have an impact), then monetary easing is in order. Define potential, though, and there the disagreement starts. If you look at the evolution of GDP, you cannot help thinking that it went through a permanent downward shift and is now chugging along at the usual growth rate, simply a step below. This would mean we are ready to get off zero interest rates:



That would go against the idea that there are no permanent shifts in GDP. But while there are usually no such shifts, maybe there are rare circumstances where they happen. Mehdi Hosseinkouchack and Maik Wolters test for a unit root in US GDP not only at the conditional mean but also at the tails of the distribution, using a quantile-autoregression-based unit root test. And they find that sharp declines in output, like the one we recently experienced, do indeed look permanent. We should therefore not expect GDP to get back on its previous path, and thus not treat the latter as our current potential GDP. Would the FOMC believe this? I doubt it.
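
The mechanics of the test are easy to sketch. Here is a rough illustration with simulated data standing in for GDP, not their test statistics or critical values: run a quantile autoregression of output on its own lag and compare the persistence coefficient at the median with the one in the left tail, where the large negative shocks live. In their application, it is the persistence at the lower quantiles being indistinguishable from one that suggests large declines are never undone.

# Rough sketch of a quantile autoregression (simulated stand-in data, not their test).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):   # stand-in output series with occasional large negative shocks
    y[t] = 0.02 + 0.95 * y[t-1] + rng.normal(0, 1) - 2.0 * (rng.random() < 0.03)

X = sm.add_constant(y[:-1])          # regress y_t on a constant and y_{t-1}
for q in (0.05, 0.5, 0.95):
    rho = sm.QuantReg(y[1:], X).fit(q=q).params[1]
    print(f"quantile {q:.2f}: estimated persistence = {rho:.3f}")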

Thursday, November 8, 2012

Here we go again: ABM versus DSGE

The last recession has led to a lot of questioning of the economics profession and in particular macroeconomics. Sadly, a lot of this is rather ill-informed, including from within the profession. It is a fact that the most popular models have unique equilibria, but I remain unconvinced that what we have observed is the economy switching from one equilibrium to another. Models that include no significant banking sector were the norm, but others were available when the need came, and more were quickly developed.

But what irks me most is the claim that because DSGE models have somehow failed, they need to be scrapped altogether and replaced by agent-based models, as last argued by Giorgio Fagiolo and Andrea Roventini. First, you want to improve on what we have, not throw out the baby with the bathwater. When the AIDS epidemic erupted, did we throw the whole medical profession under the bus and start from scratch with a new medical paradigm (say, aromatherapy)? No, we devoted a lot of resources to understanding what was happening, with the current scientists using their established methods. With this recession, some vocal people have called to scrap most of existing macroeconomics, defund research and even defund the statistical agencies that try to understand what is going on.

Second, the claim that agent-based computing is any better is ludicrous. Modern macroeconomic theory abounds with models featuring heterogeneous agents, learning, information issues, and perturbations. While agent-based models can potentially offer all this, they do it in a very ad hoc fashion. The modeler has to set how agents behave, and you can obtain virtually any result depending on what you assume. Sadly, very little effort is devoted to relating those assumptions to anything found in reality. Calibration to data is virtually absent, thus nothing can be learned. Unless you put some discipline in those models, they are useless. And once they have that discipline, I suspect they are going to be quite close to heterogeneous-agent DSGE models.

PS: Beyond having an old-fashioned view of the standard DSGE model, Fagiolo and Roventini portray the DSGE model as a three-equation IS-LM model with Calvo pricing. That is definitely not a DSGE model. And ABM models are only now starting to include a very simple banking sector. So much for claiming to be able to answer current questions.

Wednesday, October 31, 2012

How to measure the monetary stance when the interest rate is zero

The United States has interest rates close to zero, and they will stay like this for a few more years according to the Federal Reserve. This has also been the case in Japan for more than a decade. When the interest rate is not informative, it becomes very difficult to establish when central bank policy is tight or loose. A Taylor rule may tell you that interest rates should be negative, but because they cannot be, we cannot measure the impact of the unconventional tools the central bank may have used.

Leo Krippner finds a way to tease the monetary stance out of the yield curve. The issue is that interest rates cannot go negative no matter what the central bank does, because there is always the option to hold cash instead of bonds. This gives Krippner two ideas. First, one can decompose a bond into an option to hold cash and another security, which may have a negative return (a shadow interest rate). The return of this security measures the monetary stance. Second, the yield curve can help in pricing the option: for example, if long yields are very low, the option has a lot more value than if the yield curve is steep.
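
A stylized version of the first idea, and emphatically not Krippner's actual two-factor implementation: with the option to hold cash, the observed short rate is max(shadow rate, 0). If the future shadow rate is normally distributed with standard deviation sigma, the expected observed rate is s*Phi(s/sigma) + sigma*phi(s/sigma), so an observed forward rate stuck near zero can be inverted into a negative shadow rate. The numbers below are assumptions chosen only for illustration.

# Stylized sketch (not Krippner's model): invert an observed near-zero forward
# rate into a shadow rate using E[max(s,0)] = s*Phi(s/sigma) + sigma*phi(s/sigma).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def expected_observed_rate(shadow, sigma):
    z = shadow / sigma
    return shadow * norm.cdf(z) + sigma * norm.pdf(z)

sigma = 0.02                      # assumed volatility of the shadow rate
observed_forward = 0.002          # a forward rate stuck near zero (0.2%)
shadow = brentq(lambda s: expected_observed_rate(s, sigma) - observed_forward,
                -0.20, 0.20)
print(f"implied shadow rate: {shadow:.2%}")   # comes out well below zero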

This decomposition is then carried out for the United States. The results are quite fascinating. For example, over the past five years the shadow interest rate has been around -5%, meaning that the Fed is doing a lot to help the economy. Whether this is enough is another question, but it does not look like the Fed is doing nothing effective. Also, one can easily match movements of the shadow interest rate with actions of the Fed. Sadly, these actions seem to have rather short-lived impacts, beyond keeping the shadow interest rate at roughly -5%.

Monday, October 22, 2012

To log-linearize or not to log-linearize?

Some recent research has shown that there is a free lunch lying there for fiscal policy when interest rates are constrained by the zero lower bound, in particular Eggertsson-Krugman and Christiano-Eichenbaum-Rebelo: the fiscal multiplier is larger than one, and a labor tax increase can even raise employment. But there is also a fundamental principle in Economics: always be suspicious of free lunches.

Anton Braun, Lena Mareen Körber and Yuichiro Waki show that the research above is all humbug. These new-Keynesian models are solved by log-linearizing around a steady state with stable prices. There are two problems with that: 1) the fact that prices do change implies that there is a resource cost in these models, due to either price dispersion or menu costs, depending on how you model the source of price rigidity; 2) log-linearization by definition implies a unique equilibrium. The sum of the two means that the extant literature has been approximating around the wrong steady state and possibly looking at the wrong equilibrium.

Why? The cost of price changes alters the slope of the aggregate supply curve, and this depends on the size of the shocks hitting the economy once you look at a non-linear solution of the model. Policy outcomes then look much more like those from an environment where there is no zero lower bound on the interest rate. That is, a tax increase reduces employment and the fiscal multiplier is close to one. To get the other, more publicized result, one needs a price markup on the order of 50%, which is wildly unrealistic.
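
The first point, that the resource cost of price dispersion is second-order around zero inflation, is easy to see numerically. The sketch below uses textbook Calvo steady-state algebra, not the paper's model: the output lost to price dispersion is negligible at 2% trend inflation, which is why it vanishes from a log-linearized model, but it is no longer negligible when prices move a lot.

# Small numerical illustration (textbook Calvo algebra, not the paper's model):
# the output lost to price dispersion is second-order near zero inflation.
theta, eps = 0.75, 6.0          # Calvo stickiness and elasticity of substitution

def price_dispersion(pi_annual):
    P = (1 + pi_annual) ** 0.25                     # gross quarterly inflation
    return ((1 - theta) / (1 - theta * P**eps)
            * ((1 - theta * P**(eps - 1)) / (1 - theta)) ** (eps / (eps - 1)))

for pi in (0.0, 0.02, 0.10):
    d = price_dispersion(pi)
    print(f"trend inflation {pi:.0%}: output lost to dispersion = {1 - 1/d:.3%}")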

What this shows is that linearization is a nasty assumption, especially when a non-linearity is central to your case. Also, this highlights that the models punt too much on why prices are rigid. Simple rules are not sufficient. But regular readers of this blog already knew that.

Wednesday, September 26, 2012

When state-dependent pricing dominates time-dependent pricing

It should by now be obvious to anybody who has been following this blog that I do not like time-dependent pricing (aka the Calvo fairy) because it is applicable only under specific circumstances. Of course, this means that it is routinely abused because it is analytically convenient. What would it take for people to finally abandon this modeling strategy that influences so many results in the literature? The following paper?

Peter Karadi and Adam Reiff look at yet another micro-dataset (the impact of large VAT changes in Hungary) and find that prices are flexible. Nothing new here. They also find that in response to large shocks, prices react in an asymmetric fashion, depending on whether the shocks are positive or negative. Calvo models are notoriously inadequate for large shocks, by construction, but they are also not capable of generating any asymmetry. Take a state-dependent pricing model, and it can easily replicate these features of the data. Karadi and Reiff thus declare menu-cost pricing the clear winner. I am not sure I would rule out information or search frictions like they do, however.
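
The contrast is easy to see with a toy calculation, stylized and not Karadi and Reiff's model: after a large cost shock, state-dependent (menu-cost) firms adjust whenever their price is far enough from the new optimum, so nearly all of them adjust at once, while Calvo firms adjust with the same fixed probability no matter how big the shock is.

# Stylized illustration (not the paper's model): adjustment shares after a large shock.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
gaps = rng.normal(0, 0.02, n)        # initial log price gaps from the optimum
shock = 0.10                         # a large VAT-style cost increase
gaps_after = gaps - shock            # everyone's price is now far too low

menu_cost_band = 0.05                # adjust only when the gap exceeds the band
share_sdp = np.mean(np.abs(gaps_after) > menu_cost_band)
share_calvo = 0.25                   # Calvo: fixed adjustment probability

print(f"share adjusting, state-dependent: {share_sdp:.0%}; Calvo: {share_calvo:.0%}")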

Wednesday, September 19, 2012

Prices are even less sticky when looking at households

Whether prices are sufficiently rigid to matter is a somewhat unsettled debate I have occasionally discussed on this blog. This is mostly a measurement issue. Once we know what the data say, we should know what model to use and how to calibrate it. The models that use price rigidity, the New-Keynesian ones, usually apply the concept in a rigid manner. For one, they use the much-criticized Calvo fairy assumption. But they also assume that the same rigidity applies to the prices at which firms sell their goods and to the prices at which households (or other firms) buy their goods.

As Olivier Coibion, Yuriy Gorodnichenko and Gee Hee Hong show, reality is quite far from this last assumption. They use data from retailers in various US metropolitan areas which comprise both prices and quantities. They find that households easily switch retailers and take advantage of temporary sales. How much they do this varies over the business cycle, with an interesting implication: the effective household inflation rate is lower when unemployment is high. This also implies that monetary easing becomes more difficult, because household-level prices adjust 2.5 times faster than retailer-level prices.
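
To see why household-level prices can move even when posted prices do not, here is a toy example with made-up numbers, not the authors' data: posted prices at two retailers are unchanged, but a shift of purchases toward the cheaper store lowers the effective price households pay.

# Toy illustration (made-up numbers, not the authors' data).
posted = {"store_A": 2.00, "store_B": 1.60}      # posted prices, unchanged

shares_boom = {"store_A": 0.7, "store_B": 0.3}   # convenience shopping
shares_bust = {"store_A": 0.4, "store_B": 0.6}   # more bargain hunting

def effective_price(shares):
    return sum(shares[s] * posted[s] for s in posted)

p0, p1 = effective_price(shares_boom), effective_price(shares_bust)
print(f"effective price falls by {(p0 - p1) / p0:.1%} with no posted-price change")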

The response of the output gap to a monetary stimulus is, however, almost identical to that in a model without retailer switching. I suspect this is due to the fact that the model still uses Calvo pricing. Beyond the usual criticism of this assumption, here it also implies that retailers cannot take into account that households may want to switch retailers more frequently in a recession. Too bad that a very interesting result is poorly applied in this study.

Friday, August 10, 2012

Flexible-price inflation and monetary policy

Are prices flexible or not? There is no doubt that there is some rigidity. But does it actually matter? This debate in the macroeconomic literature has surprisingly side-stepped an important aspect of price rigidity: not all prices are equally rigid. While this has been empirically demonstrated many times, it never really made it into a model.

Stephen Millard and Tom O’Grady take a standard two-sector model and label one sector "sticky-price" and the other "flexible-price." They go through the usual motions of assessing their model and find that it can account for much of what is going on in the economy. But more interesting is their idea that there is a lot of useful information in looking at the inflation rates in the two sectors. Flexible prices react faster to current output gaps, while sticky ones should contain more information about expectations of future conditions, in particular inflation. And it should not be too hard to compute these statistics with current data collection: we already have several works that categorize goods by price stickiness.
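
A toy sketch of the bookkeeping involved, with made-up categories, weights and inflation rates rather than anything from the paper: split the consumption basket by how often prices change and track two inflation rates, reading the flexible-price one as a signal of current conditions and the sticky-price one as a signal of expectations.

# Toy sketch (made-up categories and numbers, not the paper's data).
basket = {
    # item: (expenditure weight, monthly inflation, prices change often?)
    "gasoline": (0.10, 0.012, True),
    "food":     (0.15, 0.004, True),
    "rent":     (0.35, 0.002, False),
    "services": (0.40, 0.001, False),
}

def group_inflation(flexible):
    items = [(w, pi) for w, pi, flex in basket.values() if flex == flexible]
    total_w = sum(w for w, _ in items)
    return sum(w * pi for w, pi in items) / total_w

print(f"flexible-price inflation: {group_inflation(True):.2%}")
print(f"sticky-price inflation:   {group_inflation(False):.2%}")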

Wednesday, July 18, 2012

How to report Fed uncertainty

Any good research reports the qualifications and uncertainties surrounding its results. Unfortunately, the non-scientific reader is not interested in those; he wants certainties. A good example is the issue of global climate change. Of course there is some uncertainty about it, but climate scientists did not report it because the public would otherwise discredit their findings. And indeed, once it was "revealed" that global climate change is not 100% sure, an uproar resulted. It is no different in Economics.

Recently, the US Federal Reserve started publishing the differences of opinion among its FOMC members regarding its forecasts. There are good reasons for that. The market needs to understand when policy may change course because the data do not tell us enough about the future. But will the public actually listen to this piece of information? Ray Fair does not think so, because the wrong information is disseminated. Indeed, the dispersion of the individual members' forecasts carries rather little information, especially if there is the groupthink the Board is sometimes accused of, compared to the statistical variance in the forecasts of a single model. Using historical errors as a basis, one could simulate whatever model(s) the Fed uses, draw policy reactions for each potential future history, and then report the dispersion of future federal funds rates. Much better than the current dispersion of opinions. And maybe the public will look at it.
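
A sketch of the exercise Fair has in mind, with a toy model and made-up parameters rather than anything the Fed actually uses: bootstrap future shocks from historical model errors, let a policy rule react along each simulated path, and report the spread of the resulting federal funds rate instead of the spread of the members' point forecasts.

# Sketch of the exercise (toy model and made-up parameters, not the Fed's).
import numpy as np

rng = np.random.default_rng(0)
hist_errors = rng.normal(0, 0.4, (200, 2))       # stand-in for historical residuals
pi0, gap0 = 2.0, -1.0                            # current inflation and output gap
horizons, n_sims = 8, 5000
rates_at_horizon = []

for _ in range(n_sims):
    pi, gap = pi0, gap0
    for h in range(horizons):
        e_pi, e_gap = hist_errors[rng.integers(len(hist_errors))]   # bootstrap draw
        gap = 0.8 * gap + e_gap
        pi = 0.7 * pi + 0.3 * 2.0 + 0.1 * gap + e_pi
        rate = max(0.0, 2.0 + pi + 0.5 * (pi - 2.0) + 0.5 * gap)    # Taylor rule, ZLB
    rates_at_horizon.append(rate)

lo, med, hi = np.percentile(rates_at_horizon, [10, 50, 90])
print(f"funds rate in {horizons} quarters: 10%={lo:.1f}  50%={med:.1f}  90%={hi:.1f}")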

Thursday, June 7, 2012

What is wrong with European central banking: the view from Cyprus

These are times when policy coordination between central banks and fiscal authorities would seem particularly welcome. For one, monetary policy, which seems to be the only sustained and coherent policy, cannot right the ship beyond the short term, and the short term is shorter than the current crisis. For two, fiscal authorities are completely stuck in political wars at exactly the wrong moment. Upcoming elections in Europe and the United States certainly do not help. Central bankers have been rather frustrated with the political climate, yet they are still willing scapegoats to deflect the furor of the public over unpopular policies.

But sometimes enough is enough. One such case has been the open criticism by the central banker of Cyprus, who railed against the ineptitude of his government, which has completely ignored his advice. Cyprus may not seem like a big deal, yet it is a major banking center that may go down with Greece, depending on how things unravel there. And this central banker, whose mandate was not renewed, is not a nobody either: he was previously a senior official at the Board of Governors of the US Fed.

In probably his last paper while in Cyprus, Athanasios Orphanides summarizes all that is wrong with central banking in Europe (Cyprus is part of the monetary union). He recognizes that banking supervision must be taken much more seriously by central bankers, as the stability mandate that was typically meant for prices, and sometimes employment or output, is now interpreted to include the financial sector as well. Of course, this implies that central banks need to take more responsibility for supervising the financial sector, all the way to regulating individual institutions, an authority they do not always have at this point. But foremost, Orphanides argues that the biggest liability is economic governance. This is especially important within a monetary union where several governments need to agree. A more uniform fiscal policy would help tremendously, especially when monetary policy, in its more rudimentary form, is applied uniformly across the union. Worse, problems from fiscal policies that are not sustainable in the long term are magnified in a monetary union. You need rules and you need to adhere to them. Politicians are rather bad at this; central bankers much better.

Friday, March 30, 2012

Do people know what the Taylor Rule is?

The current doctrine in monetary policy is that central banks need to be very clear about their policy rule so that economic agents can form good expectations about the future path of interest rates and inflation. Whether this communication policy works can be questioned, at least from anecdotal evidence, as many people are currently convinced that inflation is close to double digits and that we are heading towards hyperinflation. But that is only anecdotal. There is better evidence from the Michigan Survey of Consumers, which asks questions about expectations of future economic conditions.

Carlos Carvalho and Fernanda Nechio ask whether those expectations are consistent with the Taylor rule that underlies much of monetary policy (at least when the nominal interest rate is not bound by zero). It turns out that by and large they are, except for respondents with lower education. Surprisingly, the Survey of Professional Forecasters yields less consistent predictions of interest rates and inflation, as if those professionals did not believe in the Taylor rule. It is rather puzzling that the public knows monetary policy better than the professionals, or is it that the Fed manages to fool the general public, but not the forecasters?
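
The consistency check itself is straightforward to sketch. The snippet below uses simulated survey answers, not the Michigan data: under a Taylor rule, a respondent who expects higher inflation or lower unemployment should also expect higher interest rates, so one regresses expected rate changes on expected inflation and unemployment changes and checks the signs and rough magnitudes.

# Rough sketch of the consistency check (simulated answers, not the survey data).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
d_pi = rng.normal(0, 1, n)                # respondents' expected inflation changes
d_u = rng.normal(0, 1, n)                 # respondents' expected unemployment changes
noise = rng.normal(0, 1, n)
d_rate = 1.5 * d_pi - 1.0 * d_u + noise   # answers loosely consistent with a rule

X = np.column_stack([np.ones(n), d_pi, d_u])
coef, *_ = np.linalg.lstsq(X, d_rate, rcond=None)
print(f"inflation coefficient {coef[1]:.2f} (>0), unemployment {coef[2]:.2f} (<0)")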

Monday, March 26, 2012

The perpetual lag of macroeconomics teaching

When it comes to teaching, nobody likes revamping lecture notes and reforming a curriculum. This is especially true when one is not really conversant in the new material. While I think an Economics PhD should be able to teach almost any undergraduate Economics class, one is still drawn to the path of least resistance and teaches only what one knows, even when it is outdated. One consequence is that undergraduates get to learn what the profession discredited sometimes decades ago. Nowhere is that more true than in Macroeconomics, which went through a transformation in the 1970s and 1980s that to a large extent shelved IS/LM, yet the latter is still the core of undergraduate teaching. The fact that those teaching today were themselves taught IS/LM is the prime reason, and textbook writers accommodate this.

Some have called current macroeconomic theory wrong in light of the current crisis, and have concluded that there is a need for a change in research paradigm and thus also in teaching. I am not sure about this claim; I would rather call macroeconomic research before the crisis incomplete rather than wrong. As for the teaching reform, that will take ages. One way is to somehow fix the broken IS/LM model to make it more amenable to current events, as Peter Bofinger tries by introducing involuntary unemployment that does not necessarily come from wage rigidity. There have been other such attempts, but frankly, they just make the model even less believable and impossible to teach. The true reform would be to drop IS/LM entirely from the undergraduate classroom, except for History of Economics classes.