Wednesday, October 31, 2012

How to measure the monetary stance when the interest rate is zero

The United States has interest rates close to zero, and according to the Federal Reserve they will stay there for a few more years. This has also been the case in Japan for more than a decade. When the interest rate is not informative, it becomes very difficult to establish whether central bank policy is tight or loose. A Taylor rule may tell you that interest rates should be negative, but because they cannot be, we cannot measure the impact of the unconventional tools the central bank may have used.

Leo Krippner finds a way to tease the monetary stance out of the yield curve. The issue is that interest rates cannot go negative no matter what the central bank does, because there is always the option to hold cash instead of bonds. This gives Krippner two ideas. First, one can decompose a bond into an option to hold cash and another security whose return may be negative (a shadow interest rate). The return on this security measures the monetary stance. Second, the yield curve can help price the option: for example, if long yields are very low, the option is worth much more than if the yield curve is steep.
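To fix ideas, in the shadow-rate formulation going back to Black (1995), which is the spirit of Krippner's exercise (my stylized rendering, not his full term-structure model), the observed short rate is

\[ r_t = \max\{s_t, 0\} = s_t + \max\{-s_t, 0\}, \]

where \(s_t\) is the shadow rate and the second term is the payoff of the option to hold cash; \(s_t\) can be well below zero even while \(r_t\) is stuck at zero.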

This decomposition is then carried out for the United States, and the results are quite fascinating. For example, over the past five years the shadow interest rate has hovered around -5%, meaning that the Fed is doing a lot to help the economy. Whether this is enough is another question, but it does not look like the Fed is doing nothing effective. Also, one can easily match movements of the shadow interest rate with actions of the Fed. Sadly, these actions seem to have rather short-lived impacts, beyond keeping the shadow interest rate at roughly -5%.

Tuesday, October 30, 2012

Economic growth with egoistic dictators

Western nations, and especially the United States, put a lot of effort into nation-building and democratization in developing economies. The premise is that democratic countries are more peaceful and grow richer, thus opening new markets. The problem is that while there is a correlation between democracy and the wealth of nations, the causation actually runs the other way: wealth brings democracy. If we could then pick dictators to promote growth, which should they be? The benevolent dictator may be an elegant theoretical construct, but he is difficult to find in reality.

Giacomo de Luca, Anastasia Litina and Petros Sekeris look at highly unequal societies and find that in some circumstances autocratic dictators actually generate more growth than democracy (where the median voter is the dictator). The key is that the dictator needs to be very rich, so much so that his interests overlap significantly with those of the country, and thus also with the interests of other capital owners, as long as capital ownership is highly concentrated. A wealthy median voter, though, prefers democracy. The logic is that redistribution lowers growth, thus rich capital owners always prefer dictators. With a lot of inequality, the median voter is poor and wants redistribution, which yields low growth. But if the median voter is relatively rich, she wants little redistribution and we get more growth, even more than with the dictator (who still has kleptocratic tendencies, after all).

Monday, October 29, 2012

How to infer the value of time from gasoline prices

How much do people value time? One way to look at this is to consider how much they need to be compensated to work. But this masks the disutility of effort, boredom, future payoffs, etc. And the value of time changes through the day (do not tell me lawyers reason in billable minutes all day long). In other words, the value of time varies through time and from person to person. Hence it is important to get many estimates.

Hendrik Wolff finds a subtle way to estimate it: he looks at fluctuations in gasoline prices and the speeding behavior of car drivers, using speed data from uncongested, flat portions of highway in rural Washington State. The elasticity of speed with respect to the gas price, -0.01, is very low, but by calculating the time gained through speeding one can value time at about half the gross wage. That is lower than previous estimates, which used congested highways and may have had frustration priced in. The Wolff estimates are cleaner, and they deliver the logical negative relation between gas prices and speeds, a relation that was positive in studies tainted by congestion or other external factors.
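To see how such numbers hang together, here is a back-of-envelope sketch of the logic (all figures are illustrative assumptions of mine, not Wolff's data or estimation method): drivers trade off the extra fuel burned at higher speed against the time saved, so the observed response of speed to gas prices reveals an implied value of time.

```python
# Back-of-envelope sketch, not Wolff's estimation. All numbers are illustrative assumptions.
# Idea: a driver slows down when gas gets dearer; the fuel saved versus the time lost
# at that margin reveals an implied value of time (VOT).

def mpg(speed):
    # Stylized fuel economy: efficiency deteriorates as speed rises above 55 mph.
    return 30.0 - 0.3 * (speed - 55.0)

baseline_speed = 70.0   # mph on an uncongested rural highway
gas_price = 4.00        # $/gallon
price_increase = 1.00   # $/gallon
elasticity = -0.01      # elasticity of speed w.r.t. the gas price (the paper's order of magnitude)

new_speed = baseline_speed * (1 + elasticity * price_increase / gas_price)

trip_miles = 100.0
extra_hours = trip_miles / new_speed - trip_miles / baseline_speed
gallons_saved = trip_miles / mpg(baseline_speed) - trip_miles / mpg(new_speed)

# If the driver is optimizing, the fuel savings just compensate the time lost:
value_of_time = gallons_saved * (gas_price + price_increase) / extra_hours
print(f"implied value of time: about ${value_of_time:.0f} per hour")
```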

Friday, October 26, 2012

How important is Delaware as a tax haven?

The larger OECD countries have increasingly complained that some of the smaller member countries are hurting them with their tax havens. While I have argued before that these tax havens have some benefits, foremost providing tax competition that keeps the others within reasonable bounds, there is no doubt that they hurt various collective-action efforts. But there are also tax havens within the borders of the complainants, and the prime example is Delaware, a small state that tries to attract corporations and financial companies through low taxes and little regulation.

Scott Dyreng, Bradley Lindsey and Jacob Thornock show that while the previous literature found regulatory concerns to be an important determinant of the Delaware location choice of companies or their subsidiaries, the state's tax haven status is even more relevant. If a firm adopts a so-called Delaware tax strategy, it can save 15 to 24% in state taxes, which amounts to increasing after-tax income by 1 to 1.5% on average. Given profit margins, this is huge. It is then no wonder that Delaware is often mentioned in the same breath as other world-renowned secretive tax havens. Domestically, the paper shows that the tax savings have decreased, because the other states have either tried to close loopholes or have lowered their tax rates. But there is surprisingly little international policy reaction to this, and I wonder why.
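To get a feel for the orders of magnitude, take purely hypothetical numbers (mine, not the paper's): a firm with pre-tax income of 100 that pays 30 in federal tax and 5 in state tax keeps 65; shaving 20% off the state bill then yields

\[ \frac{0.20 \times 5}{65} \approx 1.5\%, \]

which is in the range reported above.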

And for Europeans, a significant lesson to be learned from Delaware is that if you want to adopt a formula apportionment system, make sure that its formulas and rates do not differ across countries, or you will end up with some Delaware exploiting a loophole.

Thursday, October 25, 2012

Why is research about Bulgarians' sadness so sad?

We know the saying that money does not buy happiness. But while there is a significant correlation between income and measures of satisfaction, it is far from one, with some pretty significant outliers. One of them is Bulgaria, whose inhabitants are among the saddest in the world despite being "middle income."

Hernando Zuleta and Maria Draganova claim this can be explained by a set of factors. It is too cold in the winter, but I would counter that this also holds for Bulgaria's neighbors. Health matters, too, but Bulgaria fares better than its neighbors, in particular Russia (where it is even colder). Maybe the sadness can be explained by the low proportion of young people, or the lack of upward mobility of family income, or the increased inequality, or the collapse of incomes in the 1990's. But again, this all applies to the neighboring countries as well. This all sounds very speculative, but it is really what the paper is about. One could put all this in a regression and see whether it actually matters. And if Bulgaria is still an outlier, blame culture, which the authors actually do.

Wednesday, October 24, 2012

Resit exams are a bad idea

What is the best design for important exams? One attempt and you are out? One resit allowed? Two? A maximum number of attempts over all exams? If you ever have to sit through a meeting about this type of rule, you will be surprised how opinionated people are about it. I do not think it is because they can base their views on hard evidence, but rather that they like the system they (successfully) went through themselves in their studies.

Peter Kooreman asks whether allowing students to resit failed exams within the same academic year makes them learn more, the ultimate objective of an exam. His exercise is theoretical and looks at a student who does not like working but wants to pass. The probability of passing an exam depends on effort. If there is only one exam, the student provides more effort than for the first attempt when there are two chances. For the second attempt, effort should be equivalent to that for the lone exam. Thus with two attempts the probability of passing is higher, but effort is lower, and likely much lower. After all, some students get through the first exam with luck and little study, while the others get serious about it only on the second attempt.
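A minimal numerical sketch of this logic (a stylized model of mine with illustrative parameters, not Kooreman's specification): the pass probability rises with effort, effort has a linear cost, and passing is worth V.

```python
# Stylized sketch, not Kooreman's model: pass probability 1 - exp(-effort),
# linear effort cost, payoff V from passing. All parameters are illustrative.
import numpy as np

V = 5.0                                 # value of passing
effort = np.linspace(0.0, 6.0, 60001)   # grid of effort levels

def pass_prob(e):
    return 1.0 - np.exp(-e)

# Single exam: choose effort once.
single_payoff = V * pass_prob(effort) - effort
e_single = effort[np.argmax(single_payoff)]

# Resit: the second attempt (after a failure) is the same problem as the single exam,
# so its value cushions a failure on the first attempt and lowers first-attempt effort.
continuation = np.max(single_payoff)
first_payoff = V * pass_prob(effort) + (1 - pass_prob(effort)) * continuation - effort
e_first = effort[np.argmax(first_payoff)]

pass_single = pass_prob(e_single)
pass_resit = pass_prob(e_first) + (1 - pass_prob(e_first)) * pass_prob(e_single)
expected_effort = e_first + (1 - pass_prob(e_first)) * e_single

print(f"single exam: effort {e_single:.2f}, pass probability {pass_single:.2f}")
print(f"with resit:  first-attempt effort {e_first:.2f}, expected total effort "
      f"{expected_effort:.2f}, pass probability {pass_resit:.2f}")
```

With these particular numbers the overall pass rate rises while first-attempt effort drops sharply, which is exactly the trade-off described above.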

This short study misses a couple of important ingredients, though. The first is that it assumes students are risk neutral. My casual empiricism tells me that students are quite nervous about exams, indicating quite a bit of curvature. Also, students have a clear preference for passing exams earlier rather than later. Risk aversion increases the distance between the two exam schemes; impatience reduces it and could change the ordering. Finally, it would be great to see some empirical evidence, but that is likely asking for a bit too much here.

Tuesday, October 23, 2012

Better educated twins live longer

What is the secret to a long life? A healthy lifestyle, good nutrition, few worries, few risks, little stress. That is all obvious. When you look at data, lifespan has a very high correlation with income, education, and other measures of standard of living. Of course, none of this implies causation; in fact one may think that an expected longer lifespan leads people to get more education. Enter the twin studies.

Petter Lundborg, Carl Hampus Lyttkens and Paul Nystedt use twins, who have the same genes and were subject to the same starting conditions, to disentangle what leads to longer lives. Education comes out here as a clear winner. Even differences between twins like birth weight cannot make this disappear. Having more than 12 years of education gives you a bonus of 2-3 years, whether male or female.

Monday, October 22, 2012

To log-linearize or not to log-linearize?

Some recent research has shown that there is a free lunch lying around for fiscal policy when interest rates are constrained by the zero lower bound, in particular Eggertsson-Krugman and Christiano-Eichenbaum-Rebelo: the fiscal multiplier is larger than one and, paradoxically, a tax rate increase can raise employment. But there is also a fundamental principle in Economics: always be suspicious of free lunches.

Anton Braun, Lena Mareen Körber and Yuichiro Waki show that the research above is all humbug. These New Keynesian models are solved by log-linearizing around a steady state with stable prices. There are two problems with that: 1) the fact that prices do change implies that there is a resource cost in these models, due to either price dispersion or menu costs, depending on how you model the source of price rigidity; 2) log-linearization by definition implies a unique equilibrium. The sum of the two means that the extant literature has been approximating around the wrong steady state and possibly looking at the wrong equilibrium.
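To make the first point concrete, in the textbook Calvo setup (standard notation, not necessarily the paper's) the resource constraint carries a price-dispersion wedge,

\[ C_t = \frac{Y_t}{\Delta_t}, \qquad \Delta_t = \int_0^1 \left(\frac{P_t(i)}{P_t}\right)^{-\epsilon} di \;\ge\; 1, \]

and a first-order approximation around a zero-inflation steady state sets \(\Delta_t\) to one, so this resource cost vanishes exactly where the analysis needs it.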

Why? The cost of price changes alters the slope of the aggregate supply curve, and this slope depends on the size of the shocks hitting the economy once you look at a non-linear solution of the model. Policy outcomes then look much more like those from an environment with no zero lower bound on the interest rate. That is, a tax increase reduces employment and the fiscal multiplier is close to one. To get the other, much-publicized result, one needs a price markup in the order of 50%, which is wildly unrealistic.

What this shows is that linearization is a nasty assumption, especially when a non-linearity is central to your case. Also, this highlights that the models punt too much on why prices are rigid. Simple rules are not sufficient. But regular readers of this blog already knew that.

Friday, October 19, 2012

Temp work does not lead to full-time work

Companies that place temporary workers ("temps") like to advertise that temporary work with few benefits is a stepping stone to full-time work with full benefits. Whether they can back this up beyond anecdotal evidence is another question, and it may depend very significantly on the labor market in question. Sectoral practices differ widely, and in some countries the regulation of temp work actually makes it difficult to convert temp work into full-time work. In fact, some agencies even try to prevent this, as they would lose their commissions.

Joakim Hveem looks at Sweden and finds that work intermediated through temp agencies actually decreases the probability of subsequent full-time work, as if a stigma were attached. While this negative effect seems to be driven by immigrants, the result is still disturbing, in particular for women, who often rely on temp work as a stepping stone to regular work (they could instead try volunteering). The only good thing that comes out of this study is that temp work at least keeps people out of unemployment. Note, however, that the study is only about temp work intermediated through an agency. As mentioned above, these agencies may have perverse incentives. Self-intermediated temp work may fare better for later outcomes.

Thursday, October 18, 2012

The evil of patents

The litigation saga between Apple and Samsung over minute and obvious details of their respective phones makes good entertainment but has little economic value. In fact, it highlights all that is wrong with the current state of the patent system. Sadly, textbooks still teach how temporary monopolies are beneficial for innovation, yet the evidence is now overwhelming that they hurt the production of innovation, the use of innovation and well-being in general. I have blogged a few times about some cases and discussed some research in this regard.

Michele Boldrin and David Levine have written a nice summary of where we stand on the usefulness of patents. They wrote a very good book about the topic a few years ago (buy it or download it for free; the authors are consistent with their message). The new working paper provides a shorter breviary with the main arguments for and (mostly) against patents, with a few updates from the literature and case studies. The major point is that despite a huge increase in the number of newly granted patents, there is no evidence of increases in R&D expenditures or total factor productivity. In fact, there is a growing consensus that technological progress is slowing down and we should get used to slower growth in the economy. Patent law and practice are at least partly responsible for this.

Wednesday, October 17, 2012

The end of central banking as we knew it

The European Central Bank and the US Federal Reserve have massively changed their balance sheets in recent years. The first step was an increase in size, the second a change in the structure of the balance sheet, holding not only government bonds but also other securities. Some have called this last step "qualitative easing," and others have argued that it is inconsequential because the prices of the securities internalize everything relevant, à la Modigliani-Miller (most prominently Michael Woodford, whose recent Jackson Hole paper is being treated like gospel).

A counter-argument comes from Roger Farmer. Qualitative easing is a quasi-fiscal policy, because it favors a particular sector through the purchase of its assets instead of "neutral" government bonds. Also, it transfers risk from the seller ultimately to the taxpayer. This is not only welfare-improving, as it allows the authorities to fine-tune the economy; it can even be Pareto-improving despite the fact that it implies redistribution. Indeed, the policy smooths asset price fluctuations as if the yet-to-be-born were able to trade. It also removes unnecessary fluctuations in the stock market that are due to sunspots ("irrational exuberance").

The question, though, is why the central bank would be tasked with such operations. A central bank's role is to ensure the short-term health of the economy in general. Fiscal policy can redistribute across sectors and ensure a healthy long-term environment. The Fed and the ECB have resorted to such operations because of a general failure of fiscal policy. In the US, Congress is incapable of setting any sensible policy. In Europe, the EU cannot conduct fiscal policy because it has no taxation powers. Central banks are forced into a role they should not have, even if it looks optimal according to Farmer. But wait until lobbies, politicians and other rent-seekers try to influence qualitative easing.

Tuesday, October 16, 2012

Why women should volunteer

There is a strong cultural tradition of volunteer work in the United States, which is encouraged by many employers and pretty much a requirement if you want to get into good colleges and land scholarships. I am not quite sure why this is more prevalent there than in Europe, possibly because Europeans expect the state to help, whereas in the US everyone is more on his own and can only expect help from other individuals. Beyond social pressure, there may also be self-interest at work, though.

Robert Sauer shows that for women volunteer work can pay off. Imagine the following scenario: a woman drops out of the labor force to have children and raise them in their first years (remember, there is no significant maternity protection in the US). During that period, she volunteers here and there, when her schedule allows it. Once the kids are in school, she rejoins the labor force. Using the PSID, Sauer finds that every year of volunteer work increases the subsequent wage by 2.4% in full-time work, and by as much as 8.3% in part-time work. That is of course after controlling for the years of learning-by-doing lost. This is done with a structural behavioral model, which allows him to highlight why reduced-form regressions would have shown a negative effect: selection is at play, in that a different type of person is out of the labor force and into volunteering. The advice to female labor economists is thus: volunteer and estimate structural models.

Monday, October 15, 2012

Why should entrepreneurial income be taxed less progressively than labor income?

Capital income is typically taxed less, and there are good theoretical reasons for this. It has mostly to do with the fact that you do not want to discourage the accumulation of capital, which increases wages. You also want to encourage entrepreneurship, which has led to various tax credits for entrepreneurs (plus the fact that they have an easier time under-reporting income if they are self-employed). But should entrepreneurial income be taxed as progressively as other income?

Florian Scheuer says no, it should be less progressive, which is actually how it is in the data: business taxation is indeed less progressive than the taxation of individuals. This makes sense, according to Scheuer, because of adverse selection on credit markets. The argument is as follows. Individuals choose whether to become regular workers or entrepreneurs. The latter are not all equally suited for the job, and they need credit. Adverse selection on the credit markets leads to cross-subsidization from good risks to bad risks. Thus, too many bad risks enter entrepreneurship. A flatter tax schedule reverses some of this cross-subsidization.

Sunday, October 14, 2012

The Harvard Economics Department's Nobel problem

By all accounts, the Economics Department at Harvard University is the best department in the world, to the point that it gets an almost perfect score on RePEc. Yet, despite such dominance, no one on its faculty has received a Nobel Prize in a very long time. The last Harvard faculty member with a Nobel, Amartya Sen, was hired several years after his 1998 Prize. The 1997 laureate, Robert Merton, was at the Business School. You have to go all the way back to Wassily Leontief in 1973 to find the previous one.

What is the department's problem? One, it could be that it is populated with brilliant people, but not the exceptional ones who merit Nobel Prizes. Two, it could be that the standing of the department in the profession is overvalued. Three, it could be that the rankings are biased in some way, say because Harvard graduates like to cite their mentors. Four, maybe there is some curse.

The 2012 prize is going to be announced tomorrow. Which Harvard Economics faculty have a shot? The most cited economist is Andrei Shleifer, but his unethical behavior makes it impossible for him to get the prize. Robert Barro is also extremely well cited, but his citations are very often about proving him wrong. Alvin Roth is a serious candidate, but he just left for Stanford (because of the curse?). Martin Weitzman is a candidate, but if environmental economics gets it, it should first go to William Nordhaus alone. That leaves us, in my mind, with only three viable candidates: Oliver Hart, Elhanan Helpman and Martin Feldstein. Given the long list of viable candidates elsewhere (say, Tirole, Milgrom, Paul Romer, Lars Hansen, Thaler, Robert Wilson, Nordhaus, Holmstrom, Fama, Dixit, Roth, Kiyotaki, Moore, Newhouse, Grossman, Ross, Rabin, Atkinson, Deaton, Shiller, Berry), the odds remain small.

Friday, October 12, 2012

How daylight saving time burns calories

Daylight saving time was introduced to save energy. Because humans with access to electricity seem to have a natural tendency to stay up late at night and wake up when the sun is already out, changing clocks seems to coax them into moving their daily cycle closer to the natural one. As fewer lights are then on, one saves energy.

Hendrik Wolff and Momoe Makino show that humans compensate by using up more energy themselves. Looking at the periods around which we need to adjust our watches one way or the other, they find that daylight saving time leads to an immediate drop in time spent in front of the television to the benefit of outdoor activities. And this healthy behavior leads to more calories being burned, to the tune of about 200 a day. Hmm, I wonder whether similar results would be obtained by comparing people along a timezone border, and whether it would be worth shifting daylight saving time by two hours instead of one.

Thursday, October 11, 2012

The latest on growth accounting

Since the early days of the Solow growth model, we have been accustomed to the decomposition of GDP growth into the contributions of capital, labor and the "Solow residual," sometimes interpreted as technology or total factor productivity. Roughly, each contributed a third, and this distribution has changed relatively little as measurements or models have been refined, for example by adding other forms of capital, such as human capital or public capital.
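For reference, with a Cobb-Douglas technology \(Y = A K^{\alpha} (hL)^{1-\alpha}\) the decomposition reads

\[ \Delta \ln Y = \alpha\, \Delta \ln K + (1-\alpha)\, \Delta \ln (hL) + \Delta \ln A, \]

where the last term is the Solow residual and \(h\) stands for human capital per worker.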

This is revisited by Robert Tamura, Gerald Dwyer, John Devereux and Scott Baier (with an impressive data appendix), who reassemble data from 168 countries and are especially careful in constructing new estimates of human capital. Instead of adding up years of schooling and possibly years of experience, they use a human capital production function that also depends on the parents' human capital and the world frontier. In the end, the residual accounts for much less than previously thought: at most a third in one extreme specification, a tenth in the other extreme. The rest is split quite evenly between physical and human capital. What this means is that technology, institutions and whatever else you want to throw into the Solow residual account for much less than we previously thought in per capita output growth and in differences of output per capita across countries.
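A stylized version of such a human capital technology (my illustration of the idea, not necessarily the authors' exact specification) would be something like

\[ h_{t+1} = B\, s_t^{\theta}\, h_t^{\phi}\, \bar h_t^{\,1-\phi}, \]

where \(s_t\) is schooling time, \(h_t\) the parents' human capital and \(\bar h_t\) the world frontier, so that the same years of schooling build more human capital for children of better-educated parents and in countries closer to the frontier.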

Wednesday, October 10, 2012

China's biggest threat: its men

Many people in the Western world are afraid of China as a new economic superpower. To a large extent, this is because of a mistaken belief that the world economy is a zero-sum game, and any progress in China is to the detriment of the currently rich economies. Of course, rich economies mostly benefit from China's growth, as it makes some goods cheaper and opens new markets. And the richer countries are, the less they will want to get into destructive wars, if this is what you are worried about.

If there is a threat, it comes rather from within China. First, there is a huge number of undocumented internal migrants who do not have access to social services. Second, as Jane Golley and Rod Tyers describe, there is a time bomb resulting from an imbalance in the sex ratio. The surplus of men leads families to save too much in order to compete for scarce women to marry their sons to. The imbalance is so strong that regular immigration or human trafficking cannot make up for it. But the most striking menace comes from discouraged single and low-skilled men, who could amount to a quarter of all men of reproductive age by 2030 and who are prime candidates for a life of crime. And reversing such trends is going to be very difficult.

Tuesday, October 9, 2012

Voluntary pollution restrictions do not work

The literature on international environmental agreements has established that when such agreements are only of the self-enforcing kind (not imposed by a supranational entity), they cannot exceed three participants. That is certainly disappointing, as we would need much more than that to get a significant impact. This literature, however, looked at countries in a vacuum, in particular assuming the only interaction they have is through pollution. In reality they also trade with each other, and trade policy is available as an instrument as well.

This is how Thomas Eichner and Rüdiger Pethig expand the extant literature. The addition of trade allows more countries to participate in a self-enforcing agreement. But this comes at the cost of an agreement with significantly less bite. These are interesting results, but I have a hard time finding intuition for this, and the authors are not of much help. Consider this to be an appeal for clarification.

Monday, October 8, 2012

Mathematics, Econometrics and top economists' career outcomes

Some people have been complaining about the increasing mathematization of economics and how it leads to a disconnect of economics from real life (which I suppose is void of mathematics). I would argue that this is actually a good thing, first because it forces you to make rigorous arguments, second because you often need quantitative answers to questions that have qualitatively ambiguous outcomes, and third because it simply allows us to look at more complex problems. But has mathematization actually increased?

Miguel Espinosa, Carlos Rondon and Mauricio Romero look at the publications of top economists and count how many equations or econometric outputs per article they produced. The analysis for the last century shows a gradual increase in mathematization throughout, except for the number of equations, which went through a serious recession in the 1980's. And of course, econometrics only got seriously started in the 1950's. It also appears that professional success as measured by prestigious prizes is certainly linked to the use of mathematics, but not of the econometric kind.

Friday, October 5, 2012

How to make a profit through price obfuscation

The standard model of competition with a homogeneous good tells us that every supplier will charge the same, low price. This situation of perfect competition arises because there is no product differentiation and suppliers cannot exploit any market power, because they have none. They end up with little profit.

Suppose they can now artificially differentiate the good. Ioana Chioveanu and Jidong Zhou say this could happen in the way the price information is presented, for example by omitting taxes, or by slicing the price into pieces for various components of the service, as with some flights, hotel nights or shipping fees. All this leads to price obfuscation. Firms then start to compete on price and price "frame." Just look on Amazon.com at how participating resellers can offer very different prices by adding wildly different shipping charges, or how sellers may add coupons for future purchases. And there are plenty of other examples.

Chioveanu and Zhou show that within such a framework various competitive equilibria can result, because consumers may get confused and fail to buy the best deal. This is equivalent to saying consumers are irrational, and of course anything can then happen. What is more interesting is that there is still competitive pressure, as suppliers may converge to more transparent pricing. Equilibria are such that firms randomize over both prices and price frames. If more firms enter the market (as it now supports more profits), it becomes more difficult to obfuscate prices using price frames, so firms make those even more complex, resulting paradoxically in more profits.

I think it is very welcome that in the US airline ticket prices now need to include all taxes and fees. It would not hurt if other prices also included all taxes, but as this model shows, this needs to be initiated by the government, as the industry will not do it voluntarily. And it also shows that there is something good to say about the European Union's efforts at standardizing goods and price frames.

Thursday, October 4, 2012

Why is there so little Economics in environmental policy?

A persistent frustration for economists is that policy makers, especially politicians, do not listen to them. That they then blame economists for bad outcomes is the cherry on top. One area where this has been most evident is in dealing with the environment. Why is it that policy makers (and environmentalists) are reluctant to adopt market-based solutions to environmental issues?

Dallas Burtraw looks at the case of the United States and concludes that this all boils down to politics, and more precisely to federalist institutions. In other words, economists do not take into account the institutional framework their policies need to fit into, a lament that has also resonated with implementers of development policies. In the case of environmental policy in the US, the issue is mostly that the states have to implement federal policy, and it is much easier to issue emissions caps than to coax the states into honestly managing a cap-and-trade market or collecting carbon taxes. For example, consider electricity markets, which are heavily regulated in some states, which means the price signal has less value. Or a state may try to attract industries by circumventing the impact of environmental policies if they are price-based. All in all, a political economy of the environment is needed as much as environmental economics.

Wednesday, October 3, 2012

Why do Indian and Mexican plants not grow?

When a business is successful, it grows. Successful businesses are also more likely to survive, and thus old businesses tend to be larger. Now define "business." The relevant business unit is difficult to ascertain, for example because of mergers and acquisitions, or simply because the boundaries of a firm are hard to establish (are contractors part of it? Subsidiaries?). Thus the common way to measure a business is at the plant level.

Chang-Tai Hsieh and Peter Klenow show that plants in India and Mexico do not grow with age, or grow much less than in, say, the United States. This points to a very inefficient allocation of resources, as successful plants should invest to grow: they are obviously better than the competition and can thus produce more efficiently or make better goods. With a simple simulation exercise, Hsieh and Klenow show that this lack of plant growth leads to a 25% productivity loss compared to US plant growth.
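To see why flat life cycles are so costly, here is a toy calculation in the spirit of, but much cruder than, their exercise (all numbers are illustrative): average productivity over the age distribution of plants is much lower when plants never improve after entry.

```python
# Toy calculation, much cruder than Hsieh and Klenow's exercise; all numbers are illustrative.
# Compare average productivity when plants improve with age versus when they stay flat.
import numpy as np

ages = np.arange(40)                          # plant ages 0 to 39
age_weights = 0.96 ** ages                    # stylized exit: 4% of plants close each year

tfp_growing = 1.03 ** ages                    # US-style: plants keep getting better with age
tfp_flat = np.ones_like(ages, dtype=float)    # India/Mexico-style: no improvement after entry

def average_productivity(tfp_by_age):
    # Productivity averaged over the stationary age distribution of surviving plants.
    return np.sum(age_weights * tfp_by_age) / np.sum(age_weights)

shortfall = 1 - average_productivity(tfp_flat) / average_productivity(tfp_growing)
print(f"productivity shortfall from flat life cycles: {shortfall:.0%}")
```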

The question is of course why plant growth is stifled in India and Mexico. For India, it is rather obvious, with policies that favor small businesses and actively prevent them from growing "too large." Many of these policies, which are heavily promoted by owners of incumbent plants, are being revoked, and this can in part be credited for the recent rapid growth of the Indian economy. As for Mexico, it has been suggested that when plants move from the informal to the formal sector of the economy, they are subject to more taxes and regulations. In both cases, inappropriate institutions are to blame.

Tuesday, October 2, 2012

Disclosing hospital quality works

Usually, better information leads to better outcomes (well, except for those people who could have exploited some information asymmetry). People can better evaluate their options, and they do not like uncertainty. The impact of information on behavior, however, is ambiguous. For example, I recently discussed the case of a supervision contract that needs artificial uncertainty to be constraint-efficient. Disclosing the quality of some goods provided by the state, such as schools, is contentious. In that case, it turns out to be beneficial.

The same seems to apply to hospitals. Lapo Filistrucchi and Fatih Cemil Ozbugday study data from German hospitals when mandatory quality reports were introduced. This new policy improved overall quality, and more so in hospitals that were initially graded inferior. Those that were initially better attracted more patients thereafter, so the public did look at those grades. And where hospital density was higher, the authors find more quality improvements, indicating that competition is at work in beneficial ways. An improvement in overall well-being is thus likely (we cannot be sure, as the use of resources to reach quality improvements is not measured).

Monday, October 1, 2012

A negative discount rate for climate policy?

Ever since the Stern Review on climate change came out, the debate has raged about what the appropriate discount rate should be to evaluate future environmental outcomes. Biologists do not see the point of discounting, but they have a biased view, as they advocate preservation at any cost (much like doctors advocate preserving every life at any cost). In any case, computing the future benefits of something over an infinite horizon is impossible if there is no discounting and the benefits do not decrease.

Marc Fleurbaey and Stéphane Zuber actually advocate that we should use a negative discount rate. Technically, it is only possible to compute a finite net present value with a negative discount rate if the periodic values decrease at a rate faster than the absolute value of this negative discount rate. The key here is not to think of the discount rate as a fixed number, but rather as the result of the ratio of marginal utilities from successive periods (or generations). If future generations are worse off, the discount rate indeed becomes negative. Whether future generations will indeed be worse off is difficult to ascertain, but it could happen. Yet this is not quite how we should think about it.

The proper measure compares the marginal utility of the person who pays today to alleviate the impact of climate change with the marginal utility of the person who benefits from this action in the future. This is a person-to-person calculation. The present person is likely rich, the future one likely poor. Thus the discount rate that results from the comparison of these marginal utilities is negative, especially because developing economies are likely to suffer the most from climate change.
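In terms of marginal utilities, and abstracting from pure time preference, a stylized way to put the argument is that the implicit discount rate \(\rho\) over a horizon \(T\) between the contributor today (consumption \(c_0\)) and the beneficiary in the future (consumption \(c_T\)) satisfies

\[ (1+\rho)^{T} = \frac{u'(c_0)}{u'(c_T)}, \]

so that \(c_T < c_0\) implies \(u'(c_T) > u'(c_0)\) and hence \(\rho < 0\).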