Much of the experimental economics literature is about finding situations where some of the fundamental axioms of rational utility theory are violated. And you are always going to find someone who does not act rationally. But this literature often understates how often people are actually rational and how this translates into better outcomes.
Syngjoo Choi, Shachar Kariv, Wieland Müller and Dan Silverman study the characteristics of rational people. Specifically, they conducted a field experiment on 1182 households in the Netherlands to determine whether they behave consistently with revealed preference theory. They then combine these results with a large array of socio-demographic and economic characteristics. They find that the more rational people are, the higher their income and education. Nobody will be surprised to learn that men are more consistent, but I am shocked to see that young people are more rational. Why would life experience make you deviate from rationality? Finally, the impact of rationality on outcomes is substantial: a one standard deviation increase in consistency is associated with a 15-19% increase in wealth.
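What "consistent with revealed preference theory" means operationally is that observed choices satisfy the Generalized Axiom of Revealed Preference (GARP). Here is a minimal sketch of such a check — my own illustration with made-up data, not the authors' code (if I remember correctly, they measure the distance to consistency with Afriat's Critical Cost Efficiency Index rather than a pass/fail test):

```python
import numpy as np
from itertools import product

def garp_consistent(prices, bundles):
    """Check the Generalized Axiom of Revealed Preference (GARP).

    prices, bundles: arrays of shape (T, K), T observed choices over K goods.
    Observation i is directly revealed preferred to j if bundle j was affordable
    when i was chosen, i.e. p_i . x_i >= p_i . x_j. GARP is violated if i is
    (possibly indirectly) revealed preferred to j while j's own bundle was
    strictly more expensive than i's bundle at j's prices.
    """
    p, x = np.asarray(prices, float), np.asarray(bundles, float)
    T = p.shape[0]
    expend = p @ x.T                       # expend[i, j] = p_i . x_j
    R = expend.diagonal()[:, None] >= expend   # direct revealed preference
    # transitive closure (Warshall)
    for k, i, j in product(range(T), repeat=3):
        if R[i, k] and R[k, j]:
            R[i, j] = True
    # look for a violation
    for i, j in product(range(T), repeat=2):
        if R[i, j] and expend[j, j] > expend[j, i]:
            return False
    return True

# toy example: two goods, two observations that happen to be consistent
prices  = [[1.0, 2.0], [2.0, 1.0]]
bundles = [[4.0, 1.0], [1.0, 4.0]]
print(garp_consistent(prices, bundles))  # True
```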
Thursday, March 31, 2011
Wednesday, March 30, 2011
Fraud cycles
The evolution of crime over time is much studied, and there is a lot of agreement that demographics are very important for many crime categories that are the "specialty" of young adults, like violent crimes. Fraud, however, cannot be tied to a particular age category, yet fraud statistics exhibit a remarkable cyclical pattern, a pattern that is not correlated across fraud categories or with the business cycle. What could give rise to such cycles?
Jiong Gong, Preston McAfee and Michael Williams come up with a theory that can rationalize these cycles. Once a lot of fraud cases make the news, people become more careful and new laws are put in place, which makes fraud more difficult. As fraud then disappears from the picture, people become less careful, and fraudsters find new and innovative ways to make money, and statistics show a comeback. That reminds me of privatization-nationalization cycles.
Of course, fraud statistics are not perfect. Indeed, they only measure fraud arrests, not fraud occurrence. One could argue that more people get arrested for fraud when victims are more vigilant, not less. That would be an entirely different story of fraud cycles.
Tuesday, March 29, 2011
Which are the most efficient universities?
University rankings are not particularly useful because they mostly measure outputs and hardly the inputs. And when they do measure inputs, having more of them counts as better. This means that rankings reflect size and resources, not how well resources are used or how much students improve from when they started their studies. It also implies that any university without a medical school or an engineering school starts with a disadvantage. Internationally, university rankings have become very important, to the point that, for example, France is now reversing the earlier splitting of its large universities into field-specific institutions. The new monster universities, once again covering all fields, will rank much better thanks mostly to their sheer size.
To counteract all this, you need to measure the efficiency of universities. Thomas Bolli does this for 273 universities across the world by estimating a production possibilities frontier. Unfortunately, the sole measured input is full-time equivalents of staff, while the outputs are FTE of undergraduate and graduate students, and citation counts. But it is a start. Universities in Switzerland and Israel appear to be very efficient (and indeed they are small and generate a good amount of research) while those in the UK seem particularly inefficient. That should fan some flames in the debate on university financing there.
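To fix ideas, here is the crudest version of such an efficiency comparison — a single composite output divided by staff FTE, normalized by the best performer. This is purely my toy illustration with invented numbers; Bolli estimates an actual production possibilities frontier, which handles multiple outputs properly:

```python
# hypothetical data: staff FTE (input) and a single composite output
# (say, weighted student FTE plus citations); names and numbers are invented
universities = {
    "Uni A": {"staff_fte": 1200, "output": 9000},
    "Uni B": {"staff_fte": 300,  "output": 3300},
    "Uni C": {"staff_fte": 2500, "output": 15000},
}

# productivity = output per unit of input; efficiency = productivity
# relative to the best performer (a crude constant-returns frontier)
productivity = {u: d["output"] / d["staff_fte"] for u, d in universities.items()}
best = max(productivity.values())
efficiency = {u: p / best for u, p in productivity.items()}

for u in sorted(efficiency, key=efficiency.get, reverse=True):
    print(f"{u}: {efficiency[u]:.2f}")
```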
Monday, March 28, 2011
Fertility differences and agricultural techniques
There are times when you read a paper and you really wonder how the authors came up with the idea to check a particular correlation in the data, because it seems so far-fetched. But such a correlation can be beautiful if it also comes with a nice theory.
The correlation that Alberto Alesina, Paola Giuliano and Nathan Nunn study is between current fertility and the historical adoption of plough agriculture. OK, I did not think about that one. But now that they find a nice correlation, how could one explain it? They argue that it has to do with the fact that women and children are not particularly useful when ploughing, as strength is required. The traditional task of weeding, which fell on women and children, is not necessary with ploughing. Thus, a preference for fewer children is ingrained in the culture of these regions to this day.
Saturday, March 26, 2011
The unnecessary problems of the Euro
European leaders are currently struggling over a package to save the Euro, pouring large amounts of money into funds that should stabilize the fiscal situation in Greece, Portugal, Ireland and potentially other countries. It seems to me that this is a completely unnecessary problem, and all this grief could have easily been avoided with a simple change in policy.
Just look at what is happening in the United States. Several states are in serious financial difficulties and, as several times in the past, California is considering issuing IOUs, thereby essentially declaring it is insolvent. Is there any expectation that other states or the federal government will rush to California's aid because the dollar is threatened? Of course not, despite the fact that California is the largest state in the Union.
It should be the same for the Euro. None of the member countries can monetize its debt on its own, and the only reason the Euro is threatened is that markets expect other countries to rush to help, thereby sending a message that monetary policy could be influenced by what is happening in those small countries. And why is this belief so well anchored? Because Europeans indeed rush to help (talk about a nice example of self-fulfilling expectations) and because of this silly notion that all national debt in Europe is fungible (talk about a nice example of the tragedy of the commons). Now of course it is a bit late to rectify those beliefs, but had it been clear that no rescue package was in sight, those countries would probably not have taken such a risky fiscal path in the first place (talk about a nice example of moral hazard). I guess that those silly policy decisions all boil down to European politics, once more (talk about a nice example where economists' advice has been ignored, and they will get blamed for it anyway).
Friday, March 25, 2011
What influences Fed presidents?
The European Central Bank is still a recent creation, so you can excuse its national governors for putting their country's interest first in the conduct of monetary policy. What about another federal central bank that is much older and whose governors' territories do not necessarily coincide with political boundaries, the Federal Reserve System of the US?
Bernd Hayo and Matthias Neuenkirch have analyzed the speeches of Federal Reserve presidents over a span of twelve years and come to the conclusion that they represent national and regional interests about equally, except when it is not their turn to vote, in which case their region comes first. They find this by trying to fit a Taylor rule to their positions, using a price index and national and regional unemployment rates. Unfortunately (and surprisingly), there are no regional price indexes in the US, which could have reinforced the regional focus of the presidents. Still, I am surprised how much they lobby for the general interest. After all, they are selected to represent their region.
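A stylized version of the kind of reaction function they fit (my notation, not necessarily the authors' exact specification) would be

$$ i_{j,t} = \alpha + \beta\,\pi_t + \gamma_{nat}\,u^{nat}_t + \gamma_{reg}\,u^{reg}_{j,t} + \varepsilon_{j,t}, $$

where $i_{j,t}$ is the policy stance expressed by president $j$ at time $t$, $\pi_t$ is inflation, and the relative size of $\gamma_{nat}$ and $\gamma_{reg}$ reveals whether national or regional conditions drive the rhetoric.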
Thursday, March 24, 2011
You want to restrict bankers' pay
There has been, and still is, much outrage about the large bonus payments bankers get. What the public does not understand is that bonus pay is a very large part of total pay, and it is designed that way to encourage bankers to perform really well. And they certainly put in the hours. For example, bonus pay has been criticized because there is most often no "malus," but given that base pay is relatively low, the low base should effectively play that role. The main criticism is aimed at the disparity between these bonus payments and the average pay of a worker. This, however, is not something that should be regulated at the level of bonus pay, but through redistribution with income taxes. In this regard, whether it is regular pay or bonus pay makes no difference. So, should bonus pay in banking then be left unregulated?
John Thanassoulis does not think so. He argues that as banks compete for top bankers and try to shift risk onto them, they end up paying them too much and all in bonuses. This is optimal for the bank as it lowers its costs right when things get critical. But as a consequence, the bank gets too deep into risky activities, as competition for bankers drives bonuses higher than is socially optimal, especially if there is a contagion risk of default for other banks. So you want a regulator to limit bonuses, but in a flexible way, or the benefit of having bonuses in the first place gets eroded. Indeed, it is the top brass that sets bank-level risk, whereas other employees all the way down to secretaries (who also get bonuses) are less influential, even collectively, on aggregate risk. Thus the idea is not to cap bonuses individually, but at the bank level as a proportion of the balance sheet (which is what matters for default). The pay structure would then presumably be readjusted by the bank, relying more on bonuses where it matters most. Taxing bonuses has no impact on risk, though, except for reducing bankers' pay.
Another possibility could be the dynamic incentive accounts I mentioned before.
Wednesday, March 23, 2011
Modelling without theory
In Economics, we have adopted the scientific method much like other sciences. As we teach our students, it consists of the following steps:
- Observe regularities in the data.
- Formulate a theory.
- Generate predictions from the theory (hypotheses).
- Test your theory (is it consistent with data?)
David Hendry just published a paper about the scientific method in Economics that appears to fly in the face of what I just described. Here is an attempt to summarize his stand, and I apologize for quoting quite liberally:
- Specify the object for modeling, usually based on a prior theoretical analysis in Economics. An example of such an object is y=f(z).
- Defining the target for modeling by the choice of the variables to analyze, y and z, again usually based on prior theory. This is about deriving the data-generating process of the variables of interest, or fitting an equation with some statistical procedure.
- Embed that target in a general unrestricted model (GUM), to attenuate the unrealistic assumptions that the initial theory is correct and complete. The idea is to add other variables, lags, dummies, shift variables and functional forms to improve the empirical accuracy of the initial model.
- Search for the simplest acceptable representation of the information in that GUM. Or, now that the model has become huge (and may contain more variables than data points), let us get rid of some of them without losing too much in accuracy.
- Rigorously evaluate the final selection: (a) by going outside the initial GUM in step three, using standard mis-specification tests for the ‘goodness’ of its specification; (b) applying tests not used during the selection process; and (c) by testing the underlying theory in terms of which of its features remained significant after selection.
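To make the flavor of steps three to five concrete, here is a minimal backward-elimination sketch of the "search for the simplest acceptable representation" — my own toy illustration; Hendry's actual algorithm (implemented in Autometrics) uses multi-path searches and batteries of diagnostic tests:

```python
import numpy as np

def general_to_specific(y, X, names, t_crit=1.96):
    """Crude backward-elimination sketch of general-to-specific selection:
    start from a large ('general unrestricted') model and iteratively drop
    the least significant regressor until all |t| exceed t_crit."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        dof = len(y) - len(keep)
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        tstats = beta / np.sqrt(np.diag(cov))
        worst = np.argmin(np.abs(tstats))
        if abs(tstats[worst]) >= t_crit:
            break                      # every remaining regressor is significant
        keep.pop(worst)                # drop the least significant one
    return [names[i] for i in keep]

# toy example: y depends on x1 only; x2 and x3 are pure noise
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + rng.normal(size=n)
print(general_to_specific(y, X, ["x1", "x2", "x3"]))  # likely ['x1']
```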
Apart from the fact that this is really the blueprint for an automated data mining exercise that is not driven in any way toward answering a particular policy question, this procedure disregards not only the scientific method, but also Occam's Razor and the Lucas Critique. What use is it to learn that the CPI follows a polynomial of degree five with three lags on exports of cabbage, the number of sunny days, 25 other variables and three structural breaks (not an actual example used by Hendry, but it could be)? If you want to make some very short term forecasts, that may be accurate, and this method is abundantly used in the City or on Wall Street by neural network "experts." But when it comes to advising policymakers, you need to have some Economics, and by that I mean economic theory, to explain why economic agents behave in such a way and what an intervention would lead to.
The scientific method starts with the observation of the data. Hendry dismisses this with a sleight of hand, stating that stylized facts are "an oxymoron in the non-constant world of economic data." What if there are constants in economic data? In fact there are plenty, and this is what theories are trying to explain. Has Hendry never observed something in his surroundings that he then tried to explain? Or does he really spend his days feeding linear equations into his computer to see what it can come up with from his database?
Such papers, especially by people who enjoy respect like Hendry does in the UK, deeply upset me. To top it off, there are 33 self-citations.
Tuesday, March 22, 2011
The spaceship problem
Suppose you have to plan a very long term mission in space. It will last for many years, and you need to provide a group of people the means to live in a hermetic environment. You do not have access to Star Trek technologies like warp speed, replication and teleportation. Your population can reproduce, but life length and quality of life depend on resources and population density. How many people should be on such a mission? This is known as the spaceship problem. Of course, economists have something to say about this.
Pierre-André Jouvet and Grégory Ponthière are not going to solve the problem, as there are too many biological and physical constraints, but they point out that the solutions will contradict utilitarianism. They focus on the trade-off between the number of people and their length of life. Indeed, longevity impacts population size and thus density. They assume that a social planner uses the sum of residents' utilities as a criterion and, unfortunately, that resources are unlimited, which makes the paper stray away from Economics.
What Jouvet and Ponthière really want to do is compare different social welfare criteria in this environment. The Classical Utilitarian, for example, sums the utility of all individuals, the Average Utilitarian only counts the living ones. In a model without reproduction and a finite mission time, Classical Utilitarianism yields a small population living very long, while the second may want a large population that lives for a short time. Add reproduction to the mix and anything can happen depending on parameter values and initial population size. Make the mission life infinite, and the authors run into problems and need to define additional social welfare parameters. That is mainly because there is no discounting, and infinitely lived economies are then ill-defined.
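In standard textbook notation (the paper's exact formulation may differ), the two criteria are

$$ W_{C} = \sum_{i=1}^{N} U_i, \qquad W_{A} = \frac{1}{N}\sum_{i=1}^{N} U_i, $$

where $U_i$ is the lifetime utility of individual $i$ and $N$ is the (endogenous) population size. The first rewards adding any life worth living, the second can prefer a smaller population with higher per-capita utility — hence the very different spaceship crews.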
What do I learn from this exercise? It is not very clear, except that social welfare criteria matter, adding utilities gives us a lot of trouble and that discounting is essential. But we knew that already, even when the spaceship is called Earth.
Monday, March 21, 2011
The impact of job search monitoring
Unemployment insurance is thought to be rife with abuse, especially as job seekers do not appear to search that much. For example, time use surveys have established that they spend very little time on job search, the median being zero minutes on any given day (see previous post on this). So it seems natural that you want to make sure job seekers exert sufficient effort to obtain benefits. But how effective is such monitoring?
Bart Cockx and Muriel Dejemeppe study the imposition of stronger monitoring on long-term unemployed workers in Belgium in 2004. For all practical purposes, there is no limit to the duration of unemployment insurance benefits there, so it seems baffling that serious monitoring was only introduced recently, and then only after eight months of unemployment. And even then, it only takes the form of a stern letter threatening monitoring. Before this, monitoring was targeted only at those unemployed for more than 21 months...
This means that before 2004, the unemployed essentially had free rein. After that, there is a supposedly credible threat of monitoring after eight months. In Wallonia, the letter is followed up two months later with a counseling session. In Flanders, there is no systematic counseling. That is probably as close as one gets to a clean natural experiment of a transition from no monitoring to some monitoring. What impact did it have? In Flanders, the transition to employment after eight months of unemployment increased by 28%. In Wallonia, the increase is 22%; the authors conjecture it is lower, despite the more credible threat, due to worse labor market conditions. But especially in Flanders, it appears the shorter unemployment duration means that workers end up with worse jobs than before, both in terms of wage and duration of employment. And Walloon females are more likely to transition into sickness insurance, which has the same benefits as unemployment insurance.
Friday, March 18, 2011
Fertility and self-control
The Beckerian theory of fertility decisions in the family is based on the rationality and self-control of the involved parties, which is in stark contrast to the Malthusian theory of population growth, which relies on people breeding without control. As so often, the truth lies somewhere in between: it is clear people guide their fertility outcomes, but there is a substantial stochastic element to them.
Bertrand Wigniolle reinterprets Becker's theory by adding a lack of self-control in the form of hyperbolic discounting, that is, discounted values fall rapidly in the near future and more slowly in the distant future. Wigniolle uses a three-period model with parents valuing the number and quality of children. In period 1, they choose the number of children and pay time costs for rearing them. In period 2, they choose their education and bear the associated costs. In period 3, they only enjoy their children. Hyperbolic discounting implies that every period they regret some of their past choices. The model yields very different results depending on parameter values. If parents have grounds to invest in the education of their children, then fertility is lower due to the lack of self-control. Call this a developed economy. If parents have no reason to invest in education (say because its return is low), then fertility is higher with hyperbolic discounting. Call this a developing economy. And this is very sensitive to parameter values. A further reason to push for more schools and opportunities to use human capital in developing economies.
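The standard way to formalize this lack of self-control is quasi-hyperbolic (β-δ) discounting; in a three-period setting, period-1 preferences would look roughly like this (my notation, not necessarily Wigniolle's exact specification):

$$ U_1 = u_1 + \beta\left(\delta u_2 + \delta^2 u_3\right), \qquad 0<\beta<1,\; 0<\delta<1, $$

where β < 1 adds extra impatience toward everything beyond the present. Once period 2 arrives, the parent weights period 3 by βδ instead of δ, which is exactly the source of the regret mentioned above.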
Thursday, March 17, 2011
Rent seeking in divorce
Divorce breaks a marriage, and courts rule on the compensation of the involved parties. In countries where no-fault divorce is allowed, how compensation is allocated depends very much on the outside option of each party, particularly when negotiations happen without the involvement of a judge, who is then just a threat point. For example, when only consensual divorce is allowed, the partner not seeking divorce has all the bargaining power. But when no-fault unilateral divorce is allowed, the roles are completely reversed.
Sietse Bracke, Koen Schoors and Gerd Verschelden study how the introduction of unilateral divorce changes outcomes in Belgium, where consensual divorce was already permitted. In particular they look at self-sacrifice, for example how some household member may specialize in home production and thus jeopardize her labor market potential and bargaining position in case of divorce, especially when there is no-fault unilateral divorce. Indeed, one can view this specialization as an investment in future rents from marriage, and divorce annihilates those.
To analyze this, the authors collected survey data on divorces in four cities over a year. Using alimony as a signal of bargaining power, they find that alimonies are higher or more likely for long marriages, for no-fault divorces, and when there is significant self-sacrifice. That would all be consistent with theory, but unfortunately these results are somewhat tainted by the fact that the law gives judges very similar directives for handling divorce outcomes.
Wednesday, March 16, 2011
Properly weighting social welfare functions
When it comes to evaluating optimal policies, one has to define a social welfare criterion. That is problematic as soon as there is heterogeneity. A popular criterion is Pareto optimality, but it is weak in the sense that it is not very restrictive. Or one can look at a political equilibrium, but this ignores how much people care about various policy outcomes. Another way, one that makes microeconomic theorists cringe, is to add up the utility of everyone. They cringe because utility functions are only defined up to a monotonic transformation, and thus not comparable across individuals. But sometimes you have to find a way, and it is commonly assumed that all individuals have the same utility function, but potentially different utility levels. Yet adding utilities up has the drawback that the optimal policy will always be about equalizing income and consumption across individuals, because poorer ones have a higher marginal utility of consumption. But not all policies should be primarily about redistribution. The typical solution to this problem is to apply so-called Negishi weights, which essentially freeze the initial distribution of income and thus allow one to concentrate on the purpose of the policy.
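For concreteness, the Negishi construction is the standard textbook one (not specific to the paper discussed below): the planner maximizes a weighted sum of utilities with each weight set to the inverse of the agent's marginal utility of consumption at the competitive equilibrium allocation,

$$ \max_{\{c_i\}} \sum_i \lambda_i\, u(c_i) \qquad \text{with} \qquad \lambda_i = \frac{1}{u'(c_i^{*})}, $$

so that at the equilibrium allocation the weighted marginal utilities are equalized and the planner has no mechanical incentive to redistribute.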
Alexis Anagnostopoulos, Eva Carceles-Poveda and Yair Tauman offer a different solution to this problem. While the Negishi approach amounts to weighting each individual by the inverse of her marginal utility at the maximal outcome, this result relies on the existence of complete markets. Under incomplete markets, the set of weights may be different. To give credit to the precise formulation of the problem, I quote the authors here:
We first define for every set of individual weights and for every social welfare function the contribution of every individual to the total welfare through the individual’s initial endowments. We then provide an axiomatic approach to the notion of the per unit contribution of every good and every individual, where the contribution of an individual to the total welfare is the total contribution of his initial endowments. We then define a set of individual weights to be proper iff the weighted utilities of every individual from this allocation are proportional to the contribution of the individual to the total welfare as defined by this set of weights.
The axiomatic approach consists of four axioms that characterize an elegant family of contribution mechanisms. The first axiom asserts that the per unit contribution should be independent of the units of measurement of the goods. The second asserts that if two (or more) goods play the same role in the welfare function, they should have the same per unit contribution. The third axiom asserts that if the welfare function can be broken into different components, then the per unit contribution of a given good is the sum of the per unit contributions arising from the different components. The last axiom guarantees that the per unit contribution is a continuous mapping with respect to an appropriate norm.
It is shown that every contribution mechanism that satisfies these four axioms is uniquely determined by a non negative measure on the unit interval. The selection of a specific contribution mechanism (or equivalently the selection of a specific nonnegative measure on the unit interval) determines for a given economy and a given set of weights a proper constrained efficient allocation and a proper set of weights.
This is a very exciting paper that should lay the foundation for a better assessment of policies than the silly adding up of utilities that is typically done.
Tuesday, March 15, 2011
Health cults in ancient Greece
Ancient Greece is a fascinating period, as it is the start of the rational and scientific study of the world, and many scientific principles were laid down then. The Greek philosophers were, in particular, the first to think seriously about the role of institutions, markets and the functioning of government. In terms of health and medicine, we have all learned about the first attempts to explore and rationalize the human body, using a secular and scientific approach that was unparalleled until much later in history.
Carl Hampus Lyttkens points out that there was also a counter-movement where health care was leaning much more on religion. He also remarks that this is not unlike what we experience now with alternative medicine that has many followers and is even part of state sponsored health care in some countries. Calling these health cults, Hampus Lyttkens claims they arise because people are afraid of the uncertainties of life and cling to anything to reassure themselves. Just think about how many people believe in life after death while there is no scientific evidence for it. And healing cults are often, now and then, the realm of those who cannot afford the services of the scientific healers.
Monday, March 14, 2011
Peer effects in education
The reason parents try to get their children into good schools is not the better teachers, it is the better classmates. If they are among stable, studious and ambitious students, they are expected to perform better. (Of course there is also the tactic of sending your kid to an inner-city school, where she outperforms the others and lands easy acceptances to colleges, with scholarships, if all goes right.) But how much do these peer effects matter?
Eleonora Patacchini, Edoardo Rainone and Yves Zenou study the social networks of children in the AddHealth longitudinal survey and find that it is the very last years of high school that matter, but those years matter a lot as their impact is very persistent. Specifically, they observe that if a child's friends have the equivalent of two more completed high school educations, that child will get 3.5 months more education. But it would be good to know whether this dominates any effect from neighborhoods, which is taken into account but not reported.
Friday, March 11, 2011
On the decline of the US manufacturing wage
It is always interesting to see how real wages evolve, as they allow us to understand how much a worker can buy with his income. Usually, this is done by dividing the nominal wage by a price index, usually for the commodities said worker would typically buy. The results may vary considerably, as a different price index is needed for different workers, and the basket of goods may also vary over time. The latter is particularly important when the sample period is long. It also depends on whether you look at hourly, weekly or annual income, and how benefits are included.
John Pencavel reviews a centuries-old literature on the topic that came to the conclusion that, except for some periods of stagnation, real wages generally trended upward. He then comes up with his own indexing procedure, and finds that real wages in the US manufacturing sector have declined by 40% since 1960. Wow, this seems like a really big result, and it requires understanding how it was computed. Indeed, Pencavel does not measure the real wage in the conventional way, but rather as the ratio of what workers get to what they could get if the firm made no profit. This does not necessarily mean that the buying power of the worker has decreased by 40%, but rather that a smaller share of firm income goes to labor. With the increased mechanization of manufacturing, this evolution should not surprise many people. But it is not necessarily a 40% fall in real wages as advertised in the paper's abstract.
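If I read the description right, the measure is roughly the actual wage divided by the hypothetical zero-profit wage — something like (my own stylization, not Pencavel's exact formula)

$$ \omega_t = \frac{w_t}{\bigl(\text{revenue}_t - \text{non-labor costs}_t\bigr)/L_t}, $$

which is closer to a labor-share statistic than to a purchasing-power statistic; the two can move very differently.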
Thursday, March 10, 2011
Using the WTO to overcome a prisoner's dilemma
It looks like the threat of competitive increases in trade tariffs has vanished, at least for the moment, as economies are getting back in shape. This episode highlighted how tariffs are part of a prisoner's dilemma: increasing tariffs is good for you, at least in the short term, as it gives more market share to local firms and/or more revenue to the state. But it hurts the foreign country, and if everybody does it, joint welfare is reduced because of the lower gains from exchange, the misallocation of productive resources and the loss of competitiveness of protected industries. It is precisely because of this prisoner's dilemma that the GATT (General Agreement on Tariffs and Trade) and then the WTO (World Trade Organization) were put in place.
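As a reminder of the structure, here is the tariff game in its simplest two-country form with purely illustrative payoffs of my own (rows are country 1's choice, columns country 2's; F = free trade, T = tariff):

$$ \begin{array}{c|cc} & F & T \\ \hline F & (3,3) & (1,4) \\ T & (4,1) & (2,2) \end{array} $$

Raising the tariff is a dominant strategy for each country, yet mutual tariffs at (2, 2) leave both worse off than free trade at (3, 3) — which is exactly why an external commitment device like the GATT/WTO has value.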
Renee Bowen takes this reasoning further by looking at a multilateral prisoner's dilemma, and interestingly the optimal institution that emerges looks very much like the WTO's dispute settlement mechanism, in that countries cannot retaliate while a dispute is being settled. The key is that once a large enough number of countries participate in the WTO, the threat of sanctions is sufficient to obtain settlement and nobody is compelled to jump the gun with retaliation.
Wednesday, March 9, 2011
Designing matching mechanisms in medieval times
There is a remarkable diversity in economic institutions across the world, and it is always interesting to understand why and how they developed differently. Think for example of the large variety of ways tariffs and taxation are implemented. It is of special interest to do so for economies of a few centuries ago, because institutions then evolved in much greater isolation from each other and with little guidance from pure theory.
Lars Boerner and Daniel Quint study how brokerage rules were established in 42 European merchant towns between the 13th and 17th centuries. Brokers are important because they facilitate the market clearing process, but they may also abuse their power and need to be regulated. Sometimes these rules are seller-friendly, and sometimes they are buyer-friendly. Seller-friendly rules attract foreign merchants by giving them more surplus. It is not obvious whether this improves welfare for the local population. The paper tries to show what made the towns go one way or the other. Part of the answer comes from town effects, but the type of good being exchanged matters a lot as well. For example, food markets usually have buyer-friendly rules, as this most benefits the local population, which is often also the final consumer. For textile and leather markets, rules are more seller-friendly, probably because the buyer is more likely to be from outside town. There is much more in the paper.
Tuesday, March 8, 2011
The smart children of vengeance
As someone who has been raised in a non-violent environment, I am often surprised how people in some circles easily resort to vengeance and violence while a conciliatory attitude could have resolved "issues" quickly and efficiently. There is certainly a good deal of learned behavior that determines whether you are of a conflicting or conciliatory type, and this learning comes from example, in the family, among peers and in society. Society is important (say, compare Scandinavia to the Balkans) but there are also striking differences within societies. That is where parents may come in.
Ruby Henry studies how the use of retaliation is transmitted to children, first using a model of education effort by parents, and then using the UK National Childhood Development Survey. The theoretical prediction is confirmed that high-cognitive parents are better able to transmit their values and override the peer culture, as long as the parents are retaliators. Indeed, if a child is told to retaliate and meets a forgiver, he wins and his values are reinforced. If he is a forgiver and meets a retaliator, he loses and is upset by the teachings of his parents. In the long run, this means that humankind will settle on a retaliating culture. But I do not think this is what we observe. In fact, there are fewer wars, people abide more by contracts and, I think, show more respect for the rule of law over time. Correct me if I am wrong.
Monday, March 7, 2011
Another French experiment with work hours going bad
The French have a special knack for messing with labor markets. The last spectacular failure was the law that limited almost everyone's workweek to 35 hours in the hope this would spread total hours over more people, lead to more employment and solve a chronic unemployment problem. Well, it did not, and it led to lost productivity and widely ridiculed controls. And I doubt many economists were surprised. With the election of Sarkozy, this law was quickly scrapped and an equally ridiculous law from the other end of the spectrum was introduced.
Pierre Cahuc and Stéphane Carcillo discuss the French policy of making overtime work tax exempt. One can really question what Sarkozy had in mind with this policy, as the adverse consequences are all too obvious. First, what counts as overtime is easily manipulated, and suddenly many regular hours became overtime hours. Second, if the intent was to increase total hours of work, it was bound to fail if most of the overtime comes from workaholics who would work no matter what the wage is. This is why you need to tax them instead of subsidizing them. And Cahuc and Carcillo find that indeed total hours hardly changed. So all this amounted to is a generous lump-sum subsidy to highly skilled workaholics. Great.
Friday, March 4, 2011
Seasonal adjustment is difficult
As undergraduates, we are taught to make sure the macroeconomic data we are dealing with is seasonally adjusted. We are told that statistical offices remove the seasonal factors in a way that is close to regressing the data on seasonal dummies and taking moving averages. If you really look into this, as so often, it turns out things are much more complex than that, and subtleties matter.
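As a reference point, the "undergraduate" version of seasonal adjustment can be written in a few lines — a naive sketch with a trend and quarterly dummies, nothing like the machinery statistical offices actually use:

```python
import numpy as np

def naive_seasonal_adjust(y, period=4):
    """Remove seasonal means from a series by regressing on a linear trend
    plus seasonal dummies, then subtracting the (demeaned) seasonal part."""
    y = np.asarray(y, float)
    n = len(y)
    t = np.arange(n)
    # design matrix: constant, linear trend, and period-1 seasonal dummies
    D = np.zeros((n, period - 1))
    for s in range(period - 1):
        D[t % period == s, s] = 1.0
    X = np.column_stack([np.ones(n), t, D])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    seasonal = D @ beta[2:]
    seasonal -= seasonal.mean()        # keep the adjustment mean-preserving
    return y - seasonal

# toy quarterly series: trend + strong fourth-quarter effect + noise
rng = np.random.default_rng(1)
t = np.arange(40)
y = 100 + 0.5 * t + 5.0 * (t % 4 == 3) + rng.normal(0, 0.5, 40)
print(naive_seasonal_adjust(y)[:8].round(1))
```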
Stephen Pollock and Emi Mise provide a technical review of the various methods and look at some alternatives. Broadly speaking, there are three strands of techniques. The first is based on ARIMA, the second removes seasonal frequencies found in a periodogram, and the third relies on clear distinctions between fundamental and seasonal components in spectral analysis. The difficulties are compounded by the fact that data usually have a trend, which may not be loglinear, and the data thus require pre- and post-treatment. And as Pollock and Mise show, the choice among these methods matters, even for dating turning points. I can imagine this becomes even more important when one throws the data into a regression, especially if the series have been detrended in different ways. And it is rare to see statistical offices declare what method was used.
Thursday, March 3, 2011
Does it make sense to open new universities?
That question may not make sense in the US or the UK, or some other European countries that face very serious public budget constraints. It also does not make much sense given the peak in student attendance in many OECD countries. But where student numbers will keep increasing, it is a good question whether one should increase the size of universities or their number.
Berardino Cesi and Dimitri Paolini consider the question from the angle of students' mobility constraints. Suppose students differ by ability and by location, and mobility is costly. A monopolistic university will only attract the ablest. Adding a new, local university is then welfare improving. Suppose now that mobility costs are rather low. Adding a second university is now welfare decreasing because of a peer effect. Indeed, high ability students now get pooled with low ability ones, and they suffer through adverse peer effects. And given that some students simply do not belong in a university, we are better off with fewer universities and fewer college students.
Wednesday, March 2, 2011
Latin American home owners are happier, unlike US ones
It is often claimed that owning a home makes people happier and leads them to contribute more to their community. In an earlier report, I pointed out that this is a myth for the US homeowner. What about elsewhere?
Inder Ruprah finds that Latin American home owners are indeed happier. This is obtained from a survey where people declare how happy they are, the reliability of which many researchers have called into question. But happiness studies are slowly gaining more acceptance, especially when results are clear cut, like here. Of course, homeownership could be correlated with some unobservables that matter a lot for happiness, for example economic and social standing. There is a variable that could capture this in the regression, "Interviewer assessment of economic situation of the household," but I have no idea how reliable it is.
PS: The pdf file is 7.3 MB. It took me five attempts to download it. There are only a few very simple graphs and histograms in the paper, in other words no reason for such a large file, except for unnecessary front and back covers. The IADB is apparently willing to waste bandwidth that way, even though its target audience in Latin America may not necessarily enjoy fast internet.
Tuesday, March 1, 2011
Affirmative action and stereotypes
The ghetto culture is a very strong absorbing point. Once you are there, it is very difficult to get out, and your children will also have a very hard time. The problem with the ghetto culture is that it harbors values that are very different from mainstream culture, in particular regarding work habits. Those values are instilled by your environment (family, neighbors, community), and when you grow up around peers with poor work habits, it is difficult to acquire better habits.
Maria Sáez-Martí and Yves Zenou make this point and add that even when parents are forward-looking and care about their offspring, they will not make the investment to teach their children good habits if employers take the cultural environment of a potential employee as a signal of work habits. They discriminate, and because they discriminate they turn out to be correct. That is a vicious circle that is difficult to beat, but initiatives like affirmative action can overcome this. The condition is that quotas be high enough.
Affirmative action can be implemented in two ways: imposing quotas on good jobs, or imposing quotas in the majority group (putting some of the ghetto in the mainstream group of applicants). Low quotas also improve work habits in the ghetto in the second case, but are detrimental in the first case. This is because the advantage of better work habits is absent, wages of good workers are lower and parents put less effort into educating their kids. With the second case, wages of good workers remain high, and parents will help their kids, who then have a chance to prove themselves. An alternative policy, integration, is only beneficial to the ghetto, and obviously the others will resist integration because it hurts their work habits.