Thursday, January 30, 2014

Capitalism's rapture

Economics is based on a small set of very powerful axioms that are the foundation of utility theory, general equilibrium theory, and more. Experiments have contradicted every one of these axioms one way or another. We keep them nonetheless because they seem to apply most of the time, and the occasional violation does not invalidate the general picture. But it is good to keep an eye on their validity and to think about alternative scenarios, especially if they bring us better theories.

Egmont Kakarot-Handtke decides to start afresh with a completely new set of axioms. And instead of choosing axioms with any subjective content, he takes some that are as objective as any axiom could be: four accounting identities and definitions. Yes, you read that right. 1) the definition of national product (income approach); 2) a production function linear in labor; 3) the definition of nominal consumption as real consumption times a price; and 4) the rule that every economic variable this year equals last year's value times one plus its growth rate plus an independent random component. Easy. From this Kakarot-Handtke builds an elaborate theory that demonstrates with a mathematical proof (it is in the title, so it must be true) that capitalism is on the verge of collapsing. To me it looks more like his readers could collapse from hyperventilating over this amazing pile of rubbish.
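
For what it is worth, here is my reading of the four "axioms" in generic notation (the symbols are mine, not his):

```latex
\begin{align*}
Y_t &= W_t L_t + \Pi_t                        && \text{national income: wages plus profits} \\
Q_t &= a_t L_t                                && \text{output linear in labor} \\
C_t &= P_t X_t                                && \text{nominal consumption: price times quantity} \\
Z_t &= Z_{t-1}\,(1 + g_Z + \varepsilon_{Z,t}) && \text{law of motion for every variable } Z
\end{align*}
```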

This bizarre scientist has trademarked his models. I am afraid I cannot go into more details about this work without violating some law (Trademark law? Law of sanity?). So I leave it at this.

Wednesday, January 29, 2014

The best justification for IS-LM?

IS-LM models have always left me puzzled. To me, they are the equivalent of a reduced-form regression with omitted variables and endogeneity issues. With enough hand-waving, you can make any model fit the data. But what I find most bizarre is this strange obsession with justifying IS-LM models from micro-foundations. Somehow, IS-LM is taken as an ultimate truth, and one needs to reverse-engineer it to find what can explain it. The ultimate truth is the data, not the model.

Pascal Michaillat and Emmanuel Saez bring us yet another paper that tries to explain the IS-LM model from some set of micro-foundations. The main ones this time are money-in-the-utility-function and wealth-in-the-utility-function (plus matching frictions on the labor market, which are not objectionable). I find it very hard to believe that anybody would still consider this a valid starting point. Rarely does anybody enjoy simply having money; people like money because they can buy things with it, things that are already in the utility function, or because it facilitates transactions, something that is easy to model. The same applies to wealth. True, some people may be obsessed with getting richer just for being rich, but everyone else likes wealth for what it brings in future consumption for themselves and their heirs, and for the security it offers in the face of future shocks. All this is easily modelled in standard frameworks.
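
For concreteness, the kind of period utility being criticized looks something like this (my generic notation, not necessarily the paper's), with real balances and wealth entering utility directly:

```latex
u\!\left(c_t,\; \frac{M_t}{P_t},\; a_t\right)
```

The complaint is precisely that the real balances \(M_t/P_t\) and the wealth \(a_t\) should matter only through the transactions and future consumption they enable, not as arguments of utility in their own right.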

It seems to me this paper is a serious step back. Macroeconomists try to understand why there are frictions on markets, so that one can better determine the impact of policy on those markets. Simply sweeping everything into the utility function, where in addition one has a lot of freedom in choosing its properties, does not help us in any way. And it is wrong, because it is again some sort of reduced form that is not immune to policy changes. Suppose the economic environment becomes more uncertain. Are we now supposed to say that households suddenly like wealth more? They could also like wealth more because of changes in estate taxation or because of longer lifetimes, and these imply very different policy responses in better fleshed-out models.

I just do not get it. Maybe some IS-LM fanboys can enlighten me.

Friday, January 17, 2014

Are NBA coaches behavioral or neoclassical?

Sunk costs do not matter once spent. Yet we just cannot help thinking that if we already paid so much for something, we had better use it, even if it is inferior to a cheaper alternative. With this reasoning, we deviate from neoclassical theory into behavioral theory. Such attitudes are not well documented, and it is not obvious how one would put together a dataset to study attitudes towards sunk costs.

Daniel Leeds, Michael Leeds and Akira Motomura found a way, and it was in plain sight. Professional sports teams sometimes commit considerable resources to recruit players, and a substantial amount can be considered sunk, as it takes the form of a signing bonus, guaranteed pay, or the use of an early draft pick. A neoclassical theorist would say that this sunk cost only expands the coach's decision set; who actually plays on the team should depend only on the players' current performance. This study shows that NBA coaches, at least, do follow this neoclassical thinking and are not more likely to let under-performing young players stay on the team if they were drafted early. Indeed, the data focus on players in their first five NBA seasons, when they all have a uniform contract, so only draft order should matter. However, there could have been a perfectly neoclassical justification for a bias on the part of the coaches: some players were drafted early because they have potential, and that potential develops with playing time. If there is a puzzle, it is thus rather why early draftees get so little playing time.

Monday, December 23, 2013

Soviet general equilibrium theory

When we think about a social planner that maximizes welfare by assigning optimal allocations without an explicit price system, we are really describing a Soviet economy. History has shown that this utopia does not quite work out, for a variety of reasons. Yet Soviet economies followed this doctrine, and their governments must have acted on principles that came from somewhere: what should one allocate where, how should allocations change with exogenous factors, and so on. Russia actually has a rich history of economic theoreticians who worked out models to guide the policy makers, who liked to think of themselves as technocrats. These theoreticians were mostly mathematicians working on various optimization techniques.

Ivan Boldyrev and Olessia Kirtchik describe the life of Victor Polterovich, who extended Walrasian theory to non-market economies in the 1970s and was the only active Soviet economist with visibility in the West during that period: he has an Econometrica in 1983 and another in 1993, a few articles in the Journal of Mathematical Economics in between (see his page on IDEAS), and is a fellow of the Econometric Society. While Polterovich, like many others, started his academic career with Marxist planning theories, his move to general equilibrium theory may seem puzzling. Indeed, the welfare theorems have often been touted as a victory for the market economy, and Polterovich would certainly have been ill-advised to promote a market economy.

The paper is largely based on interviews with Polterovich that reveal interesting anecdotes, such as the unusual history of his first Econometrica and how some of his most important results never got translated. The other Soviet economists did not go through the trouble of integrating with the international research community, and I am sure there are still interesting results that are ignored by the wider general equilibrium theory community. Polterovich came to general equilibrium theory by realizing that one needs at least as many instruments as objectives to manage an economy optimally. That did not seem feasible to him, hence his interest in decentralization. In his early models, agents interact, possibly forming coalitions. Keep in mind that to Soviets, agents were not individuals but political entities or firms. Later, price constructs are introduced, and they are helpful in understanding coordination among agents.
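
The counting argument here is the familiar Tinbergen-style one. My gloss, not the paper's notation:

```latex
% n policy targets y linked to m instruments x through a linear system:
y = A x, \qquad y \in \mathbb{R}^n, \; x \in \mathbb{R}^m
% Hitting an arbitrary target vector y requires A to have rank n,
% which is impossible unless m >= n: at least as many instruments as objectives.
```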

Thursday, December 12, 2013

To discount or not to discount?

In Economics, it is standard practice to discount future periods and generations. This is done throughout the fields of economics, even in the valuation of future benefits from nature, despite objections from biologists. Besides, we would not know how to solve our intertemporal models without discounting, unless one assumes a finite number of generations.

But this was not always so. As Pedro Garcia Duarte points out, Cambridge (UK) in the 1930s was lobbying against discounting. Surprisingly, Frank Ramsey (of the Ramsey model) was part of this faction, following his mentor Pigou. Their reasoning was purely ethical: future generations should be valued the same as the current one. But Ramsey pioneered an intertemporal model with an infinite horizon, so how did he solve it, you might ask. Here is the trick: he assumed there is a finite maximum utility and a finite maximum production, called bliss, and minimized the deviation from it. A cheap trick, as this is essentially looking at infinity minus infinity. Garcia Duarte also explains the first intertemporal models and how discounting was either ignored or not viewed as a technical necessity. It is only in the mid-thirties that arguments about risk and impatience start appearing, and in the 1960s that work on the neo-classical growth model established discounting as an essential ingredient of any intertemporal theory.
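
In modern notation, the device looks something like this (my rendering, not Garcia Duarte's):

```latex
% The undiscounted utilitarian objective diverges whenever U(c_t) stays bounded away from zero:
\max \int_0^\infty U(c_t)\, dt
% Ramsey's alternative: let B = \sup U be "bliss" and minimize the shortfall from it,
% which can converge if consumption approaches bliss fast enough:
\min \int_0^\infty \bigl(B - U(c_t)\bigr)\, dt
```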

Wednesday, December 11, 2013

Meta-analysis of the elasticity of intertemporal substitution

The elasticity of intertemporal substitution is one of the most estimated parameters in Economics. Why is it estimated over and over again? Because some results are positive, some are negative and some are zero. To have a clearer idea of what its true value is, we have to keep estimating it. However, the econometricians also need to get their results published, and the publishing tournament has not only an impact on which results get published but also on which ones the econometricians submit for publication.

Tomáš Havránek performs a meta-analysis of estimates of the elasticity of intertemporal substitution. That is, he gathers 169 studies and looks at their 2735 estimates. He finds significant under-reporting of results close to zero or negative, because of this publication bias. While the published mean is 0.5, the true mean should be somewhere between 0.3 and 0.4. Negative results make little sense, but they can happen with some draws of the data. If editors and referees systematically discard such results, while positive ones get a pass no matter how large they are, we have a bias. But given the distribution of published estimates, and knowing this bias, one can infer the full distribution, and hence Havránek's new estimates.
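
To see the mechanism, here is a toy simulation of selective publication (my own sketch, not Havránek's actual procedure, with made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical true EIS and sampling noise across studies
true_mean, se = 0.35, 0.4
estimates = rng.normal(true_mean, se, size=100_000)

# Assume negative or near-zero estimates only get published 20% of the time,
# while comfortably positive ones always get through.
publish_prob = np.where(estimates > 0.1, 1.0, 0.2)
published = estimates[rng.random(estimates.size) < publish_prob]

print(f"mean of all estimates:       {estimates.mean():.3f}")   # close to the truth
print(f"mean of published estimates: {published.mean():.3f}")   # biased upward
```

Knowing the selection rule, one can run this logic in reverse: from the truncated distribution of published estimates, back out the mean of the full distribution.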

Tuesday, November 12, 2013

The price of long-run risk

In dynamic stochastic models, standard utility function specifications imply that the curvature of the utility function directly determines both risk aversion and the elasticity of intertemporal substitution. When calibrating this, modelers tend to wave their hands a bit too much, as they focus more on one than the other. In addition, their calibration seems to be immune to changes in data frequency. Those who are careful about this use Epstein-Zin preferences, which disentangle risk aversion and the elasticity of intertemporal substitution. They think they have then done all they could for a proper calibration.

Well, not quite. Larry Epstein, Emmanuel Farhi and Tomasz Strzalecki show there is a third dimension in play: the temporal resolution of long-run risk. The interaction of risk aversion and elasticity determines whether economic agents prefer early or late resolution of risk. This matters: long-run risk is priced by markets differently from short-run risk, typically higher, because people are willing to pay to know uncertain outcomes earlier. But we do not know how much so far. An opportunity for additional research.
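
For reference, the textbook Epstein-Zin recursion (a standard formulation, not necessarily the paper's), with elasticity of intertemporal substitution \(1/\rho\) and relative risk aversion \(\gamma\):

```latex
V_t = \Bigl[ (1-\beta)\, c_t^{1-\rho}
      + \beta \bigl( \mathbb{E}_t\, V_{t+1}^{1-\gamma} \bigr)^{\frac{1-\rho}{1-\gamma}} \Bigr]^{\frac{1}{1-\rho}}
```

The agent prefers early resolution of risk when \(\gamma > \rho\), late resolution when \(\gamma < \rho\), and is indifferent in the expected-utility case \(\gamma = \rho\).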

Wednesday, November 6, 2013

Are wages posted or bargained?

When modeling the labor market, we tend to postulate that wages are either posted by employers or negotiated, typically by Nash bargaining. This is especially true of search and matching models, which often study business cycles. Results depend to some degree on this assumption, so it is a good idea to check against the empirical evidence how wages are determined in the matching process.

Hanna Brenzel, Hermann Gartner and Claus Schnabel use employer data from Germany and find that it is a mixed bag. But how wages are set is not random. To quote from their abstract:
Wage posting dominates in the public sector, in larger firms, in firms covered by collective agreements, and in part-time and fixed-term contracts. Job-seekers who are unemployed, out of the labor force or just finished their apprenticeship are also less likely to get a chance of negotiating. Wage bargaining is more likely for more-educated applicants and in jobs with special requirements as well as in tight regional labor markets.
This implies in particular that the mix may change over the business cycle (as labor-market tightness changes), and that models that assume that one must be unemployed to apply for jobs and then get Nash bargaining are inconsistent with the data, at least in Germany.

Thursday, September 26, 2013

The Lucas Critique, DSGE models and the Phillips Curve

The Lucas Critique of 1976 has been a major motivation behind the building of RBC models, the follow-up DSGE models, and the structural estimation of these models. The idea was that reduced-form elasticities were not immune to policy variations, yet those elasticities were being estimated precisely to determine the impact of policy. The resulting bias reduced the trust in the Phillips curve. The structural models, however, had and still have at their core supposedly invariant parameters that describe fundamentals of the economy. But it turns out that some of those are not invariant over time. I recently discussed the case of the labor income share (here). And misspecification can be problematic in the estimation of such structural models, with possibly important consequences.

Samuel Hurtado tries to sort that out by including parameter shifts in the estimation of a standard DSGE model, but misspecifying it in such a way that it ignores this shift. Using data from the 1970s, he then shows that the policy responses from his model look surprisingly close to those of a reduced-form Phillips curve. In other words, it seems the DSGE model without the parameter shift is just as misspecified as the old Phillips curve. What this means is that one either has to include ad hoc parameter shifts or needs to go even deeper into the fundamentals to understand why and how these parameter shifts occur. The latter gives an even stronger meaning to the Lucas Critique.

Friday, September 20, 2013

Empirics under uncertain beliefs are difficult

The typical way we model uncertainty is by assuming economic agents know the stochastic process they are facing, and we call this uncertainty. That is wrong. It should be called risk, as the probabilities are known. Uncertainty is when those probabilities are unknown. That does not mean the agent is not rational; it is simply that the information set is smaller than what we typically assume.

Nabil Al-Najjar and Jonathan Weinstein point out that an uncertain agent trying to smooth consumption may look excessively precautionary to someone who thinks he knows the probabilities. They frame this within a Bayesian framework, where beliefs, including subjective probabilities, are updated with incoming information. This makes it very difficult to do any empirical work, including measuring time preference or risk aversion.
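
A minimal sketch of this Bayesian framing (my illustration, not the paper's model): the agent does not know the success probability p of a gamble, but holds a prior over p and updates it with each observation.

```python
# Beta(1,1) is a uniform prior over the unknown probability p
a, b = 1.0, 1.0
observations = [1, 0, 1, 1, 0, 1]   # hypothetical sequence of outcomes

for y in observations:
    a, b = a + y, b + (1 - y)       # conjugate update: Beta(a+y, b+1-y)

post_mean = a / (a + b)
post_var = a * b / ((a + b) ** 2 * (a + b + 1))
print(f"posterior mean of p: {post_mean:.3f}")   # best point estimate so far
print(f"posterior variance:  {post_var:.4f}")    # residual uncertainty over p itself
```

The residual variance over p itself, absent under known probabilities, is what can make the agent look excessively precautionary to an outside observer.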

I am surprised, though, that Al-Najjar and Weinstein misunderstand rational expectations. They claim an uncertain agent does not have rational expectations if beliefs over probabilities do not coincide with observed frequencies. This need not be the case if the econometrician has information the agent did not have at the time of the decision. If the agent uses all the available information, it is still rational expectations. There may be so little information that he cannot pin down the probabilities precisely; that is not the same as the perfect foresight over probabilities that Al-Najjar and Weinstein seem to require. In any case, this is more about semantics than results.

Tuesday, September 3, 2013

Superstitions and markets

The game of heads or tails with a coin toss is universally recognized as a game where the odds of each outcome are exactly 50%. In monetary terms and in expectation, nothing can be gained from this gamble, and in utility terms (again in expectation) one can only lose. Yet people keep playing it in the hope of a gain.
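
The logic is just Jensen's inequality: for a fair bet \(x\) with \(\mathbb{E}[x] = 0\) and concave utility \(u\),

```latex
\mathbb{E}\left[u(w + x)\right] \;\le\; u\!\left(w + \mathbb{E}[x]\right) \;=\; u(w)
```

with strict inequality when \(u\) is strictly concave: a fair coin toss can only lower the expected utility of a risk-averse player.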

Silvia Bou, Jordi Brandts, Magda Cayón and Pablo Guillén devise a laboratory experiment where, after an initial phase of five coin-toss guesses, some students are asked to bet on who will get the most guesses right in a second round of tosses. The subtlety of the experiment is that, by default, the students are assigned the worst guesser of the first phase, and switching to another one is expensive. Yet almost all switched. This means they thought a lucky streak of right guesses in the first phase would continue in the second. And these were finance students, who should really know better.

Wednesday, August 28, 2013

Are there biases from monetary rewards in experimental economics?

It has become very fashionable to run economic experiments in the laboratory. Typically, undergraduate students are lured into the lab with some monetary reward. A longstanding question has been whether this leads to a selection bias that makes the experimental results impossible to generalize. One paper I previously discussed (here) finds no bias within students; a second (here) worries that students in developed economies are not representative at all.

Johannes Abeler and Daniele Nosenzo add another bit of evidence. They invited students to an experiment, either offering money or not, and either appealing to the usefulness of research or not. First, the authors observe that appealing to money is much more successful than appealing to research: it triples the number of respondents. Given that participants care more about money, we might then expect them to have different characteristics from the others. That turns out not to be the case. Thus, no bias from monetary rewards, at least within the student population.

Wednesday, August 14, 2013

Strategic self-ignorance

There are times when we kind of feel we have done something stupid and do not want to know the result: the grade of a test, say, or how a recently bought stock is faring. Such situations are linked to regret aversion, where you consciously block available information after a decision has been taken. But what about blocking readily available information before you take a decision?

Linda Thunström, Jonas Nordström, Jason F. Shogren, Mariah Ehmke and Klaas van 't Veld look at the case of temptation, where you consciously block out information about the consequences of your action. Specifically, think about a delicious but calorie-laden meal. You kind of know it is bad for your health, but you decide not to look at the calorie count, even though it is available. And that is what they had people do in an experiment where they invited participants for lunch with two options: a low-calorie and a high-calorie meal. It was, however, not obvious which one was high-calorie, and participants could look it up. 58% chose not to, and they ate significantly more calories. How do the self-ignorant differ from the control group? It should not surprise anyone that they smoke more, have lower incomes, know less about nutrition and are more impatient. More interesting is that they are over-represented among males, the educated and older people. Having a higher body-mass index leads to less self-ignorance. I wonder whether some of those results are endogenous to the setup of the experiment, where all those questions about nutrition are asked, which may lead people to become more conscious about their weight, especially the less educated ones.

PS: too bad the equations in the paper are unreadable. Never use the Word equation editor...

Friday, June 28, 2013

Raising the bar of falsifiability in Economics

Economists do not dismiss theories easily. Although Popper taught us that once a falsifiable theory is rejected by the data we should move on to better theories, it takes a lot of rejections for economists to move on. This may have two reasons: first, we all know there can be serious issues with the data, as we almost never have clean experiments to draw from. We are thus more tolerant of theories. Second, we tend to think that if a theory is rejected, we also need to propose a new one that is consistent with the data. That is quite a challenge.

Ronen Gradwohl and Eran Shmaya build on this second argument to amend Popper's falsifiability criterion by adding a new requirement: that each rejection by the data be accompanied by a short proof of the inconsistency. If I understand this right, it would not be sufficient to show that the theory predicts, say, a positive relation between two variables while the data finds a negative one; one also needs a short proof that would convince a court that the data is indeed identifying the right relation and that it is relevant for the theory. And this needs to be short because courts (and the scientific community) are busy. It seems we are doing it all wrong in Economics, as our arguments are excessively long, and getting longer. This is at least in part because we require not just a short proof but an extensive, complete one, and even then we are still not convinced. Are we overdoing it? Possibly; at the least, the length and complexity of papers in Economics are becoming too much.

Monday, June 24, 2013

homo socialis

Everyone is familiar with homo œconomicus, the greedy economic agent that brings an economy to its most efficient allocation under perfect circumstances. But circumstances are less than perfect (externalities, imperfect competition, lack of commitment, asymmetric information, etc.) and Adam Smith's invisible hand needs a little help from some authority. Through regulation, taxation, subsidies and punishment, that authority can try to get closer to the first-best allocation, but at a cost.

According to Dirk Helbing, this cost is now overwhelming, because in current societies top-down management of an economy is not computable anymore. One should rather find a bottom-up approach, following the craze about Web2.0 and social media. Thus enters homo socialis, an economic agent who is very aware of all the ills of unfettered markets. If this sounds like one of those revolutionary solutions that would end world hunger, it is. It even comes with a new type of money, a must for these types of exercises.

So, how does this work? Homo socialis is an economic agent with other-regarding preferences. He needs institutions that allow him to express such preferences instead of reverting to the greedy homo œconomicus. Hence the institution of "qualified money" that rewards good behavior by this friendly and altruistic market participant by giving him "reputation." But if he is that altruistic, why does he need such rewards? That is not clear. And who gives them? Is there any budget constraint here? It would really help to formalize the author's ideas a little, because it is all quite confusing. For example, the value of qualified money depends on its history. In other words, every single banknote may have a different value, depending on the context in which it was used. How is that simplifying the problem of complexity?

Helbing gives as an example the management of traffic lights in a city, a rather bizarre choice. In the homo œconomicus scenario, an authority sets traffic light patterns and does not adapt them when queues become too long somewhere. In the homo socialis scenario, this adaptation happens, presumably through feedback from car drivers. Why the restriction in the first scenario? In fact, cities do have feedback rules in place (notice the cameras along the roads?) without the drivers needing to do anything. But foremost, why such an example? It is unrelated to the question at hand. The argument that the computation would be too complex for a central planner fails because the planner at least has a complete picture. Individual car drivers suffer from a lot of asymmetric information when taking decisions, even altruistic ones. Note also that the example does not use the crazy qualified money scheme.

What a confusing and confused paper. You would think this was a first draft by someone working in the area for the first time. But no: apart from the methodological silliness and conceptual errors, this paper is actually quite well written, and the literature well researched, including 22 self-citations.

Tuesday, June 4, 2013

Politics does not make one happy

Humans are social animals and they benefit from interactions with others, most of the time. Personally I enjoy spending time with family, colleagues and neighbors, although I get really frustrated with anything related to politics, as may have transpired on this blog. Does this very anecdotal evidence generalize?

Stephan Humpert takes a German survey and checks how membership in various social groups affects life satisfaction. It is no surprise that there are stark gender differences, although not necessarily in the way I would have thought. For example, men are particularly enthused by hobby clubs. Women particularly enjoy parents' associations and citizens' initiatives, and also sports societies, of which 26% are members. So much for the image of women being uninterested in sports. However, they dislike being in trade unions. Well, union membership may follow from peer pressure at work, so that may be understandable. Politics is rather neutral in terms of satisfaction. I guess Germans do not get as frustrated as I do, at least those involved in politics.

Monday, May 27, 2013

Why are children not the focus of our preferences?

In biology, everything is about the survival and dominance of the species. It would then be logical that we only care about our offspring and their potential to have further offspring. But somehow, evolution also made us care about ourselves; in fact, we care a lot about ourselves. And lately we also care a fair amount about other species, but I guess modern culture is beyond the survival motive of evolution.

Luis Rayo and Arthur Robson find a good reason why we care about ourselves, our consumption and our leisure beyond fitness: ignorance. Specifically, think of the relationship between nature and an individual as that of a principal and an agent. The principal can choose the preferences of the individual, but cannot change them in light of transient circumstances. The agent is oblivious to what happens. The preferences then need to include goods that are not the primary focus of nature, for example means to ultimate ends. This is like placing money in the utility function: we do not care about money per se, but about what we can buy with it, and liking money leads us to save more. In the case of evolution, liking to sit in the sun makes us produce a sufficient amount of vitamin D and makes us fitter for survival and procreation. But now that we know the effect of the sun and how important vitamin D is, we tend to sit in the sun more than nature would like. We are too informed for our preferences and should only care about our children.

Monday, May 13, 2013

Immediate rewards prompt dishonest behavior

How can one entice people to behave more honestly? An economy where people trust each other works much better, and deals come about more easily. But if no one trusts anyone, one has to post collateral for every transaction, and contracts need to specify everything. One could also find ways to elicit more honest behavior from people, for example by transacting in a specific context.

Bradley Ruffle and Yossef Tobol find that providing rewards later prompts more honest behavior. This conclusion comes from a neat experiment that must have taken a lot of arm-twisting to set up: Israeli soldiers were allowed to roll a die, and for every point scored they could go home half an hour earlier on the day of their next release. The outcome could be observed by the experimenters, but not by the soldiers' superiors. Right after returning from the previous release (on Sunday), the soldiers are much more honest than when the next release gets closer. This should not be surprising: as a reward further in the future is discounted more heavily, one is naturally less tempted to be dishonest about it. The real question is whether the effect the authors identify here is stronger than discounting, factoring in that discounting may be hyperbolic. To be honest, they look at the willingness to pay for early release, but it is negligible and does not seem to corroborate the experimental results.
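
For reference, the standard discounting schemes at play (a generic illustration, not the paper's model): exponential discounting weights a reward at delay \(t\) by \(D(t) = \delta^t\), while quasi-hyperbolic (beta-delta) discounting adds an extra penalty on anything that is not immediate:

```latex
D(0) = 1, \qquad D(t) = \beta\, \delta^t \;\; (t \ge 1), \qquad 0 < \beta < 1, \; 0 < \delta < 1
```

With \(\beta < 1\) the premium on immediacy is concentrated at the present, which is why one must check whether the observed honesty pattern exceeds what discounting alone, hyperbolic or not, would predict.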

Wednesday, May 8, 2013

On the virtues of honest apologies

Apologizing can be very hard, especially when your pride is hurt. And sometimes one opts for a fake apology, not really meaning it. But that does not really fool the apologizee, does it? Is the latter then unlikely to forgive? Of course, an economist has an answer to this question.

Verena Utikal performs a laboratory experiment wherein the dictator game is manipulated so that outcomes are sometimes out of the dictator's control, while the dictator can send a message. The receiver can then act on the outcome and the message, but without knowing whether the outcome was the dictator's choice. Dictators do send different messages depending on what happened, and receivers do detect lying and punish it. If you consider that there is a mental cost to lying, there does not seem to be much point in providing fake apologies. Yet people do it. And consider that in this experiment all players were anonymous and did not see each other. Imagine the real world, where they know and face each other: the cost of lying and faking must be even higher. Yet it still happens.

Tuesday, March 26, 2013

Finger length and altruism

Social scientists interested in the biological origins of human behavior have a strange obsession with finger length, specifically with the ratio of the second to the fourth digit. Indeed, this ratio is an indicator of exposure to certain hormones as an embryo, and any relation between this ratio and behavioral traits is a good hint that some behavioral variation is present at birth. I have previously reported on entrepreneurship and risk taking in this regard; now it is the turn of altruism.

Pablo Brañas-Garza, Jaromír Kovarík and Levent Neyse find that people with particularly high or low ratios are less altruistic than the norm. So it seems that maximizing altruism is a delicate biological process, and that altruism is at least in part determined before birth. I am not sure where this paper leads us next.