Wednesday, March 23, 2011

Modelling without theory

In Economics, we have adopted the scientific method much like other sciences. As we teach our students, it consists of the following steps:
  1. Observe regularities in the data.
  2. Formulate a theory.
  3. Generate predictions from the theory (hypotheses).
  4. Test your theory (is it consistent with data?)
In the context of Economics, the goal of the procedure is not only to explain why regularities in the data happened, but also to build a theory that is useful in predicting the consequences of particular policies or institutional designs.

David Hendry just published a paper about the scientific method in Economics that appears to fly in the face of what I just described. Here is an attempt to summarize his stand, and I apologize for quoting quite liberally:
  1. Specify the object for modeling, usually based on a prior theoretical analysis in Economics. An example of such an object is y=f(z).
  2. Define the target for modeling by choosing the variables to analyze, y and z, again usually based on prior theory. This is about deriving the data-generating process of the variables of interest, or fitting an equation with some statistical procedure.
  3. Embed that target in a general unrestricted model (GUM), to attenuate the unrealistic assumptions that the initial theory is correct and complete. The idea is to add other variables, lags, dummies, shift variables and functional forms to improve the empirical accuracy of the initial model.
  4. Search for the simplest acceptable representation of the information in that GUM. Or, now that the model has become huge (and may contain more variables than data points), let us get rid of some of them without losing too much in accuracy.
  5. Rigorously evaluate the final selection: (a) by going outside the initial GUM in step three, using standard mis-specification tests for the ‘goodness’ of its specification; (b) applying tests not used during the selection process; and (c) by testing the underlying theory in terms of which of its features remained significant after selection.
In other words, this amounts to taking a linearized version of some theory, throwing data at it whether theoretically relevant or not, then massaging it until it fits the data.
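
Concretely, the kind of procedure being described boils down to something like the following toy sketch: start from a deliberately over-sized regression (the GUM) and prune regressors as long as a penalized fit criterion keeps improving. This is my own crude illustration with made-up variables, not Hendry's actual Autometrics algorithm.

```python
# Toy backward elimination from an over-sized "GUM" using BIC (illustration only).
import numpy as np

def bic(y, X):
    """BIC of an OLS fit of y on the columns of X (constant included in X)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    return n * np.log(sigma2) + k * np.log(n)

def general_to_specific(y, X, names):
    keep = list(range(X.shape[1]))              # start from the full GUM
    best = bic(y, X[:, keep])
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [c for c in keep if c != j]
            score = bic(y, X[:, trial])
            if score < best:                    # dropping j simplifies at no cost in fit
                best, keep, improved = score, trial, True
    return [names[c] for c in keep]

# Fake data: ten candidate regressors, of which only x1 and x2 actually matter.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n)] + [rng.standard_normal(n) for _ in range(10)])
y = 1.0 + 2.0 * X[:, 1] - 1.5 * X[:, 2] + rng.standard_normal(n)
names = ["const"] + [f"x{i}" for i in range(1, 11)]
print(general_to_specific(y, X, names))         # typically ['const', 'x1', 'x2']
```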

Apart from the fact that this is really the blueprint for an automated data-mining exercise that is not geared in any way toward answering a particular policy question, this procedure disregards not only the scientific method, but also Occam's Razor and the Lucas Critique. What use is it to learn that the CPI follows a polynomial of degree five with three lags on exports of cabbage, the number of sunny days, 25 other variables and three structural breaks (not an actual example used by Hendry, but it could be)? If you want to make some very short-term forecasts, that may be accurate, and this method is abundantly used in the City or on Wall Street by neural-network "experts." But when it comes to advising policymakers, you need some Economics, and by that I mean economic theory, to explain why economic agents behave the way they do and what an intervention would lead to.

The scientific method starts with the observation of the data. Hendry dismisses this with a sleight of hand, stating that stylized facts are "an oxymoron in the non-constant world of economic data." What if there are constants in economic data? In fact there are plenty, and this is what theories are trying to explain. Has Hendry never observed something in his surroundings that he then tried to explain? Or does he really spend his days feeding linear equations into his computer to see what it can come up with from his database?

Such papers, especially from people who command the respect Hendry enjoys in the UK, deeply upset me. To top it off, there are 33 self-citations.

22 comments:

VIlfredo said...

I am completely with you on this, EL. Hendry and Pesaran are the reason for the downfall of macroeconomics at Oxbridge, and the relative weakness of macro in the UK. They are immensely influential, yet so wrong. I have found it very frustrating to argue with Pesaran, who claims that this kind of model is structural and is the best for giving policy advice, whereas anything that uses deep theory is completely useless. They are statisticians, not economists.

Anonymous said...

EL, I would be careful with your language. D**a m**ing is a term that should not be heard in scientific circles.

Kansan said...

Hendry starts with something from theory, give him at least credit for that...

But if he were taking theory seriously, he would use Bayesian methods, not the junk he proposes.

conchis said...

Anonymous, 'data mining' is a perfectly respectable field of study. It's not data mining's fault that you don't understand it. (Perhaps start with the wikipedia entry?)

conchis said...

This post strikes me as somewhat overblown. In part I think this is because you're misunderstanding Hendry's approach, but even if I'm wrong about that, it still seems to me like there's something useful here, and getting all upset about violating the precepts of "the scientific method" isn't a very constructive method of engagement. There are obvious dangers with a Hendry-style approach. But there are obvious dangers with 'traditional' methods as well. The challenge is to find the right balance between useful theory and overly restrictive theory, and while I'm not sure Hendry strikes the right balance, the core motivating idea – that most empirical models are more restrictive than theory actually requires them to be – seems fundamentally sound.

A couple of responses to specific accusations.

1. Hendry's approach is not (or at least need not be) theory free.

a) It seems to me that Hendry's approach is really just an extreme version of the idea that we should avoid making overly restrictive modeling assumptions when there's no strong theoretical reason to do so. Given that the 'tests' we apply to our theories are typically only valid on the assumption that we haven't mis-specified our models, this seems like a reasonable concern. It's for this reason that things like allowing for potential non-linearity via splines and so forth have become well accepted parts of an empirical toolkit. And if splines can fall within the bounds of "the scientific method", it seems difficult to argue more extreme relaxations of (often arbitrary) theoretical assumptions must be anathema to it. Whatever Hendry says, you can always restrict your GUM if you have strong reasons to do so. (The argument against doing so is that if you're right, this will be reflected in the model choice anyway, so it's unnecessary).

b) Even if you don't buy any of that, if you're so inclined, you could take Hendry's approach as functioning exclusively within the realm of step 1 of your scientific method. Then it's all about observing regularities in the data, which you can then go on to use to build (or more likely refine) your theory.

2. The approach doesn't ignore Occam's razor. The whole point of the 'simplest acceptable representation' is a formal application of Occam's razor.

In answer to your specific (albeit rhetorical) questions

Q. What if there are constants in economic data?
A. Then that will be reflected in the model ultimately selected.

Q. What use is it to learn that the CPI follows a polynomial of degree five with three lags on exports of cabbage, the number of sunny days, 25 other variables and three structural breaks (not an actual example used by Hendry, but it could)?
A. This seems vanishingly unlikely, especially given that I'm pretty sure you can tune the level of complexity allowed. But if that were actually the 'simplest acceptable representation' of the information, then it would tell you that you're living in a very strange world indeed; and that an inability to predict structural breaks in this world is going to make your life quite difficult. Depending on what those other 25 variables are, it might also tell you a variety of other things, potentially including the fact that your preferred theoretical model going in was pretty useless, and that it might be a good idea to refocus your efforts.

Kansan said...

The real world is complex and theory is a simple abstraction of the real world. Theory gives us structure in order to understand the world.

Hendry does not seem to be willing to make abstractions and prefers throwing all possible data and specifications at the dependent variable to capture the complexity. This has merits in the sense that it uncovers relationships in the data (the stylized facts he seems to hate). But that does not help us in any way in understanding why things happen, only that they may be related. We still need theory.

Anonymous said...

Hendry is advocating blind, automatic model discovery here. There are no priors that would matter in any significant way. While this may be useful in establishing patterns in the data (so as not to call them stylized facts), in no way should this be the end of it. His whole procedure can be summarized as the first step of the scientific method.

That is all good. But Hendry is notorious for claiming that this is the end of it, too. He is ready to give policy advice based on this. And that is fundamentally wrong.

[kept anonymous for reasons that would be obvious had I put my name]

Anonymous said...

Hendry and Pesaran are holding UK macroeconomics in a stranglehold because they are so powerful. It is good to see someone calling them on this.

Anonymous said...

The sad thing is that Hendry will get plenty of abstract views and downloads for this paper...

Unknown said...

So much written and so many misunderstandings.

Not least "Hendry and Pesaran are holding UK macroeconomics in a stranglehold because they are so powerful". If this were so, I'd have walked into a top top job in the UK after my PhD under one of these two famous econometricians, and my papers would be easily published in the top UK journals, and the ESRC would award me the grants I apply for.

Since none of these things happen, I think it's fairly blindingly obvious (to quote a favourite phrase of Hendry) that the quoted statement is not true!

To respond to the minor points: Kansan, you're right, Hendry does indeed start with theory. But the methods he proposes are not junk simply because they are not Bayesian. The whole point of the empirical part of the scientific method is to let the data speak most freely; why get in the way of that with your priors, and if they are uninformative priors, why bother with them?

To dig into the main stuff, EL, you're way, way off the mark, and you make such old, oft-repeated and wrong criticisms that it's a little sad. Hendry's method won't factor in the number of sunny days, exports of cabbage or any of the other farcical things you suggest, because it adds functions of the variables already included in the GUM - and the GUM is based on ALL economic theory if properly done, as Hendry would insist.

The fact that the latest variant of general-to-specific modelling augments the old path-specific search from the GUM with some extra steps involving transformations of variables within the dataset, along with some deterministic variables, is nothing to throw a thrombo like this over.

Economic theory is the best place to start - all economic theory regarding the thing you're interested in. And Hendry's procedure starts there. But we start there realising that theory is only a simplified version of reality by its nature, and likely flawed. So we include all proposed theories, not just our pet one.

Reality is often much more non-linear than we can contend with, and hence it won't do any harm to investigate precisely that. If all the non-linear terms (of variables already in the dataset included based on economic theory) are not important, they will be omitted. If they are important, we should ask why they are important and inquire as to whether our functional forms in our economic theories are really appropriate.

If it ignores the Lucas Critique, it does so on purpose, because Hendry's method encompasses it. If you are going to use your model for policy purposes, you must ensure super exogeneity holds (i.e. the model parameters are invariant to large policy changes, robust to changes in the structural parameters underlying them).
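
To give a concrete, if crude, picture of the invariance idea: estimate the same equation before and after a known policy change and check whether the coefficients moved, here with a Chow-type F test on invented data. This is only a sketch, not the formal super-exogeneity tests proposed by Hendry and co-authors.

```python
# Chow-type check that the same equation has stable coefficients across a policy change.
import numpy as np
from scipy import stats

def rss(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

def chow_test(y, X, break_idx):
    """F statistic and p-value for equal coefficients before/after break_idx."""
    n, k = X.shape
    pooled = rss(y, X)
    split = rss(y[:break_idx], X[:break_idx]) + rss(y[break_idx:], X[break_idx:])
    F = ((pooled - split) / k) / (split / (n - 2 * k))
    return F, stats.f.sf(F, k, n - 2 * k)

# Invented data with a relationship that is stable across a policy change at t = 100.
rng = np.random.default_rng(1)
n, break_idx = 200, 100
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = 0.5 + 1.2 * x + rng.standard_normal(n)
F, p = chow_test(y, X, break_idx)
print(f"F = {F:.2f}, p = {p:.2f}")   # a large p-value is consistent with invariance
```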

Hence the method is perfectly useful for policy questions, because not only does it take into account all theory (if done properly), it also allows for wrong functional forms and, as far as possible, lets the data speak for themselves rather than be contorted by any particular theory. It takes into account the possibility of underlying structural change, and hence could not be better suited to policy analysis.

I find it sad that people like EL are still unable to see how damn useful this is. Is it because they feel threatened by the fact their pet theory likely won't be supported?

Anonymous said...

Isn't there a massive abuse of degrees of freedom in all of this? We are talking about macro data here, so at most 200 data points or so, and if you look at lots of variables with lags and various functional forms, this must eat up a lot of degrees of freedom even if many of those variables do not end up in the final equation.

Unknown said...

Why does everyone post anonymously? Scared of something?

March 24 7:56PM, degrees of freedom are taken care of - Hendry, Doornik and Castle have done a lot of research on search methods that can cope with more variables than observations. To give it a simple explanation, you run multiple searches based on overlapping subsets of your candidate explanatory variables, merge the resulting models and search again. It sounds like hocus pocus, but it is better to look at the actual research than to react on gut instinct. It's a solution to exactly the problem you're referring to, where you have only a few observations but loads of possible candidate variables.
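
A toy version of that block idea, just to fix ideas: the overlapping-subset scheme and the keep-if-|t|>2 rule below are my own simplifications, not the actual Hendry/Doornik/Castle algorithm.

```python
# Toy block search: select within overlapping subsets, pool survivors, select again.
import numpy as np

def surviving_columns(y, X, cols, t_crit=2.0):
    """Regress y on X[:, cols] and return the columns whose |t| exceeds t_crit."""
    Z = X[:, cols]
    n, k = Z.shape
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / max(n - k, 1)
    se = np.sqrt(np.diag(sigma2 * np.linalg.pinv(Z.T @ Z)))
    return [c for c, b, s in zip(cols, beta, se) if abs(b / s) > t_crit]

def block_select(y, X, block_size=40):
    p = X.shape[1]
    starts = range(0, p, block_size // 2)                        # overlapping blocks
    blocks = [list(range(s, min(s + block_size, p))) for s in starts]
    pooled = sorted({c for b in blocks for c in surviving_columns(y, X, b)})
    return surviving_columns(y, X, pooled)                       # final pass on the union

# 100 observations but 150 candidate regressors, of which only three matter.
rng = np.random.default_rng(2)
n, p = 100, 150
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 3] - 1.5 * X[:, 70] + X[:, 120] + rng.standard_normal(n)
print(block_select(y, X))      # typically [3, 70, 120], perhaps plus a stray column
```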

One thing I can guarantee you is that David Hendry is a huge believer in theory, and a theory basis for economic investigation. That's what underlies all of this. The 4-step process EL describes right at the top is more like a cycle. The 4th stage may lead you to rethink, and a more thorough investigation of the data will be better at forming the basis for that rethink.

Anonymous said...

The Hendry, Doornik and Castle method may work statistically, because in each subset regression you have fewer variables than data points, but it is a fraud if you consider the number of regressions you run, as their union gives you a negative number of degrees of freedom.

Why the anonymity? Because Hendry may referee my papers.

Unknown said...

And you make the assumption he is that capricious? I think that displays the level of prejudice you have against the guy and why, no matter what anyone says, you will never change that.

Why exactly is running regressions fraud? Why is running more than one regression fraud? The union won't be formed if it still has negative degrees of freedom, naturally, and the process of using subsets continues until something is found - if something can be found. If nothing is found, that's it.

The way people talk about all this is as if one would automatically, machine-like, take whatever results come out of the other end. Automatic model selection is a fantastic tool in helping discover more about the economy and aiding learning. It allows MORE time for doing economics, which is something that should make it a really exciting development for everyone, so I find it all the harder to understand this blind prejudice against it.

Economic Logician said...

James, here are a few links in Wikipedia about the concerns raised here:

Data dredging

Criticism of stepwise regressions

Unknown said...

EL - it's not stepwise for a start - it's a dramatic improvement on it. I'd suggest reading some of the other research published by Hendry and his cohort, particularly Doornik's stuff comparing Autometrics to stepwise (esp. the paper called "Autometrics" in Castle and Shephard's festschrift for Hendry).

On data dredging, haven't read the link and haven't got time right now but will do.

Unknown said...

Quickly on dredging too, or snooping as others call it. Again, Hendry's group is more than aware of it. In particular, they have looked extensively at standard errors, and painstakingly calculate overall sizes and calibrate earlier tests in order to avoid distorting final outputs and minimise the probability of mistakes. Again, reading a few of David's papers will reveal this.
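
The basic arithmetic behind calibrating the overall size: if each of N irrelevant candidates is tested at level alpha, roughly alpha times N of them survive by chance, so the per-test level has to shrink as the candidate set grows. Here is a toy Monte Carlo check of that point - my own illustration, not the calibration actually used in Autometrics.

```python
# How many pure-noise candidates survive a per-variable t-test at level alpha?
import numpy as np
from scipy import stats

def average_spurious_keeps(n_obs=200, n_candidates=100, alpha=0.05, reps=500, seed=3):
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(1 - alpha / 2)          # two-sided critical value
    counts = []
    for _ in range(reps):
        X = rng.standard_normal((n_obs, n_candidates))
        y = rng.standard_normal(n_obs)              # unrelated to every candidate
        # approximate t statistic of each candidate in a univariate regression on y
        t = (X * y[:, None]).sum(axis=0) / np.sqrt((X ** 2).sum(axis=0))
        counts.append((np.abs(t) > z_crit).sum())
    return float(np.mean(counts))

print(average_spurious_keeps(alpha=0.05))    # about 5 noise variables kept out of 100
print(average_spurious_keeps(alpha=0.01))    # about 1 out of 100
print(average_spurious_keeps(alpha=0.001))   # about 0.1 out of 100
```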

As said: Hendry's procedure is not supposed to be used mechanically - but it's an invaluable extra tool for step four of your four points on the procedure. It leaves more time for the other parts - and if something is misleading, then the other three parts will make that clear.

Anonymous said...

Through /= throw. Snark fail.

Unknown said...

jb007 that's plain wrong. A simple reading of any of David's work will help you discover he usually talks of encompassing and a "progressive strategy".

To talk about encompassing shows a clear understanding that current research will be encompassed by future research (the progression in progressive) as more becomes known, and Hendry's methods actively encourage that.

He does not claim the output of Autometrics or PcGets is the end of it; I assert he never has and I doubt he ever will.

I challenge you to point out exactly where he has done. Substantiate your slander.

Economic Logician said...

James,

I had to remove the spam comment you are referring to. For your comment to make sense, here is what it said:

"That is all good. But Hendry is notorious for claiming that this is the end of it, too. He is ready to give policy advice based on this. And that is fundamentally wrong."

Unknown said...

Thanks EL. Remarkable the extent that spammers go to these days. The spammer did manage to summarise quite well what a few of the earlier anonymous commentators would have said though!

HCG said...

This is a request to be more careful with the word “theory”, as failure to distinguish between theory and hypothesis is not just a semantic issue, but rather has consequences. The standard definition in scientific circles is that a theory is a very well tested hypothesis that has wide explanatory and predictive power. A hypothesis is “a tentative assumption made in order to draw out and test its logical or empirical consequences” (Merriam-Webster).

That said, lots of scientists are sloppy with the word, economists among them. It must be admitted that the word “theory” has much more cachet than “hypothesis”, and in fact we have at least one journal that is misnamed: the Journal of Economic Theory should be the “Journal of Economic Hypotheses”, as there is virtually no testing done there. This is not to suggest, of course, that thorough analysis of hypotheses is not useful to our understanding of the world. But imagine telling your colleagues, and the tenure committee, that you have something published in the Journal of Economic Hypotheses. It just doesn’t have a ring to it.

One example of the consequences of failing to distinguish between the two concepts can be seen in the continuing debate over the theory of evolution versus the hypothesis of creation, or as it has been renamed in a marketing ploy, “intelligent design”. Creationists argue that evolution is “just a theory”, and that intelligent design is also a theory, and therefore deserves to be included in the biology curricula of schools. This vacuous misuse of the concept of theory causes much waste of time and money. But it won’t die easily: even law professors do “critical legal theory”, and English profs. write reams on literary theory, critical theory and narrative theory. It is the cachet thing again.

“Theory” is an important word, but let’s not misuse it.