Dealing with microdata is relatively easy: you have plenty of data points and can freely add explanatory variables without running the risk of running out of degrees of freedom. The story is different for macrodata, as series are much shorter and one can quickly eat up degrees of freedom by using lagged variables. The prime example here is the often-abused vector autoregression (VAR), which keeps getting larger, and faster than new data points accumulate. The latest fad is to run regressions with time-varying parameters, including in VARs, which is deadly for degrees of freedom, as it is roughly equivalent to adding a boatload of dummy variables to the mix. Hence the need to be more parsimonious.
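To see how quickly the arithmetic turns against you, here is a back-of-the-envelope sketch (not from the paper, with hypothetical sample sizes and variable counts) of how many coefficients a VAR requires per equation as variables and lags are added:

```python
# Illustrative arithmetic only: how fast a VAR eats degrees of freedom.
# The sample length and variable counts below are hypothetical choices.

def var_parameters(n_vars: int, n_lags: int) -> int:
    """Coefficients per equation: intercept plus n_vars * n_lags lagged terms."""
    return 1 + n_vars * n_lags

T = 200  # e.g., 50 years of quarterly data

for n_vars in (3, 5, 8):
    for n_lags in (2, 4):
        k = var_parameters(n_vars, n_lags)
        dof = T - n_lags - k  # observations left per equation after estimation
        print(f"VAR({n_lags}) with {n_vars} variables: "
              f"{k} coefficients per equation, {dof} degrees of freedom left")
```

With time-varying parameters, that coefficient count is multiplied by (roughly) the number of periods over which the coefficients are allowed to drift, which is why it resembles adding a boatload of dummies.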
How parsimonious should one be? Joshua Chan, Gary Koop, Roberto Leon-Gonzalez and Rodney Strachan think the solution lies in time-varying parsimony. The idea is that sometimes one needs a more complex model, and sometimes a few variables are sufficient. While this spares degrees of freedom when one can do with few variables, the gain on paper is lost, and probably more than lost, through the implicit degrees of freedom used in selecting the right model. This is an old problem that is swept under the rug in many empirical applications, but in this case it becomes even more apparent because so many parameters and models are involved.
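The implicit cost of letting the model itself change over time can be made concrete with a simple count (again a hypothetical illustration, not the authors' calculation): if the set of included variables may differ every period, the number of candidate specifications the procedure is implicitly searching over explodes.

```python
# Hypothetical illustration of the "implicit degrees of freedom" point:
# each period's model looks parsimonious, but the space of model paths is huge.

n_candidates = 8   # potential explanatory variables (hypothetical)
T = 200            # sample length (hypothetical, quarterly)

# Models available in any single period: any subset of the candidates.
models_per_period = 2 ** n_candidates

# If the chosen subset may differ period by period, the number of possible
# model paths over the sample is astronomical.
model_paths = models_per_period ** T
print(f"{models_per_period} models per period, "
      f"about 10^{len(str(model_paths)) - 1} possible model paths over {T} periods")
```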