I think part of the problem with microfoundations modeling (at least as it's often done) is trying to force a very square peg into a very round hole.
Here's Simon Wren-Lewis:
It is hard [microfoundations modeling] because these models need to be internally consistent. If we think that, say, consumption in the real world shows more inertia than in the baseline intertemporal model, we cannot just add some lags into the aggregate consumption function. Instead we need to think about what microeconomic phenomena might generate that inertia. We need to rework all relevant optimisation problems adding in this new ingredient. Many other aggregate relationships besides the consumption function could change as a result. When we do this, we might find that although our new idea does the trick for consumption, it leads to implausible behaviour elsewhere, and so we need to go back to the drawing board. This internal consistency criteria is partly what gives these models their strength.

It is hard then, in part, because you are trying to fit a square peg into a round hole. You're trying to fit perfect optimizing behavior of individuals ("internal consistency") to the behavior of aggregates that did NOT, in fact, result from perfect optimizing behavior of individuals. They resulted from very imperfect optimization by very imperfect individuals, with very limited expertise, information, time for analysis, and self-discipline, to name a few (and I can tell you firsthand, as a businessman and family man, that with how busy and distracted people are, it can take them a while to analyze and react, even with their imperfect public knowledge, analysis, and reaction).
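To make the "just add some lags" shortcut concrete, here is a minimal sketch (my own illustration, not Wren-Lewis's; the function name and parameter values are assumptions) of a partial-adjustment rule that builds inertia into aggregate consumption directly, without deriving it from any optimisation problem:

```python
import numpy as np

# Ad-hoc inertia in aggregate consumption via partial adjustment:
#   C_t = lam * C_{t-1} + (1 - lam) * C*_t
# where C*_t is the frictionless (intertemporal-model) target level.

def consumption_path(target, lam=0.7, c0=1.0):
    """Simulate aggregate consumption with ad-hoc inertia lam."""
    c = c0
    path = []
    for c_star in target:
        c = lam * c + (1 - lam) * c_star
        path.append(c)
    return np.array(path)

# A permanent jump in the frictionless target at period 5:
target = np.concatenate([np.ones(5), 2.0 * np.ones(15)])
path = consumption_path(target)
# Consumption approaches the new target only gradually.
```

This is exactly the move the internal-consistency requirement forbids: the lag parameter `lam` is pasted onto the aggregate relationship rather than generated by any microeconomic story.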
Here's Simon again:
It took many years for macroeconomists to develop theories of price rigidity in which all agents maximised and expectations were rational…

Again, square peg, round hole. It's very hard to find a model where every single person has perfect maximization and perfect rational expectations and you still get, at least qualitatively, the type of aggregate behavior we see in the real world, because that aggregate behavior is not generated by individuals who all have perfect maximization and perfect rational expectations, not even close for many things.
If, on the other hand, you're just modeling the behavior of the aggregate based on how we have actually seen it behave, not some ideal, this gives you an advantage in creating a more realistic model that can better predict and be used to study the effects of policy. It has important advantages, but it's not without problems:
-- There's the Lucas critique, although sometimes this effect may be very weak and/or slow.
-- We sometimes don't have a great deal of relevant historical data to model the aggregates on.
-- There can be substantial regime change, so that the past history of the aggregate(s) is not very representative of, or relevant to, the present. (Of course, you should look for enduring features of the aggregate(s) that survive regime shifts.)
So, like in the physical sciences (Noah gives the example of meteorology), it's best to study and model both the micro units and the aggregates as a whole.
And it would be nice if our microfoundations models could make more realistic assumptions about the knowledge, expertise, education, self-discipline, and other behavioral factors of the micro units, i.e., people. I know this can make the model very hard, or impossible, to solve in closed form, but why not just make it a computer simulation, to test things with a model that's very complicated, and intractable to solve analytically or globally optimize, but much more realistic? That could be extremely useful.
Short of more realistic microfoundations models, let's please keep in mind that a model is only as good as its interpretation, and the smartest interpretation is usually far from literal.