Please bear with me and read this long quote from a paper in the lifecycle portfolio strategy literature:
The original literature on dynamic portfolio choice, pioneered by Merton (1969,1971) and Samuelson (1969) in continuous time and by Fama (1970) in discrete time, produced many important insights into the properties of optimal portfolio policies. Unfortunately, closed-form solutions are available only for a few special parameterizations of the investor's preferences and return dynamics, as exemplified by Kim and Omberg (1996), Liu (1999a), and Wachter (2002).
The recent literature therefore uses a variety of numerical and approximate solution methods to incorporate realistic features into the dynamic portfolio problem. For example, Brennan, Schwartz, and Lagnado (1997) solve numerically the PDE characterizing the solution to the dynamic optimization. Campbell and Viceira (1999) log-linearize the first-order conditions and budget constraint to obtain approximate closed-form solutions. Das and Sundaram (2000) and Kogan and Uppal (2001) perform different expansions of the value function for which the problem can be solved analytically. By far the most popular approach involves discretizing the state space, which is done by Balduzzi and Lynch (1999), Brandt (1999), Barberis (2000), and Dammon, Spatt, and Zhang (2001), among many others. Once the state space is discretized, the value function can be evaluated by a choice of quadrature integration (Balduzzi and Lynch), simulations (Barberis), binomial discretizations (Dammon, Spatt, and Zhang), or nonparametric regressions (Brandt), and then the dynamic optimization can be solved by backward recursion.
These numerical and approximate solution methods share some important limitations. Except for the nonparametric approach of Brandt (1999), they assume unrealistically simple return distributions. All of the methods rely on CRRA preferences, or its extension by Epstein and Zin (1989), to eliminate the dependence of the portfolio policies on wealth and thereby make the problem path-independent. Most importantly, the methods cannot handle the large number of state variables with complicated dynamics which arise in many realistic portfolio choice problems. A partial exception is Campbell, Chan, and Viceira (2003), who use log-linearization to solve a problem with many state variables but linear dynamics…
From: Brandt, Michael W., Amit Goyal, Pedro Santa-Clara, and Jonathan R. Stroud, 2005, A simulation approach to dynamic portfolio choice with an application to learning about return predictability, Review of Financial Studies 18, 831–873.
The authors go on to present their new numerical solution method, which overcomes many of the limitations of the older methods and allows more realistic modeling – if, that is, you accept the premise that a model is only worth building when you can solve for its globally optimal solution.
But this is my point. In all of those papers cited on lifetime portfolio choice strategy, the authors build in severe unrealism for one reason: so that they can solve for the exact, globally optimal behavior of the people in the model.
What if, in at least one paper, let alone a branch of the literature, an author said: I'm not going to be severely constrained by the need to calculate the exact utility-maximizing behavior. I'm going to build a genuinely realistic model of, say, lifetime portfolio strategy, and use it as a simulator and test bed. I'll take various popular and/or intuitive strategies, plug them into the model on my computer, and see what expected utility score comes out.
Wouldn't this be a great way to demonstrate which strategies are better than others, with a genuinely realistic model – one with complicated, realistic utility and other functions? Those functions wouldn't even have to be analytical; they could be messy, empirical functions written in multiple lines of computer code, rather than being limited to one neat, compact equation.
Sure, such a model might be so complicated and irregular that you would never solve for the global optimum with high confidence, even with numerical methods and supercomputers. But you could still measure quite precisely how one important and/or popular strategy compares to others in expected utils. You could use your intuition and new insights in the field to devise ever better strategies that score higher and higher in the realistic simulator. You could quickly test hunches in it. Heck, you could even put out a prize for the first person to come up with a strategy that exceeds X expected utils in the simulator.
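To make the idea concrete, here is a minimal sketch of such a strategy tester. Everything in it is a hypothetical illustration – the return parameters, horizon, risk aversion, and the two candidate strategies (a constant 60/40 mix and a "100 minus age" glide path) are assumed for the example, not calibrated to any data, and a genuinely realistic simulator would of course replace each piece with richer empirical dynamics:

```python
import numpy as np

# Sketch of a simulation-based strategy tester for lifetime portfolio choice.
# All parameters below are illustrative assumptions, not calibrated values.
RNG = np.random.default_rng(0)
YEARS = 40               # working-life horizon (assumed)
N_PATHS = 20_000         # Monte Carlo paths
MU, SIGMA = 0.06, 0.18   # stock log-return mean and volatility (assumed)
RF = 0.02                # riskless rate (assumed)
GAMMA = 4.0              # CRRA risk aversion (assumed)

def crra_utility(wealth, gamma=GAMMA):
    """CRRA utility of terminal wealth (negative for gamma > 1)."""
    return wealth ** (1.0 - gamma) / (1.0 - gamma)

def expected_utility(strategy):
    """Score a strategy: average utility of terminal wealth across paths.

    `strategy(t)` returns the stock weight to hold in year t.
    """
    wealth = np.ones(N_PATHS)
    for t in range(YEARS):
        w = strategy(t)
        stock_return = np.exp(RNG.normal(MU, SIGMA, N_PATHS)) - 1.0
        wealth *= 1.0 + w * stock_return + (1.0 - w) * RF
    return crra_utility(wealth).mean()

# Two popular/intuitive candidate strategies, plugged into the simulator:
constant_mix = lambda t: 0.60                              # fixed 60/40 split
glide_path = lambda t: max(0.0, (100 - (25 + t)) / 100)    # "100 minus age"

for name, s in [("60/40 constant mix", constant_mix),
                ("100-minus-age glide path", glide_path)]:
    print(f"{name}: expected utility = {expected_utility(s):.5f}")
```

The point of the sketch is that nothing in `expected_utility` needs to be analytically tractable: the return process, the utility function, and the strategy rule are just code, so they can be made arbitrarily complicated and realistic while the comparison between strategies stays trivial to compute.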
Why does no one do this in economics or finance? In the physical sciences, researchers routinely use simulators to test things, making them as realistic as possible, rather than making them far less realistic in crucial ways just to be able to exactly calculate the perfect global optimum.