Thursday, May 14, 2009

Scientific does not mean simple-minded

William Easterly writes today:

Airline passengers recently ejected an innocent Muslim family from an airplane because they were afraid the family were terrorists. Similar reasoning explains why Dani Rodrik favors industrial policy as a key to success...

All of us are making the amazingly common mistake of REVERSING CONDITIONAL PROBABILITIES. The airline passengers perceived from media coverage that the probability that IF you are a terrorist, THEN you are a Muslim is high. Unfortunately for the poor family, the passengers confused this with the relevant probability, which is the chance that IF you are a Muslim, THEN you are a terrorist (which is extremely low even if the first probability really is high, because terrorists are very rare).

So here is Dani Rodrik on success and industrial policy: “the countries that have produced steady, long-term growth during the last six decades are those that relied on a different strategy: promoting diversification into manufactured … goods” (cited in Economist’s View).

So Dani concludes, “What matters [for growth in developing countries] is their output of modern industrial goods” and that developing countries will have to get busy with “real industrial policies.” Finally, “external policy actors (for example, the World Trade Organization) will have to be more tolerant of these policies.”

Unfortunately, Dani is also REVERSING CONDITIONAL PROBABILITIES. Dani’s evidence is based on what he believes is the high probability that IF you have had steady growth for six decades, THEN you had industrial policy. This is interesting, but this is not the right probability in deciding whether to choose industrial policy, which is “IF you have industrial policy, THEN what is your chance of steady growth for six decades?”

This second, correct, probability would seem to be pretty low, since many other countries -- especially African and Latin American -- extensively tried industrial policies over the past six decades with low and erratic growth as a result...
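Easterly's point is pure Bayes' rule arithmetic. Here is a minimal Python sketch of it; every number below is hypothetical, chosen only to show how a high P(Muslim | terrorist) can coexist with a vanishingly small P(terrorist | Muslim):

```python
def reverse_conditional(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers, for illustration only:
p_muslim_given_terrorist = 0.9   # grant that the perceived probability is even true
p_terrorist = 1e-7               # terrorists are extremely rare (tiny base rate)
p_muslim = 0.2                   # a fifth of the relevant population

p_terrorist_given_muslim = reverse_conditional(
    p_muslim_given_terrorist, p_terrorist, p_muslim
)
# roughly 4.5e-07: even when the first conditional probability is high,
# the reversed one is minuscule, because the base rate in the numerator is tiny.
```

The mechanics are the same for Rodrik's case: the base rate of six-decade growth successes, not the frequency of industrial policy among them, drives the probability that actually matters for the policy choice.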

I am really going through a MAJOR Mlodinow slash Kahneman phase about how economists (present company included) misinterpret data.

Here is what I think is the biggest overall way that economists misinterpret data: they think scientific means simple-minded. As if you aren't scientific unless you assume away all of the complexity and all of the non-formal evidence and information (and take those assumptions literally, or substantially too literally, acting as though they are actually true when drawing conclusions about the real world); unless you assume everything relevant fits into an extremely, unrealistically simple formula; unless you turn off your high-level, high-dimensional, flexible thinking (which can still be rock-solidly logical, with each link in the logic chain completely solid, even if it's not written in fancy-looking, but grossly over-simplified, mathematics); and unless you ignore any a priori information that isn't formal and published in an academic journal in your field, no matter how important and compelling it is.

Harvard economist Dani Rodrik is clearly an extremely proficient and regular user of high-level intelligence. He is not saying that just any industrial policy is a sufficient condition for strong movement out of poverty; he is saying that the formal, simple statistical data suggest it can substantially increase the odds. And if you read his books, you see that he constructs very strong logic chains from less mathematical and formal, but also less simple, evidence that adds a great deal of support to the simple statistical data.

Yes, there are a lot of countries that used at least some form (not necessarily a good one) and some amount (not necessarily large and sustained) of industrial policy and failed. But that doesn't mean good industrial policy cannot often substantially increase the odds and magnitude of success, because there is a lot more to success than merely having some industrial policy.

Every ATP tour tennis pro plays and practices an average of more than 10 hours per week, but only a fraction of 1% of people who play and practice more than 10 hours per week achieve ATP tour status. So are you going to make the simple-minded argument that it doesn't matter whether you play and practice more or less than 10 hours per week, that how much you play and practice has no bearing on your probability of becoming an ATP tour pro? Or that "it's impossible to tell"? (That is one of the lines I find most irritating. It's usually highly inaccurate and misleading. Often the evidence at hand, although it won't tell you for sure, will tell you that one possibility is much more likely than another; but people tend to feel so smart and level-headed saying "it's impossible to tell.")

First, clearly, if playing and practicing heavy hours were unrelated to success, it would be very unlikely, purely by chance, to show up in every single success story yet be relatively rare in the non-success stories. Data mining is a concern, but to see whether this is a random artifact of data mining, you should look at more than just the simple statistical data. Use your high-level intelligence and look at all of the evidence, even if it doesn't fit into a simple mathematical formula: human biology, physiology, and neurology; evidence from sports in general; and so on. From this you can construct extremely compelling, rock-solid logic chains, anchored to very reasonable and mild assumptions (as opposed to the kind typically used by Chicago-style economists), showing that how much you play and practice matters greatly to your probability of achieving ATP pro status, even if those chains don't consist solely of highly simplifying mathematical symbols.
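The same reversed-conditional arithmetic applies to the tennis example. A sketch, again with made-up numbers: even though P(pro | heavy practice) is tiny, it can still be thousands of times larger than P(pro | light practice), and that comparison, not the small absolute number, is what matters for the decision:

```python
# Hypothetical numbers, for illustration only.
p_heavy_given_pro = 0.99   # nearly every ATP pro practices 10+ hours/week
p_heavy = 0.005            # heavy practice is rare in the general population
p_pro = 1e-5               # ATP pros are extremely rare (tiny base rate)

# Bayes' rule: P(pro | heavy) = P(heavy | pro) * P(pro) / P(heavy)
p_pro_given_heavy = p_heavy_given_pro * p_pro / p_heavy              # ~0.002
p_pro_given_light = (1 - p_heavy_given_pro) * p_pro / (1 - p_heavy)  # ~1e-7

boost = p_pro_given_heavy / p_pro_given_light  # roughly 20,000-fold
# "Only a fraction of 1% of heavy practicers make it" and
# "practice matters enormously to your odds" are both true at once.
```

The point of the sketch is that a low P(success | policy) by itself says nothing; you have to compare it against P(success | no policy), and the ratio can be enormous even when both are small.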

And, by the way, I do think advanced mathematics can be very useful, and I use it myself, but only if it's used and interpreted intelligently.

You can come to some pretty ridiculous conclusions if you ignore the vast majority of the evidence because it does not fit some snobbish and simple-minded definition of what's scientific. Here's my definition of what's scientific, or at least the most important part of it: it's logical. The conclusions about the real world that these snobbish pseudo-scientific types draw often don't meet this definition (see the Chicago School).