I
even recently got an e-mail asking me to list the next ten Black Swans.
Most fail to get my point about the error of specificity, the narrative fallacy,
and the idea of prediction. Contrary to what people might expect, I
am not recommending that anyone become a hedgehog; rather, be a fox
with an open mind. I know that history is going to be dominated by an improbable
event, I just don't know what that event will be.
Reality? What For?
I found no formal, Tetlock-like comprehensive study in economics journals.
But, suspiciously, I found no paper trumpeting economists' ability to
produce reliable projections. So I reviewed what articles and working papers
in economics I could find. They collectively show no convincing evidence
that economists as a community have an ability to predict, and, if
they have some ability, their predictions are at best just slightly better than
random ones, not good enough to help with serious decisions.
The most interesting test of how academic methods fare in the real
world was run by Spyros Makridakis, who spent part of his career
managing competitions between forecasters who practice a "scientific
method" called econometrics, an approach that combines economic theory
with statistical measurements. Simply put, he made people forecast
in real life and then he judged their accuracy. This led to the series of
"M-Competitions" he ran, with assistance from Michèle Hibon, of which
M3 was the third and most recent one, completed in 1999. Makridakis
and Hibon reached the sad conclusion that "statistically sophisticated or
complex methods do not necessarily provide more accurate forecasts than
simpler ones."
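Makridakis and Hibon's finding is easy to reproduce in miniature. The sketch below is my own illustration, not taken from their competitions: it pits the simplest possible forecast (repeat the last observed value) against a "sophisticated" one (fit a least-squares trend line and extrapolate) on a simulated random walk, where the naive method happens to be the theoretically correct one.

```python
import random

random.seed(42)

# Simulate a random walk: a series where the best one-step forecast
# of the next value is simply the last observed value.
series = [0.0]
for _ in range(300):
    series.append(series[-1] + random.gauss(0, 1))

def naive_forecast(history):
    """Simplest possible method: predict the last observed value."""
    return history[-1]

def trend_forecast(history, window=50):
    """'Sophisticated' method: fit a least-squares line to the last
    `window` points and extrapolate it one step ahead."""
    ys = history[-window:]
    xs = list(range(len(ys)))
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # extrapolate to the next index

# One-step-ahead evaluation over the second half of the series.
errors = {"naive": 0.0, "trend": 0.0}
n_tests = 0
for t in range(150, len(series) - 1):
    history, actual = series[:t + 1], series[t + 1]
    errors["naive"] += abs(naive_forecast(history) - actual)
    errors["trend"] += abs(trend_forecast(history) - actual)
    n_tests += 1

for name, total in errors.items():
    print(f"{name}: mean absolute error = {total / n_tests:.3f}")
```

On a pure random walk the extrapolated trend is noise, so the extra machinery typically buys nothing; the point, as in the M-Competitions, is that sophistication is not the same thing as accuracy.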
I had an identical experience in my quant days: the foreign scientist
with the throaty accent spending his nights on a computer doing complicated
mathematics rarely fares better than a cabdriver using the simplest
methods within his reach. The problem is that we focus on the rare occasion
when these methods work and almost never on their far more numerous
failures. I kept begging anyone who would listen to me: "Hey, I am an
uncomplicated, no-nonsense fellow from Amioun, Lebanon, and have
trouble understanding why something is considered valuable if it requires
running computers overnight but does not enable me to predict better
than any other guy from Amioun." The only reactions I got from these
colleagues were related to the geography and history of Amioun rather
than a no-nonsense explanation of their business. Here again, you see the
narrative fallacy at work, except that in place of journalistic stories you
have the more dire situation of the "scientist" with a Russian accent
looking in the rearview mirror, narrating with equations, and refusing to
look ahead for fear of getting dizzy. The econometrician Robert
Engle, an otherwise charming gentleman, invented a very complicated statistical
method called GARCH and got a Nobel for it. No one tested it to
see if it has any validity in real life. Simpler, less sexy methods fare exceedingly
better, but they do not take you to Stockholm. You have an expert
problem in Stockholm, and I will discuss it in Chapter 17.
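The GARCH idea itself fits in a few lines: the GARCH(1,1) model forecasts the next period's variance as a weighted mix of a constant, the last squared return, and the last variance estimate. Here is a minimal sketch of that recursion; the parameter values are arbitrary illustrations, not fitted to any data, and this is only the forecasting recursion, not the full estimation machinery.

```python
def garch_variances(returns, omega=0.05, alpha=0.1, beta=0.85):
    """GARCH(1,1) variance recursion:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    Starts from the model's long-run variance omega / (1 - alpha - beta).
    Parameters here are illustrative defaults, not estimates."""
    sigma2 = [omega / (1 - alpha - beta)]  # long-run variance as a start
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Example: variance forecasts after a calm day and a turbulent one.
print(garch_variances([0.1, 3.0, -0.5, 0.2]))
```

The "simpler, less sexy" alternative Taleb alludes to would be something like a rolling standard deviation of past returns, with no parameters to estimate at all.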
This unfitness of complicated methods is not confined to econometrics.
Another study effectively tested practitioners of something called game
theory, in which the most notorious player is John Nash, the schizophrenic
mathematician made famous by the film A Beautiful Mind. Sadly,
for all the intellectual appeal of game theory and all the media attention,
its practitioners are no better at predicting than university students.
There is another problem, and it is a little more worrisome. Makridakis
and Hibon were to find out that the strong empirical evidence of
their studies would be ignored by theoretical statisticians. Furthermore,
they encountered shocking hostility toward their empirical verifications.
"Instead [statisticians] have concentrated their efforts in building more sophisticated
models without regard to the ability of such models to more
accurately predict real-life data," Makridakis and Hibon write.
Someone may counter with the following argument: Perhaps economists'
forecasts create feedback that cancels their effect (this is called the
Lucas critique, after the economist Robert Lucas). Let's say economists
predict inflation; in response to these expectations the Federal Reserve acts
and lowers inflation. So you cannot judge the forecast accuracy in economics
as you would with other events. I agree with this point, but I do
not believe that it is the cause of the economists' failure to predict. The
world is far too complicated for their discipline.
When an economist fails to predict outliers he often invokes the issue
of earthquakes or revolutions, claiming that he is not into geodesy, atmospheric
sciences, or political science, instead of incorporating these
fields into his studies and accepting that his field does not exist in isolation.
Economics is the most insular of fields; it is the one that quotes least
from outside itself!