N.N. Taleb’s Anti-fragile Swan Song?

I’m finding lots of interesting nuggets from following author Matt Ridley recently. He just posted a review of Nassim Nicholas Taleb’s book Antifragile: Things That Gain from Disorder (WSJ and blog versions). I have not read this book myself yet, but Ridley provides a synopsis of Taleb’s thesis which goes something like this: bottom-up trial and error produces more robust systems (anti-fragile, as in the title) compared to top-down planning and applied theory. Or, to quote Taleb, “We don’t put theories into practice. We create theories out of practice.”

I suspect my description is over-simplified as I am summarizing a longer review of a much longer book. However, Ridley includes a variety of Taleb’s examples from restaurant food and pharmacological medicines to the industrial revolution and the U.S. Federal Reserve. Any regular reader of my articles on innovation may think this kind of thesis fits nicely within my own experience and policies as far as driving innovation. They would be wrong.

Yes, I do agree that innovation comes from trial and error and maximizing the opportunities for ideas to have sex (to steal Matt Ridley’s phrase). None of the innovations I have been part of, directed, or witnessed, have come from successful end-to-end plans and none of the end-to-end planned innovations ever succeeded. This does not mean, however, that top-down planning and applied theory are the source of the problem. Innovation, like all of the examples provided, is a process of optimizing all available information, and top-down planning and theory can provide plenty of information that isn’t available through trial and error. Theory can be both descriptive and prescriptive depending on the accuracy of the theory and your goals. Top-down planning gives you focus on a destination, the directions to get there, and the supplies you’ll need. Bottom-up trial and error makes you change your plans along the trip, sometimes even ending the trip. But changing the plan is different from having no plan.

The problem with top-down planning arises if you only use top-down planning. Likewise there is a problem when you only use bottom-up trial and error. The details matter. There is no general rule regarding bottom-up versus top-down. The quality of information, the experience involved, and the complexity of the system in question are all important details, as is your flexibility in adapting to changes, including updating the top-down plan.

The problem with Taleb’s thesis, as presented in Ridley’s article, is that it takes an extreme all-or-nothing perspective. Sure, one can demonstrate some successes of trial and error without planning and some failures of top-down planning. But there are also cases of bad outcomes with trial and error (hence the “error” part) and successes of top-down theory. Bottom-up trial and error is even sometimes impossible or expensive compared to top-down planning, especially in cases where you only get one shot and failure is catastrophic. For example, good policy on climate change doesn’t come from trying out different scenarios in the real world and finding out which ones lead to environmental and economic catastrophe and picking ones that don’t. We don’t get do-overs.

Ridley comments on Taleb’s biological examples, including the bottom-up trial and error approach of evolution by natural selection:

Biological evolution, too, is anti-fragile. The death of unfit individuals is what causes a species to adapt and improve.

Indeed, that is exactly how natural selection works, at least the “survival of the fittest” component of it, as long as you define “improve” in terms of maximizing the local reproductive success of genes. (The results of sexual selection and genetic drift may not be considered “improvement” in some respects, for instance.) But that isn’t the whole story. Dinosaurs may have been very robust (anti-fragile?) to their environment but they couldn’t avoid being wiped out by an asteroid. We can avoid such asteroids because we have something the dinosaurs didn’t: top-down planning. We can apply theory to find them. We can apply theory to predict which ones will hit us. We can apply theory to divert them.

Of course asteroids are fairly simple systems with robust and accurate models. Things like climate change and hurricane paths are more complex and only predictable in bulk properties. Yet top-down applied theory and planning are still quite successful. We know days in advance of hurricanes where to evacuate, how to prepare our resources, and what level of damage to expect. This ability is new, and it comes from applying predictive scientific theory. A century ago we had no warnings.

He even gets much of modern medicine wrong, claiming that it is a “craft built around experience-driven heuristics” that had to fight against entrenched, top-down theorizing from Galen and other wise fools. But almost all modern medicines derive from applied theory about various processes in the body and principles of evolution, and the new treatments are then tested to see if they do as expected (from theory) and whether they have side effects that were not planned.

Even Galen is the wrong enemy here. While his peers were mainly divided into Rationalists (theory) and Empiricists (experimentation), among Galen’s many contributions to early medicine was his regard for medicine “as an interdisciplinary field that was best practiced by utilizing theory, observation, and experimentation in conjunction.” He was truly a valuable influence on the concept of science and optimizing the use of information from all sources.

Similarly, Taleb seems to pick on Nobel Prize-winning economist Joseph Stiglitz largely because Stiglitz is a Keynesian economist who builds and applies theory, versus Taleb’s “street smarts” trial and error. Taleb complains that “Globalization creates interlocking fragility, while reducing volatility and giving the appearance of stability.” Among Stiglitz’s complaints about globalization is that “the IMF has often called for policies that conform to textbook economics but do not make sense for the countries to which the IMF is recommending them.” Stiglitz, like Taleb, is very much anti-textbook as far as economic policies go, and very much against the destabilizing effects of finely planned tariffs, subsidies, and complex patenting systems. He is a big proponent of understanding the particular circumstances and doing what the empirical evidence suggests, unlike free-market economists (or “free-market fundamentalists”, as he calls them) who have blind faith that simple ideological ideas and free markets will magically solve problems.

From my own perspective, the problem of fragility and destabilization is a function of purely maximizing efficiency, which markets often do well (but not always), largely by optimizing comparative advantage (specialization and trade) such that much of the needs of the world are compartmentalized and accomplished by those who are best at it. This optimization of efficiency comes at the cost of robustness to the very unexpected destabilizers that Taleb worries about. If all of the world’s food is produced by a single region or group who is most efficient at it, a single failure of that region or group can starve the world. Robustness typically means redundancy which is contrary to efficiency.

As an example, many of the systems I worked on as a prime contractor for NASA were Criticality 1, meaning missions could fail or astronauts could die if the equipment or processes failed. This invariably meant triple redundancy such that catastrophe was at least “three failures deep”, and that these redundant systems were independent of one another. We designed these systems by applying theory and demonstrated them through rigorous empirical stress testing, including random, unexpected events as Taleb’s work warns against (and seems to think he invented).
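To make the “three failures deep” arithmetic concrete: with independent redundant copies, the probability that all copies fail at once is the product of the individual failure probabilities. The sketch below is purely illustrative; the failure probability is made up for the example, not drawn from any actual NASA analysis:

```python
def system_failure_prob(p_single: float, redundancy: int) -> float:
    """Probability that every one of `redundancy` independent copies fails.

    Assumes failures are independent, which is why redundant systems
    must be designed to share no common failure modes.
    """
    return p_single ** redundancy

# Hypothetical 1% failure chance per independent copy.
p = 0.01
print(system_failure_prob(p, 1))  # single system: one chance in a hundred
print(system_failure_prob(p, 3))  # triple redundancy: about one in a million
```

The same arithmetic shows why independence matters: if the three copies share a common failure mode, the effective probability collapses back toward the single-system number, which is why the redundant systems had to be independent of one another.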

Taleb seems to misunderstand this process. Certainly theory comes from experience, consistent with the second half of Taleb’s thesis statement, but we also most certainly do put theory into practice, and with much success, contradicting the other half. This fact is rather obvious to us engineers and applied scientists. I don’t mean that in a “common sense” perspective; I mean it is how we do our jobs. Essentially every bit of modern technology and medicine comes from applying theory top-down to generate the very things that we test via trial and error bottom-up. We don’t just randomly try things. Heck, just recently Nate Silver repeated his smack-down demonstration of applying top-down theory to election prediction, kicking the ass of “street smart” pundits, as has sabermetrics in perfecting “Moneyball”.

So how could Taleb be so far off? Is it because he has little understanding of science, engineering, medicine, biology, and even economics as “a former trader and expert on probability” and “a self-taught philosopher steeped in the stories and ideas of ancient Greece”? This is where it gets confusing, because I’m not entirely sure that Taleb says what Ridley is suggesting.

Taleb is more than just some former trader with self-taught Greek philosophy. Nassim Nicholas Taleb has been a professor at several universities (including Oxford), a hedge fund manager, a practitioner of mathematical finance, an adviser to the IMF, and the writer of the books Fooled by Randomness and the best-selling The Black Swan, which has been described as “one of the twelve most influential books since World War II”.

Black Swan theory, the basis of Taleb’s thesis, does not contradict applying theory or top-down planning. Rather, it is complementary to such planning. Applied theory works on known behaviours and risks; Black Swan theory works on unknown behaviours and risks. Confusing matters further, Ridley notes:

If trial and error is creative, then we should treat ruined entrepreneurs with the reverence that we reserve for fallen soldiers, Mr. Taleb thinks.

In fact, this sentiment is the subtitle of the WSJ version of the article. Yet this is in direct contradiction of Malcolm Gladwell’s version of Taleb’s views in the New Yorker:

We associate the willingness to risk great failure – and the ability to climb back from catastrophe – with courage. But in this we are wrong. That is the lesson of Taleb and Niederhoffer, and also the lesson of our volatile times. There is more courage and heroism in defying the human impulse, in taking the purposeful and painful steps to prepare for the unimaginable.

So which is it? Does Taleb think we should treat failed entrepreneurs with reverence for their risk-taking courage, or with disdain for not taking purposeful and painful steps to prepare for the unimaginable?

On policy the contradictions stand out even more. He rejects top-down applied-theory planning in favor of bottom-up trial and error, even in the face of progress that is clearly the result of applying good theory and planning to drive such trial and error, as is common in medicine and engineering. Once again, the dinosaurs might still be alive had they been able to do top-down planning to avoid asteroids, since their trial-and-error genes had not supplied them with the capability of surviving such a catastrophe. And yet it is this very sort of unexpected dinosaur-killing asteroid that Taleb thinks we should plan for by applying his theories.

When I compare Ridley’s review with other reporting on Taleb, they seem to describe two very different points of view. Ridley’s Taleb is filled with childish strawman reasoning like “You don’t need a physics degree to ride a bicycle,” or that he “systematically demolishes what he cheekily calls the ‘Soviet-Harvard’ notion that birds fly because we lecture them how to.” This version of Taleb is a simpleton who says tripe like

Planning is inherently biased toward delay, complication and inflexibility, which is why companies falter when they get big enough to employ planners.

or this gem of stupidity:

A law that bailed out failing restaurants would result in disastrously dull food. The economic parallel hardly needs spelling out.

That’s because there isn’t an economic parallel, at least with the recent bailouts. It is a bad analogy. There are thousands of independent restaurants and the ability to make meals at home, among a variety of alternatives. Of course there’s no reason to bail out a failing restaurant. Now imagine if there were only a small handful of restaurant chains and, in addition to food preparation, they also controlled most of the national food production and distribution, and they were interlinked. Now a complete failure of one or several restaurant chains means starvation of millions. Now would you consider a bail out? Would you consider a set of regulations to reduce reliance on so few for so much? That is the “too big to fail” problem as applied to the few large banks that control the vast majority of lending and investment. Perhaps this Taleb needs some remedial lessons.

Yet on further review, he knows this. In The Black Swan he wrote:

Financial Institutions have been merging into a smaller number of very large banks. Almost all banks are interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks – when one fails, they all fall. The increased concentration among banks seems to have the effect of making financial crisis less likely, but when they happen they are more global in scale and hit us very hard. We have moved from a diversified ecology of small banks, with varied lending policies, to a more homogeneous framework of firms that all resemble one another. True, we now have fewer failures, but when they occur …. I shiver at the thought.

Indeed, that sounds like the wise version of Taleb. Even he agrees that concentration and interconnectedness are the problem and we should avoid them. This is exactly the problem of instability through over-efficiency that I described as my view above. Yet Taleb seems to generally oppose regulations to keep this from happening, as Stiglitz and other Keynesian economists recommend, and is opposed to bailing out the banks when they do fail. All his suggestions amount to us getting used to shivering.
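Taleb’s bank observation can be illustrated with a toy simulation. The sketch below is invented for illustration (the failure probability, bank count, and trial count are all made-up parameters, not real financial data): it compares a diversified ecology of many small, independent banks against a single giant bank holding everything. The expected losses come out the same, but the concentrated system trades frequent small losses for rare total collapse, exactly the “when one fails, they all fall” trade-off.

```python
import random

random.seed(42)  # reproducible toy example

def diversified_loss(n_banks: int, p_fail: float) -> float:
    """Fraction of total assets lost when each small bank fails independently."""
    return sum(random.random() < p_fail for _ in range(n_banks)) / n_banks

def concentrated_loss(p_fail: float) -> float:
    """All-or-nothing loss when a single giant bank holds everything."""
    return 1.0 if random.random() < p_fail else 0.0

trials = 20_000
p = 0.02  # hypothetical 2% failure chance per bank per period

small = [diversified_loss(100, p) for _ in range(trials)]
big = [concentrated_loss(p) for _ in range(trials)]

# Average losses are essentially identical (both near p)...
print(sum(small) / trials, sum(big) / trials)
# ...but the worst cases differ enormously: the diversified system loses a
# small fraction at worst, while the concentrated system can lose everything.
print(max(small), max(big))
```

The point of the toy model is that diversification doesn’t reduce the average loss; it reshapes the distribution, swapping rare catastrophe for frequent, survivable setbacks, which is the robustness-versus-efficiency trade I describe above.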

There is so much contradiction in policy, seriousness, and level of intellectual content that I’ve thought either Ridley has completely misunderstood him or Taleb has gone off the deep end of bipolar disorder.

Sadly, these sorts of over-simplified comments appear to be consistent with his general views on economics and intellectualism. He actually does seem to simultaneously hate applying theory while providing recommendations on how to apply his theories. He simultaneously “believes that universities are better at public relations and claiming credit than generating knowledge” while being a Distinguished Research Scholar at Oxford University as well as affiliated with many other universities and performing much academic research.

Taleb comes off as either an eccentric genius or a narcissistic simpleton troll. The reality seems to be somewhere between the two. Just when he makes a good point for a specific circumstance he seems to follow it up with a non sequitur generalization aimed at a strawman caricature of how things are done.

I think I get what Taleb is trying to say. Relying only on theoretical models without regard for how they fit reality is unwise. It’s just that he doesn’t seem to realize this is not how people generally use models. We do usually make good use of them for top-down decisions, directions, and guidance, not as gospel dogma. We do usually test them. We do usually limit their application to the assumptions built into them. We do usually change plans when the circumstances call for it. Perhaps the trading world where Taleb got his experience is a little different. If so, perhaps he should limit his criticisms to those areas. Generalizing them as he appears to be doing just makes him look ignorant of what people actually do, and a little self-contradictory at that.

He really does appear to want us to embrace trial and error while avoiding top-down planning and applied theory, unless it is his theories that we are applying for top-down planning purposes. If it’s the Federal Reserve that sees the economic asteroid coming and has the engineering plan to divert it, it seems Taleb wants us to just accept our fate and suffer the consequences. At least he’ll be able to profit from it using his Black Swan scheme.

Ultimately I have to agree with Ridley on the uncertainty of Taleb’s value, though possibly for different reasons; Ridley seems to have mixed feelings about some of Taleb’s arguments and examples whereas I find Taleb is all over the map and full of contradictions. Perhaps more importantly, I haven’t read this particular book. I do hope to read it and report back, but trial and error has shown me that an unexpected event may result in me doing something more valuable with my time, so maybe I shouldn’t plan on it.