I have just finished reading ‘The Black Swan’ by Nassim Nicholas Taleb – the author of ‘Fooled by Randomness’ that I read a few months ago. It has been said that the two books are sufficiently alike that it is not worth reading both – I’d disagree. They do both deal with a very similar topic, but it is so rare to have that topic covered properly that it is entirely rational to read both books.
The basic premise of the book is that almost all the really important stuff that happens in the real world is down to completely unpredictable events. Since their incidence is inherently unforecastable (any forecast would be so wrong as to not be worth having), it is more useful to forecast the scale of the potential impact such an event could have in an area, and then systematically attempt to remove exposure to bad Black Swan events while exposing yourself to as many good ones as you can. What is very bad is to assume that the world is mostly made up of Gaussian-distributed randomness, when actually this is only found in a limited set of cases – the vastly connected, winner-takes-almost-all world means that Black Swan events are more and more important.
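The difference between the two kinds of randomness – Taleb's "Mediocristan" versus "Extremistan" – is easy to see in a few lines of simulation. This is a hypothetical sketch of my own, not anything from the book; the distributions and parameters are my choices:

```python
import random

random.seed(42)
N = 100_000

# Gaussian world ("Mediocristan"): no single observation dominates the total.
gauss = [abs(random.gauss(0, 1)) for _ in range(N)]

# Fat-tailed world ("Extremistan"): a Pareto distribution with tail exponent
# 1.1 (my arbitrary choice), where one observation can dwarf everything else.
pareto = [random.paretovariate(1.1) for _ in range(N)]

def top_share(xs):
    """Fraction of the total contributed by the single largest observation."""
    return max(xs) / sum(xs)

print(f"Gaussian: largest sample is {top_share(gauss):.4%} of the total")
print(f"Pareto:   largest sample is {top_share(pareto):.4%} of the total")
```

In the Gaussian case the biggest single draw is a negligible slice of the total; in the fat-tailed case one draw can account for a sizeable chunk of everything – which is exactly why a single unforeseen event can matter more than all the ordinary ones combined.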
As with the last book, the style is much more a polemic than a dry, third-person scientific paper – but it feels no less well reasoned for it. And, I confess, I love this passionate style more than that of many other books. In that respect, this book is even more passionate than the last … and that seems to polarise readers. Well, I’m in the ‘love it’ camp.
That said, I did have a few concerns as I read the book – not areas in which I think he is wrong, but ones where I think (arrogantly) that readers might misinterpret his points. I also realise that I should have had a pen with me so I could have scribbled in the margins – there were several bits I’d like to have picked out and re-read at the end, and whilst they were as clear as anything at the time, I have now forgotten them.
He has real concerns about the over-application of Gaussian-distribution tools (like correlation and R-squared). I absolutely understand this concern, but at least in the businesses I have worked with it can be hard to get folks to even understand those tools. You might believe from reading this book that those who don’t understand them are at an advantage. I’d disagree. They are only at an advantage over people who understand the tools but are incapable of keeping the limits of their applicability in mind – including the fact that they could be wrong even when all the evidence they have indicates no such thing (‘absence of evidence’ is not the same as ‘evidence of absence’, as Nassim lays out more than once). As a specific example, he notes that there is a challenge in collecting enough data to determine what kind of distribution you have, and so to understand the confidence you should have in any prediction. I absolutely agree.
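That data-collection challenge is worse than it first sounds: for fat-tailed distributions, the usual summary statistics may never settle down no matter how much data you gather. A small sketch of my own (the tail exponent of 1.5 is an arbitrary choice that makes the theoretical variance infinite):

```python
import random
import statistics

random.seed(1)

# Sample standard deviation as the sample grows, for a Gaussian versus a
# Pareto with tail exponent 1.5 (whose theoretical variance is infinite,
# so the estimate has nothing to converge to).
gauss_std, pareto_std = {}, {}
for n in (1_000, 10_000, 100_000):
    gauss_std[n] = statistics.pstdev(random.gauss(0, 1) for _ in range(n))
    pareto_std[n] = statistics.pstdev(random.paretovariate(1.5) for _ in range(n))
    print(f"n={n:>7}: gaussian std {gauss_std[n]:.3f}, "
          f"pareto std {pareto_std[n]:.3f}")
```

The Gaussian estimate homes in on the true value of 1; the Pareto estimate just lurches around as ever-larger observations arrive – so even a diligent analyst with plenty of data can be confidently wrong about how variable the world is.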
But I have seen many people not even try to use tools in assessing data – they eyeball it and declare that it ‘looks similar’, or declare one test the winner over another without any reference to the margin of error (even assuming the underlying randomness is linear). They haven’t yet reached the sophistication of understanding randomness at even a basic level, so whether that randomness is linear (Gaussian), non-linear (Mandelbrotian, in Nassim’s terms), or fundamentally unforecastable (Black Swan prone) seems a moot point. What should be clear is that it is uncertain. Sadly this lack of understanding does not just apply to individuals – it seems to apply to the stock market and to companies, who seem to value ‘consistency’ even when the consistency they seek is so fundamentally unlikely that its continuous achievement shows they are engineering the result to be the one expected. This of course plays right into Nassim’s thesis – everything looks just fine, with very little displayed variation, right up to the point when the consistent expected result can no longer be achieved at all, and it all blows up ‘unpredictably’.
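To make the margin-of-error point concrete, here is a minimal sketch of my own – the conversion numbers are made up, and it uses the standard normal approximation for the difference between two proportions (itself a Gaussian tool with its own limits of applicability, which is rather the point of the whole book):

```python
import math

def diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Approximate 95% confidence interval for the difference between two
    conversion rates, via the normal approximation (only reasonable when
    the counts are large enough)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical A/B test: B "looks better" (5.5% vs 5.0%), but the interval
# straddles zero, so the eyeballed winner may be nothing but noise.
lo, hi = diff_ci(50, 1000, 55, 1000)
print(f"Difference: {55/1000 - 50/1000:.3%}, 95% CI: [{lo:.3%}, {hi:.3%}]")
```

Even this crude check is a big step up from eyeballing – and unlike eyeballing, it at least tells you when the data cannot support a verdict at all.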
The next bit that jarred was an implication that could be read into the book: that it is pointless to model any aspect of the world, since your model is bound to be wrong in some respect – particularly around Black Swans. I don’t think Nassim was really suggesting that. I think he was trying to say that over-belief in models is fundamentally a bad idea, since calibrating and testing a model is very hard, quite apart from the impossibility of handling Black Swans. I actually think that a degree of restless curiosity about how the world might be working is a good thing, and that involves building simplified models of the interactions. The challenge and risk come from blind belief in the model, rather than keeping it under continuous review. It also matters that we have the right kind of model, and linear models are pretty limited. I have the benefit of having spent several years on non-linear systems with lots of forward feedback during my PhD, so models that appear to go wildly ‘out of control’ don’t feel unnatural, nor does the implicit lack of confidence one should have in the specific outcomes of such models.

But, with the right level of scepticism, even linear formal models can be very useful. If I have guards against severe downside risk (a big if), then it can be useful to have a model that predicts slightly better than the next guy’s – it doesn’t have to be brilliant, just better than my competitors’. Nor does it have to be always right, just right more often. The difficulty is protecting against the downside, of course, but it doesn’t feel like models and that downside protection are ‘logically incompatible’ – just that we are prone to using linear models in completely the wrong places, Value at Risk models being a prime example.
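The Value at Risk failure mode can be shown with a toy simulation. This is my own illustration, not Nassim’s, and the return process is invented – mostly calm days with an occasional wild regime mixed in, a crude stand-in for a fat tail:

```python
import random
import statistics

random.seed(7)

# Hypothetical daily returns: calm on most days (sigma = 1%), but 2% of
# days are drawn from a much wilder regime (sigma = 8%).
returns = [random.gauss(0, 0.01) if random.random() > 0.02
           else random.gauss(0, 0.08)
           for _ in range(100_000)]

# Gaussian 99.9% VaR: assume normality and take the mean minus 3.09 sigmas.
mu = statistics.mean(returns)
sigma = statistics.pstdev(returns)
gaussian_var = -(mu - 3.09 * sigma)

# Empirical 99.9% VaR: the actual 0.1st percentile of observed returns.
empirical_var = -sorted(returns)[len(returns) // 1000]

print(f"Gaussian VaR(99.9%):  {gaussian_var:.3f}")
print(f"Empirical VaR(99.9%): {empirical_var:.3f}")
```

The Gaussian estimate, fitted honestly to all the data, still badly understates the loss actually sitting in the tail – the model isn’t useless, it’s simply being asked a question (extreme quantiles) that is exactly where its assumptions break down.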
This post has got quite long enough already, so I’ll close it here. But, in closing I’d note that this is a book that I thoroughly enjoyed reading – it’s well worth the time it takes, and treating it as a dry, boring book that isn’t worth the time would be to miss something both important and enjoyable.