👨🏻‍🦯

Our blind spot: we ignore unquantifiable uncertainty.

Risk and unquantifiable uncertainty are markedly different in terms of how we humans approach them.

“Ambiguity aversion”

A well-documented demonstration of the effect of uncertainty on our decision-making capabilities is the Ellsberg paradox.

In the classic two-urn version, urn A contains 50 red and 50 black balls, while urn B contains 100 balls in an unknown mix of red and black. Offered a prize for drawing a red ball, most people prefer to draw from urn A; offered the same prize for drawing a black ball, most people still prefer urn A. This pair of choices is inconsistent with any single belief about urn B’s composition: preferring A for red implies believing B holds fewer than 50 red balls, which means it holds more than 50 black ones, so a consistent bettor should prefer B for black. We pay a premium to avoid ambiguity, even when it buys us nothing in expected value.
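To see why the bet on the unknown urn is not actually worse, here is a small simulation (my own illustration; the setup follows the standard two-urn story, but the code itself is an assumed sketch): under a uniform prior over urn B’s composition, a bet on red wins half the time from either urn.

```python
# Two-urn Ellsberg setup: urn A holds 50 red / 50 black balls;
# urn B holds 100 balls in an unknown red/black mix.
import random

def draw_urn_a() -> str:
    # Known composition: exactly 50/50.
    return random.choice(["red", "black"])

def draw_urn_b() -> str:
    # Unknown composition: model our ignorance as a uniform prior
    # over the number of red balls, then draw from that urn.
    reds = random.randint(0, 100)
    return "red" if random.random() < reds / 100 else "black"

trials = 100_000
wins_a = sum(draw_urn_a() == "red" for _ in range(trials))
wins_b = sum(draw_urn_b() == "red" for _ in range(trials))
print(f"P(win | urn A) = {wins_a / trials:.3f}")  # ~0.5
print(f"P(win | urn B) = {wins_b / trials:.3f}")  # ~0.5, yet we avoid this urn
```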

Our tendency to ignore uncertainty when there are known risks.

We tend to ignore uncertainty and focus on risk when making decisions. Taken to the extreme: the more aspects we can quantify, the more confident we feel in the quality of the decision. But that doesn’t mean we have eliminated unquantifiable uncertainty - not at all.

If we accept the assumption (can we?) that models can only be built on quantifiable uncertainty, then whenever we base our decisions on models (including narratives), we by definition ignore unquantifiable uncertainty.
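A toy way to see the “by definition” part (an assumed example of mine, not from the text): a probability model must spread its entire mass over an outcome space fixed in advance, so anything outside that space gets exactly zero weight; the model cannot even express it.

```python
# A probability model allocates all of its mass across outcomes it
# already names. Outcomes outside that space are not "unlikely":
# they are simply absent from the model.
die_model = {face: 1 / 6 for face in range(1, 7)}

assert abs(sum(die_model.values()) - 1.0) < 1e-9  # six faces take all the mass
print(die_model.get("lands on its edge", 0))      # 0: the model has no word for this
```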

I have yet to find an academic study on this, but when I observe my own thinking, it feels very much like the case.

The classic “the map is not the territory”

The argument is that even this is not enough: it’s easy to misread even risk (“quantifiable uncertainty”) if we take our model’s predicted probabilities as the true likelihood of an event happening. One thing is missing from that picture: you need to factor in the known error rate of the model as well.
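As a rough illustration (my own sketch; the function name and numbers are hypothetical): if a model is itself known to be wrong some fraction of the time, one simple way to account for that is to treat its prediction as a mixture of “the model is right” and “the model is wrong, fall back to a base rate”.

```python
def effective_probability(predicted_p: float,
                          model_error_rate: float,
                          base_rate: float = 0.5) -> float:
    """Blend the model's prediction with a fallback base rate,
    weighted by how often the model itself is wrong."""
    return (1 - model_error_rate) * predicted_p + model_error_rate * base_rate

# The model says 90%, but the model is known to be wrong ~20% of the time:
print(effective_probability(0.9, 0.2))  # 0.82, not 0.9
```

The point of the mixture is only that the headline probability should be discounted by the model’s own track record; the right discounting scheme depends on how the error rate was measured.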