đź“™

Quantifiable Uncertainty (Risk) vs Unquantifiable Uncertainty

Puzzles vs Mysteries

In small worlds, where risk (quantifiable uncertainty) exists, we solve puzzles: the possible outcomes are limited and enumerable, and so are the decisions we can choose between.

In large (real) worlds, we face mysteries (unquantifiable uncertainty): we can’t even imagine all the possible outcomes, we can choose from a huge set of actions, and we can also change our decisions over time, which makes the (combinatorial) action space explode.

The reason to distinguish between these two concepts is simple: solving puzzles, i.e. making decisions in small-world environments, is radically different from managing the uncertainty of the real world.

A casino is “full of” risk: calculable uncertainty that favours the house. Contrast that with the unknowns a real-world military operation contains.
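
To make “calculable uncertainty” concrete, here’s a minimal sketch in Python (the even-money roulette bet is my illustrative choice): the per-bet outcome is random, but the long-run average is predictable in advance, and it favours the house.

```python
import random

def even_money_bet() -> int:
    """One even-money bet (e.g. on red) in European roulette:
    18 winning pockets out of 37, so the house keeps a ~2.7% edge."""
    return 1 if random.random() < 18 / 37 else -1

n_bets = 100_000
total = sum(even_money_bet() for _ in range(n_bets))

# Quantifiable risk: the expected value per bet is known exactly,
# EV = 18/37 - 19/37 = -1/37 ≈ -0.027, and the simulation converges to it.
print(f"average per bet: {total / n_bets:+.4f} (theory: {-1 / 37:+.4f})")
```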

In Machine Learning, people talk about Epistemic (reducible; from lack of knowledge) vs Aleatory (irreducible; from inherent randomness) Uncertainty.

In Finance, people talk about Knightian Uncertainty vs Risk.

In Taleb’s vocab, a subset of unquantifiable uncertainty manifests itself as “black swans”. In Radical Uncertainty (the book), Kay and King describe a difference between the two, but I can’t remember what it is (will look it up).

Distributions and impact

Risk, i.e. quantifiable uncertainty, usually follows a normal (Gaussian) distribution.

Unquantifiable uncertainty is different: it has fat tails. Fat-tail events may happen very rarely, but their potential impact (because of the impossibility of preparing for them) is an order of magnitude higher than any “foreseeable” tail event.
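
A hedged sketch of that difference (the Pareto tail index of 1.5 is an arbitrary illustrative choice): in a Gaussian sample no single draw matters much, while in a fat-tailed sample a single extreme draw can account for a visible share of the entire sample’s mass.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

gaussian = rng.normal(0.0, 1.0, size=n)  # thin tails
fat_tailed = rng.pareto(1.5, size=n)     # fat tails (infinite variance)

for name, sample in [("gaussian", gaussian), ("fat-tailed", fat_tailed)]:
    biggest = np.abs(sample).max()
    share = biggest / np.abs(sample).sum()
    print(f"{name:>10}: largest |draw| = {biggest:10.1f}, "
          f"share of total mass = {share:.2%}")
```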

Taleb’s argument is that “black swans” are the events that have fundamentally changed our world, while “foreseeable” events are comparatively easy to manage.

Important distinction: the impact’s distribution is also unknown.

Black swans are events from “unquantifiable uncertainty”.

What are the drivers?

Can we predict black swans if we’ve seen some?

The trick is that even if “black swans” appear in past data, they can’t easily (or at all?) be transformed into risk: they are so rare, and such extreme outliers, that they don’t fit into our quantified framework. Calculating probabilities for processes that don’t follow Gaussian distributions is a lot less useful than it seems: it means extremely wide confidence intervals, and probably the only informative exercise is analyzing potential worst-case scenarios. (I’d really want to expand on this.)
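
A small sketch of why those estimates mislead (the Pareto distribution with tail index 1.1 is an illustrative stand-in for a fat-tailed process; its true mean is 10, but its variance is infinite): re-estimating the mean from fresh samples of the same process gives a different answer every run, so a confidence interval computed from any single sample is either enormous or simply wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Gaussian estimates are stable across re-runs; the fat-tailed ones jump
# around, because one extreme draw can move the whole sample mean.
for trial in range(5):
    g_mean = rng.normal(0.0, 1.0, size=n).mean()
    p_mean = rng.pareto(1.1, size=n).mean()  # true mean = 1 / (1.1 - 1) = 10
    print(f"trial {trial}: gaussian mean = {g_mean:+.3f}, "
          f"fat-tailed mean = {p_mean:8.3f}")
```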

Alternative definitions

Lo & Mueller have a more “granular” scale:

  • Level 1: Complete Certainty. All past and future states of the system are determined exactly if initial conditions are fixed and known.
  • Level 2: Risk without Uncertainty. This level of randomness is Knight’s (1921) definition of risk: randomness governed by a known probability distribution for a completely known set of outcomes.
  • Level 3: Fully Reducible Uncertainty. This is risk with a degree of uncertainty, an uncertainty due to unknown probabilities for a fully enumerated set of outcomes that we presume are still completely known. At this level, classical (frequentist) statistical inference must be added to probability theory as an appropriate tool for analysis (see the sketch after this list).
  • Level 4: Partially Reducible Uncertainty. Situations in which there is a limit to what we can deduce about the underlying phenomena generating the data.
  • Level 5: Irreducible Uncertainty. Irreducible uncertainty refers to a state of total ignorance; ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.
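
As a toy sketch of Level 3 (the biased coin and the Wald interval are my illustrative choices, not Lo & Mueller’s): the outcome set is fully known, only the probability is unknown, so collecting more data steadily shrinks the uncertainty; that is what makes it fully reducible.

```python
import numpy as np

rng = np.random.default_rng(7)
true_p = 0.3  # unknown to the analyst; the outcomes (heads/tails) are enumerated

for n in [100, 1_000, 10_000, 100_000]:
    flips = rng.random(n) < true_p
    p_hat = flips.mean()
    half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)  # 95% Wald interval
    print(f"n = {n:>7}: p_hat = {p_hat:.3f} ± {half_width:.3f}")
```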

My assumption is that Irreducible Uncertainty is unmeasurable, but you can increase the size and scope of “quantifiable uncertainty” (risk), and that could make you a somewhat better decision maker.

What does this distinction mean for us, humans?

👨🏻‍🦯 Our blind spot: we ignore unquantifiable uncertainty.