### Puzzles vs Mysteries

In small worlds, where risk (quantifiable uncertainty) exists, we solve puzzles: the outcomes are limited and enumerable, as are the decisions we can choose between.

In large (real) worlds, we face mysteries (unquantifiable uncertainty): we can't even imagine all the possible outcomes, we may have to choose from a vast set of actions, and we can also change our decisions over time, which makes the (combinatorial) action space explode.

The reason to distinguish between these two concepts is simple: solving puzzles, i.e. decision making in small-world environments, is radically different from managing the uncertainty of the real world.

A casino is "full of" risk: calculable uncertainty that favours the house. Contrast that with the unknowns a real-world military operation contains.
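The casino case can be made concrete with a worked example. The sketch below (assuming American roulette with 38 pockets and a 35:1 straight-up payout, which is not stated in the text) shows why casino risk is fully calculable: the house edge falls straight out of the known probabilities.

```python
# American roulette: 38 pockets (1-36, 0, 00), straight-up bet pays 35:1.
# Every quantity here is known in advance -- this is "risk", not uncertainty.
p_win = 1 / 38
ev_per_dollar = p_win * 35 + (1 - p_win) * (-1)

# The expected value is negative and exactly computable: -2/38, about -5.3%.
print(f"expected value per $1 bet: {ev_per_dollar:.4f}")
```

No amount of play changes this number; the "favours the house" claim is a theorem, not a forecast. That is precisely what a real-world operation lacks.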

In machine learning, people talk about epistemic vs aleatoric uncertainty.

In finance, people talk about Knightian uncertainty vs risk.

In Taleb's vocabulary, a subset of unquantifiable uncertainty manifests itself as "black swans". In Radical Uncertainty (the book), Kay and King describe a difference between these two, but I can't remember what it is (will look it up).

### Distributions, impact

Risk, quantifiable uncertainty, is usually modelled with a normal (Gaussian) distribution.

Unquantifiable uncertainty is different: it has fat tails. These events may happen very rarely, but their potential impact (because of the impossibility of preparing for them) is an order of magnitude higher than any "foreseeable" tail event.

Taleb's argument is that "black swans" are the events that have fundamentally changed our world, while "foreseeable" events are easy to manage.
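The thin-tail vs fat-tail contrast can be sketched numerically. The snippet below compares the analytic tail probability of a standard normal with that of a Pareto distribution; the tail index `alpha = 1.5` and scale `x_min = 1.0` are illustrative assumptions, not values from the text.

```python
import math

def normal_tail(k: float) -> float:
    """P(X > k) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x: float, alpha: float = 1.5, x_min: float = 1.0) -> float:
    """P(X > x) for a Pareto distribution with tail index alpha."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

# At a "10-sigma" level the Gaussian tail is astronomically small (~1e-23),
# while the Pareto tail is still a few percent -- rare-but-huge events stay live.
for k in (3, 5, 10):
    print(f"x={k:>2}: normal tail {normal_tail(k):.2e}, pareto tail {pareto_tail(k):.2e}")
```

The point is qualitative: under a Gaussian model, extreme events are effectively impossible; under a fat-tailed one, they remain ordinary enough that they eventually dominate outcomes.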

Important differentiation: the impact's distribution is also unknown.

Black swans are events from "unquantifiable uncertainty".

What are the drivers?

### Can we predict black swans, if we've seen some?

The trick is that even if "black swans" appear in past data, they can't easily (or at all?) be transformed into risk: they are so rare, and such outliers, that they don't fit into our quantified framework. Calculating probabilities for processes that don't follow Gaussian distributions is a lot less useful than it seems. It means super-wide confidence intervals, and probably the only informative part is analyzing potential worst-case scenarios. (I'd really want to expand on this.)
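The "super-wide confidence intervals" point can be demonstrated with a small simulation. This sketch (the sample sizes, seed, and the choice of a Pareto process with tail index 1.1 are all my illustrative assumptions) repeatedly computes sample means for a Gaussian process and for a fat-tailed one.

```python
import random
import statistics

random.seed(0)

def sample_means(draw, n_samples=1000, runs=5):
    """Sample mean of n_samples draws from `draw`, repeated over several runs."""
    return [statistics.fmean(draw() for _ in range(n_samples)) for _ in range(runs)]

# Gaussian "risk": sample means cluster tightly around the true mean (0).
gauss = sample_means(lambda: random.gauss(0.0, 1.0))

# Fat-tailed process (Pareto, tail index 1.1): the true mean exists (= 11),
# but a single extreme draw can dominate any finite sample, so sample means
# swing wildly from run to run -- estimated intervals around them are huge.
fat = sample_means(lambda: random.paretovariate(1.1))

print("gaussian means:", [round(m, 3) for m in gauss])
print("fat-tail means:", [round(m, 3) for m in fat])
```

Each Gaussian run lands within a few hundredths of the truth; the fat-tailed runs disagree with each other by whole multiples. Averaging more past data barely helps, which is why historical black swans resist being converted into usable probabilities.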

### Alternative definitions

Lo & Mueller have a more "granular" scale:

- **Level 1**: *Complete Certainty*. All past and future states of the system are determined exactly if initial conditions are fixed and known.
- **Level 2**: *Risk without Uncertainty*. This level of randomness is Knight's (1921) definition of risk: randomness governed by a known probability distribution for a completely known set of outcomes.
- **Level 3**: *Fully Reducible Uncertainty*. This is risk with a degree of uncertainty: uncertainty due to unknown probabilities for a fully enumerated set of outcomes that we presume are still completely known. At this level, classical (frequentist) statistical inference must be added to probability theory as an appropriate tool for analysis.
- **Level 4**: *Partially Reducible Uncertainty*. Situations in which there is a limit to what we can deduce about the underlying phenomena generating the data.
- **Level 5**: *Irreducible Uncertainty*. Irreducible uncertainty refers to a state of total ignorance; ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.

My assumption is that irreducible uncertainty is unmeasurable, but you can increase the size and scope of "quantifiable uncertainty" (risk), and that could make you a somewhat better decision maker.

What does this distinction mean for us, humans?

Our blind spot: we ignore unquantifiable uncertainty.