💻

Quantifying some of that unquantifiable uncertainty is still useful

Seems paradoxical, right?

Philip Tetlock’s approach is to:

  1. Restrict and define precisely what you’re forecasting, with a timeframe, so you can test whether the prediction was right or wrong. Ideally, the question resolves to a binary outcome by then. See Metaculus. (Forecast = a clearly defined set of outcomes + a timeframe)
  2. Draw up all the potential factors that could influence the outcome you’re trying to predict
  3. Draw up the second-order effects of those factors, and look at their influence on the outcome you’re trying to predict
  4. Take a diverse set of predictors (humans) and ensemble their probabilities. Definitely look at the distribution of probabilities as well, don’t just average them (see the aggregation sketch below). (I think Tetlock
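
A minimal sketch of that aggregation step in Python; the panel probabilities, the geometric-mean-of-odds helper, and the question wording are illustrative assumptions, not Tetlock’s exact procedure:

```python
import math
import statistics

# Hypothetical probabilities from a diverse panel of forecasters for one
# binary, time-bound question (e.g. "Will X happen before 2026-01-01?").
panel = [0.62, 0.70, 0.55, 0.90, 0.48, 0.66, 0.73]

def geometric_mean_of_odds(probs):
    """Aggregate in odds space rather than probability space; one common
    alternative to a plain arithmetic mean of probabilities."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    agg_odds = math.exp(statistics.mean(log_odds))
    return agg_odds / (1 + agg_odds)

mean_p   = statistics.mean(panel)
median_p = statistics.median(panel)
gmo_p    = geometric_mean_of_odds(panel)

# Look at the spread of the distribution too, not just a single summary number.
spread = statistics.pstdev(panel)

print(f"mean={mean_p:.2f}  median={median_p:.2f}  geo mean of odds={gmo_p:.2f}")
print(f"disagreement: std dev={spread:.2f}, range={min(panel):.2f}-{max(panel):.2f}")
```

A wide spread is itself information: it usually means the forecasters are working from different models or different evidence, which is worth digging into before committing to a single number.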

I believe there’s still merit in transforming more “mysteries” into “puzzles”, with the intent to find statistical edges. In new, less accessible territories, your edges may be huge enough to compensate for all the unquantifiable uncertainty involved.

In his Superforecasting book talk, Tetlock spends a lot of time on the mindset that’s “needed” for good forecasts: a diverse, agnostic kind of worldview.

Also very relevant: Julia Galef’s tool to calibrate your probabilities

https://juliagalef.com/calibration/
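
The underlying idea, sketched in Python with a made-up track record (the numbers below are illustrative, not output from Galef’s tool): bucket your past forecasts by stated confidence, compare each bucket against how often you were actually right, and compute a Brier score as an overall accuracy measure.

```python
from collections import defaultdict

# Hypothetical resolved forecasts: (stated probability, did it happen?).
track_record = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.8, True), (0.8, True), (0.8, False),
    (0.7, True), (0.7, False),
    (0.6, True), (0.6, False), (0.6, False),
    (0.5, True), (0.5, False),
]

# Brier score: mean squared error between stated probability and outcome (0 = perfect).
brier = sum((p - float(hit)) ** 2 for p, hit in track_record) / len(track_record)
print(f"Brier score: {brier:.3f}")

# Calibration: within each confidence bucket, how often were you actually right?
buckets = defaultdict(list)
for p, hit in track_record:
    buckets[p].append(hit)

for p in sorted(buckets):
    hits = buckets[p]
    rate = sum(hits) / len(hits)
    print(f"said {p:.0%}: right {sum(hits)}/{len(hits)} times ({rate:.0%})")
```

If the “said 90%” rows come out right far less than 90% of the time, you’re overconfident at that level; that gap is exactly what a calibration exercise is meant to expose.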