Seems paradoxical, right?
Philip Tetlock’s approach is to:
- Restrict and precisely define what you're forecasting, with a timeframe, so you can test whether your prediction was right or wrong. Ideally the outcome is binary by resolution time. See Metaculus. (Forecast = a clearly defined set of outcomes + a timeframe)
- Draw up all the potential factors that could influence the outcome you're trying to predict
- Draw up the second-order effects of those factors, and look at their influence on the outcome you're trying to predict
- Take a diverse set of predictors (humans) and ensemble their probabilities. Definitely look at the distribution of probabilities as well, don't just average them; see the sketch after this list. (I think Tetlock…)
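
A minimal sketch of that ensembling step (the forecaster probabilities are made up, and the "extremizing" trick at the end is my assumption, not necessarily Tetlock's exact aggregation method): the point is that a plain average can hide a bimodal split, so look at the median and spread too.

```python
import statistics

# Hypothetical probabilities from a diverse set of forecasters for one
# well-defined binary question (e.g. "Will X happen by <date>?").
forecasts = [0.15, 0.20, 0.25, 0.30, 0.80, 0.85]

mean_p = statistics.mean(forecasts)
median_p = statistics.median(forecasts)
spread = statistics.pstdev(forecasts)

print(f"mean:   {mean_p:.2f}")   # the average hides the bimodal split
print(f"median: {median_p:.2f}")
print(f"stdev:  {spread:.2f}")   # large spread -> the forecasters disagree

# One common aggregation trick (an assumption here, not necessarily what
# Tetlock prescribes): "extremize" the mean away from 0.5, since simple
# averaging tends to pull an ensemble toward 0.5.
def extremize(p: float, a: float = 2.0) -> float:
    """Push probability p away from 0.5 with exponent a, renormalized."""
    return p ** a / (p ** a + (1 - p) ** a)

print(f"extremized mean: {extremize(mean_p):.2f}")
```

If the distribution is clearly bimodal like this, the interesting question is usually why the two camps disagree, not what the average happens to be.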
I believe there’s still merit in transforming more “mysteries” into “puzzles”, with the intent to find statistical edges. In new, less accessible territories, your edges may be huge enough to compensate for all the unquantifiable uncertainty involved.
In his Superforecasting book talk, Tetlock talks a lot about the mindset that's "needed" for good forecasts: a diverse, agnostic kind of worldview.
Also very relevant: Julia Galef’s tool to calibrate your probabilities
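
Not Galef's actual tool, just a minimal sketch of the underlying idea with made-up data: log your stated probabilities and the outcomes, then check how often you were right within each confidence bucket, with a Brier score as a single summary number.

```python
from collections import defaultdict

# Hypothetical log of (stated probability, actual outcome) pairs,
# e.g. from answering calibration-quiz questions over time.
history = [(0.6, 1), (0.6, 0), (0.7, 1), (0.9, 1), (0.9, 1),
           (0.9, 0), (0.55, 0), (0.8, 1), (0.8, 1), (0.65, 1)]

# Brier score: mean squared error between stated probability and outcome.
# 0 is perfect; always answering 0.5 scores 0.25.
brier = sum((p - o) ** 2 for p, o in history) / len(history)
print(f"Brier score: {brier:.3f}")

# Calibration table: within each confidence bucket, how often were you right?
buckets = defaultdict(list)
for p, o in history:
    buckets[round(p, 1)].append(o)

for conf in sorted(buckets):
    outcomes = buckets[conf]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated ~{conf:.0%}: correct {hit_rate:.0%} ({len(outcomes)} questions)")
```

Being well calibrated means the stated confidence and the actual hit rate roughly match; if your hit rate is systematically below your stated confidence, you're overconfident.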