Seems paradoxical, right?

Philip Tetlock’s approach is to:

- Restrict and define what you’re forecasting precisely, with a timeframe, so you can test whether your prediction was right or wrong. Ideally, the outcome is binary by the deadline. See Metaculus. (Forecast = a clearly defined set of outcomes + a timeframe.)
- Draw up all the potential factors that could influence the outcome you’re trying to predict
- Draw up the second-order effects of those factors, and look at their influence on the outcome you’re trying to predict
- Take a diverse set of predictors (humans) and ensemble their probabilities. Definitely look at the distribution of probabilities as well; don’t just average them. (I think Tetlock
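The ensembling step above can be sketched in a few lines. This is my own minimal illustration, not Tetlock’s actual aggregation method: it reports the mean alongside the median and spread, so an average doesn’t hide sharp disagreement among forecasters.

```python
import statistics

def ensemble(probs):
    """Summarize a group's probabilities for one binary forecast.

    Returns the mean, but also the median and spread, so that
    disagreement in the distribution stays visible.
    """
    return {
        "mean": statistics.mean(probs),
        "median": statistics.median(probs),
        "stdev": statistics.pstdev(probs),
        "min": min(probs),
        "max": max(probs),
    }

# Hypothetical forecasts: four forecasters roughly agree, one dissents.
forecasts = [0.6, 0.65, 0.7, 0.2, 0.75]
summary = ensemble(forecasts)  # mean 0.58, median 0.65
```

The gap between mean (0.58) and median (0.65), plus the wide min–max range, is exactly the kind of signal a single averaged number would erase.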

I believe there’s still merit in transforming more “mysteries” into “puzzles”, with the intent to find statistical edges. In new, less accessible territories, your edges may be huge enough to compensate for all the unquantifiable uncertainty involved.

Superforecasting book talk - he talks a lot about the mindset that’s “needed” for good forecasts: a diverse, agnostic kind of worldview.

Also very relevant: Julia Galef’s tool to calibrate your probabilities.
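The core of any such calibration check can be sketched simply (this is a generic calibration-table computation of my own, not Galef’s tool): bucket your past forecasts by stated probability, then compare each bucket’s average stated probability with how often the event actually happened.

```python
from collections import defaultdict

def calibration(forecasts, n_bins=10):
    """Build a calibration table from past forecasts.

    forecasts: list of (probability, outcome) pairs, outcome in {0, 1}.
    Returns {bin_index: (avg stated prob, observed frequency, count)}.
    A well-calibrated forecaster has the first two numbers close together.
    """
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        b = min(int(p * n_bins), n_bins - 1)  # e.g. 0.8 -> bucket 8
        buckets[b].append((p, outcome))
    table = {}
    for b, items in sorted(buckets.items()):
        avg_p = sum(p for p, _ in items) / len(items)
        freq = sum(o for _, o in items) / len(items)
        table[b] = (round(avg_p, 3), round(freq, 3), len(items))
    return table

# Hypothetical track record: things called at 80% happened 3 times out of 4.
history = [(0.8, 1), (0.8, 1), (0.8, 0), (0.8, 1),
           (0.2, 0), (0.2, 0), (0.2, 1), (0.2, 0)]
table = calibration(history)
```

With enough logged forecasts, systematic gaps in this table (say, 80% claims coming true only 60% of the time) are the overconfidence a calibration exercise is meant to expose.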