Goodman’s “New riddle of induction” is very much related [need to process it]
We’ve talked about how models are only useful as long as the underlying process that generates the data doesn’t change. Could we detect when the underlying process changes, then?
I assume this would be a “trying to predict a phase transition” kind of thing.
A couple of examples of models like this:
- “Buy the dip” (on almost any stock index) has been an excellent trading/investment strategy for the last 14 years.
- The unprecedented economic growth of the last century
- The unprecedented advancements in science / technology of the last century
- The seemingly plateauing population growth of humans
- The unprecedented increase in life expectancy of humans over the last century
- Any kind of economic or societal model that has only observed the “regime” of the 20th century
My hypothesis is that the more widely known a model is, the less useful it is to a participant or decision maker, and the more valuable it becomes to know its exceptions, or to predict when it will start breaking.
The problem is:
How do you model something you have very little data on (fewer than 5 samples), or none at all?
Assuming that the underlying process is “hidden”, are there proxy metrics that would suggest that the process has changed, or is changing, or will change?
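One classical approach to the “has the process changed?” question, when you do have a stream of observations, is change-point detection. A minimal sketch using a one-sided CUSUM chart (my illustration, not something from the notes above; the `drift` and `threshold` parameters are arbitrary choices):

```python
import random

def cusum_detect(samples, target_mean, drift=0.5, threshold=8.0):
    """One-sided CUSUM: return the first index where the cumulative
    positive deviation from target_mean exceeds threshold, else None."""
    s = 0.0
    for i, x in enumerate(samples):
        # Accumulate deviations beyond the allowed drift; floor at zero.
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return None

random.seed(42)
# Regime 1: mean 0; regime 2 (after index 100): mean shifts to +2.
data = [random.gauss(0, 1) for _ in range(100)] + \
       [random.gauss(2, 1) for _ in range(50)]
alarm = cusum_detect(data, target_mean=0.0)
print(alarm)  # alarm fires shortly after the regime shift at index 100
```

This only works when the old regime’s statistics are known and the change shows up in the monitored quantity, which is exactly what the “hidden process” framing puts in doubt; it detects a shift after it has produced data, not before.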
Maybe, for some easy-to-exploit “statistical edges”, that moment is when that edge becomes too widely known.
Maybe monitoring the “volatility” of the process could provide useful information. If we follow Taleb’s idea that suppressing/compressing volatility increases the probability of a shock, then decreasing volatility could itself be a warning signal.
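The volatility-compression idea above could be operationalized as a crude heuristic: track a rolling standard deviation and flag sustained declines. A sketch (the window and lookback lengths are arbitrary assumptions, and a monotone decline is a deliberately naive criterion):

```python
import statistics

def rolling_volatility(series, window=20):
    """Trailing-window standard deviation at each step."""
    return [statistics.stdev(series[i - window:i])
            for i in range(window, len(series) + 1)]

def volatility_compressing(series, window=20, lookback=5):
    """Heuristic flag: has rolling volatility fallen monotonically
    over the last `lookback` readings? Per the Taleb-style argument,
    this compression may precede a shock rather than signal safety."""
    vols = rolling_volatility(series, window)
    recent = vols[-lookback:]
    return all(a > b for a, b in zip(recent, recent[1:]))

# Oscillations with decaying amplitude: volatility is being compressed.
series = [((-1) ** i) * (50 - i) for i in range(50)]
print(volatility_compressing(series, window=10))  # → True
```

Note the inversion relative to most risk dashboards: here falling volatility raises the flag instead of lowering it.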