What do you guys think?
Cross-validation, sometimes called rotation estimation, is the statistical practice of partitioning a sample of data into subsets such that the analysis is initially performed on a single subset, while the other subset(s) are retained for subsequent use in confirming and validating the initial analysis.
The initial subset of data is called the training set; the other subset(s) are called validation or testing sets.
I don't see the problem. Using your example: if through optimization it is determined that the best parameters for January are 5,20, then these are used to trade the month of February, and the results become part of the out-of-sample performance. Next, an optimization is done with data through to the end of February, and these (possibly new) parameters are used to trade March; the performance for March is appended to the out-of-sample performance data. And so on.
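The monthly procedure described above can be sketched in a few lines of Python. This is only an illustration: `backtest` is a hypothetical placeholder for a real strategy backtest, and the parameter grid stands in for whatever parameters (like the 5,20 above) are being optimized.

```python
def backtest(params, data):
    """Placeholder: run the strategy with `params` on `data` and
    return a performance score. A toy function stands in for real P&L."""
    fast, slow = params
    return sum(data) * fast - slow

def optimize(data, grid):
    """Pick the parameter set with the best in-sample result."""
    return max(grid, key=lambda p: backtest(p, data))

def walk_forward(monthly_data, grid):
    """Optimize on all data up to month i, trade month i with the
    winning parameters, and stitch the out-of-sample results together."""
    oos_results = []
    for i in range(1, len(monthly_data)):
        in_sample = [x for month in monthly_data[:i] for x in month]
        best = optimize(in_sample, grid)           # e.g. (5, 20) after January
        oos_results.append(backtest(best, monthly_data[i]))  # trade next month
    return oos_results
```

Each entry in the returned list is one month traded purely out-of-sample, which is exactly the stitched performance record described above.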
Theoretically, a dynamic or real-time optimization may be interesting. Let's say that while JBT is doing a forward/trading session, an optimization process keeps optimizing on past daily market data and updates the trading strategy with the latest parameters … Sounds crazy, but it may be interesting for those extreme momentum trading strategies.
I thought about dynamic parameters, too. This would become a sort of "meta-optimization". For example:
1. Optimize on a particular subset of data (say the last 20 trading days)
Yes, that could work. Would you add the 20 (days) as an optimizable parameter too?
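A minimal sketch of that idea, treating the lookback window itself as one more optimizable parameter alongside the strategy parameters. All names are hypothetical, and `backtest` is again a toy stand-in for a real backtest engine.

```python
def backtest(params, data):
    """Placeholder scoring function standing in for a real backtest."""
    fast, slow = params
    return sum(data) * fast - slow

def meta_optimize(daily_data, param_grid, window_grid):
    """Search over (window, params) jointly: for each candidate lookback
    window, score every parameter set on only the last `window` days,
    and keep the combination with the best in-sample score."""
    best = None
    for window in window_grid:
        recent = daily_data[-window:]          # e.g. the last 20 trading days
        for params in param_grid:
            score = backtest(params, recent)
            if best is None or score > best[0]:
                best = (score, window, params)
    return best[1], best[2]  # chosen window and parameters
```

The risk, of course, is that adding the window to the search space makes it easier to overfit, so the walk-forward test described earlier would still be needed to judge the combined result out-of-sample.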