I just re-read Tushar Chande's book "Beyond Technical Analysis" (Wiley, 1997) and, in it, he devotes an entire chapter to the concept of "data scrambling."
As he describes it, data scrambling involves analyzing the bar-to-bar price changes and then using a random number generator to scramble the sequence of those changes into a new "synthetic" set of price data. A given price change can appear more than once in the scrambled sequence, so this amounts to sampling the changes with replacement. In his words, data scrambling "is the most rigorous out-of-sample testing you can achieve."
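This isn't from the book, but here's a minimal sketch of how I understand the procedure: compute the bar-to-bar changes, draw from them with replacement, and rebuild a synthetic series from the first price. The function name and the sample closes are just my own illustration.

```python
import random

def scramble(prices, seed=None):
    """Build one synthetic price series by resampling the
    bar-to-bar changes with replacement (so a given change
    can appear more than once, as Chande allows)."""
    rng = random.Random(seed)
    # Bar-to-bar changes of the original series.
    changes = [b - a for a, b in zip(prices, prices[1:])]
    # Draw the same number of changes, with replacement.
    drawn = [rng.choice(changes) for _ in changes]
    # Rebuild a price path starting from the first price.
    series = [prices[0]]
    for c in drawn:
        series.append(series[-1] + c)
    return series

closes = [100.0, 101.5, 100.8, 102.2, 103.0, 101.9]
print(scramble(closes, seed=42))
```

Run a strategy over many such synthetic series and you get a distribution of results instead of the single result from the one historical path - which is presumably what Chande means by calling it rigorous out-of-sample testing.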
What do people here think about that statement? On a basic level, Chande's approach makes some degree of sense, but I wonder whether you lose something about the multi-bar behavior of a particular market by scrambling the data the way Chande describes.
For example, the way the US stock market goes up over weeks and months (usually pretty gradually) is quite different from the way it goes down (very dramatically) - wouldn't scrambling the data destroy that behavior? If so, is there some way to take longer samples of data (blocks of bars instead of single bar-to-bar changes) and use those to build a scrambled sequence ... or would we still be at risk of some kind of error in judgment?
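The longer-sample idea in the question is essentially what statisticians call a block bootstrap: resample contiguous blocks of changes instead of single changes, so short runs (gradual rallies, sharp sell-offs) survive inside each block. This is my own sketch, not anything from Chande's book, and the block length is an assumption you'd have to tune.

```python
import random

def block_scramble(prices, block_len=5, seed=None):
    """Resample contiguous blocks of bar-to-bar changes (with
    replacement) so that multi-bar runs up to block_len bars
    long are preserved inside each block."""
    rng = random.Random(seed)
    changes = [b - a for a, b in zip(prices, prices[1:])]
    n = len(changes)
    drawn = []
    while len(drawn) < n:
        # Pick a random starting point and copy a whole block.
        start = rng.randrange(0, max(1, n - block_len + 1))
        drawn.extend(changes[start:start + block_len])
    drawn = drawn[:n]  # trim the last block to the right length
    series = [prices[0]]
    for c in drawn:
        series.append(series[-1] + c)
    return series
```

Even this only preserves dependence up to the block length, though, so very long regimes (a months-long bull run) still get chopped up - which may be exactly the "error in judgment" risk the question is pointing at.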