> Greg Heath wrote:
> > On Oct 9, 3:36 pm, TomH488 <tom...@gmail.com> wrote:
> >> Is this a sound rule?
> >> Would be very handy to determine if an input should be differenced.
> >> But regarding differencing, what about the Constant?
> >> I have found a fuzzy relationship where stock indices which are
> >> slower moving, favor no-differencing, while commodities, which are
> >> faster moving, favor differencing.
> >> Then there are twice differenced inputs. Again what about the
> >> Constant?
> >> If the net needs the constant information, how is that incorporated
> >> into the inputs?
> >> Thanks
> >> Tom
> > Not familiar with PACF.
> > Greg
> Probably Approximately Correct (framework)? (Fuzzy)?
> How come the word "spike" is in the Subject: line, but not in the posting?
Partial AutoCorrelation Functions help determine the significant
feedback lags in a time series. "Partial" takes into account previously
incorporated lags in a stepwise procedure (see Wikipedia for a very
good introduction).
I assume that Partial CrossCorrelation Functions would help determine
the significant input lags in a time series.
Perhaps a good reference will explain which is done first, or whether
both are done simultaneously.
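Not from the thread, but here is a minimal numpy sketch of the PACF idea
via the Durbin-Levinson recursion (the function names and the AR(1)
test series are my own illustration, not anything Tom or Greg posted):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelations r[0..nlags] of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(nlags + 1)])

def pacf(x, nlags):
    """PACF at lags 1..nlags via the Durbin-Levinson recursion."""
    r = acf(x, nlags)
    phi = np.zeros((nlags + 1, nlags + 1))
    phi[1, 1] = r[1]
    out = [r[1]]
    for k in range(2, nlags + 1):
        # Remove the influence of the already-incorporated lags 1..k-1.
        num = r[k] - np.dot(phi[k - 1, 1:k], r[1:k][::-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], r[1:k])
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        out.append(phi[k, k])
    return np.array(out)

# AR(1) series with coefficient 0.8: the PACF should spike near 0.8 at
# lag 1 and be close to zero at higher lags.
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.8 * x[t - 1] + e[t]
p = pacf(x, 5)
```

That "spike at the true feedback lag, nothing beyond it" pattern is
exactly what makes the PACF useful for picking feedback lags.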
On Oct 9, 3:36 pm, TomH488 <tom...@gmail.com> wrote:
> Is this a sound rule?
> Would be very handy to determine if an input should be differenced.
> But regarding differencing, what about the Constant?
> I have found a fuzzy relationship where stock indices which are slower moving, favor no-differencing, while commodities, which are faster moving, favor differencing.
> Then there are twice differenced inputs. Again what about the Constant?
> If the net needs the constant information, how is that incorporated into the inputs?
Constants only affect biases and, if large, can affect numerical
accuracy. That is why I try to standardize (zero-mean, unit variance)
variables before combining them.
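A quick sketch of that standardization (my own example data, chosen to
mimic a price-like column with a large constant offset next to a
tiny-scale column):

```python
import numpy as np

def standardize(X):
    """Column-wise zero-mean, unit-variance scaling.

    Assumes every column has nonzero spread; a constant column would
    divide by zero and should be dropped instead.
    """
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

rng = np.random.default_rng(1)
X = np.column_stack([10000.0 + rng.standard_normal(500),   # big offset
                     0.001 * rng.standard_normal(500)])    # tiny scale
Z = standardize(X)
```

After scaling, both columns sit on comparable footing, so the constant
offset no longer dominates the numerics.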
Classical time series analysis assumes partial stationarity. That is,
a certain number of low order summary statistics are constant
throughout the series. If not, differencing can remove linear trends,
second differencing can remove quadratic trends, etc.
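The trend-removal claim is easy to verify numerically; a tiny sketch
(the particular coefficients are arbitrary illustration):

```python
import numpy as np

t = np.arange(100, dtype=float)
linear = 3.0 * t + 7.0                # linear trend
quad = 0.5 * t**2 + 3.0 * t + 7.0     # quadratic trend

# First differencing turns the linear trend into a constant (the slope).
d1 = np.diff(linear)                  # every element is 3.0

# Second differencing turns the quadratic trend into a constant
# (twice the leading coefficient).
d2 = np.diff(quad, n=2)               # every element is 1.0
```

With noisy real data the differenced series is of course constant only
in expectation, but the trend component is removed the same way.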
Removing non-stationarity and normalization should be done before
determining significant lags via partial correlations.
I'm not sure if this is necessary for nonlinear time series using
neural networks.
That's because I don't know if correlation functions are useful when
nonstationarity is present.
"That's because I don't know if correlation functions are useful when
nonstationarity is present."
I wonder if nonstationarity is responsible for predictions that have the proper shape but a "DC" offset from the actual values?
Suppose a NN is trained with targets that come from a stationary interval, but the inputs span backwards in time over a number of stationary intervals?
What if there were Stationary Type Flag Columns? Suppose you can say stationarity changes no faster than by Quarter (3-month period)? So if you are using 4 years of data for the project, you would provide for the possibility of 4x4=16 regions of uniform stationarity. This would involve 16 columns with a unit step for each region.
However, how would you deal with cases that use 2 years of previous history, or 8 Stat intervals? It is almost like you need a 2nd dimension for each column to show what interval that column came from. Would this be something you would handle with a Complex Number Nnet?
Still, if each case had the stationarity flag associated with its prediction, it would set up a matrix of "weight switches" that could only help predictions.
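As I read the flag-column idea, it amounts to one-hot regime indicators.
A minimal sketch under my own assumptions (roughly 252 trading days per
year, 63 per quarter; the function name is mine):

```python
import numpy as np

def regime_flags(n_rows, rows_per_regime):
    """One-hot 'stationarity flag' columns: one unit-step column per
    assumed-stationary regime, in row order."""
    n_regimes = int(np.ceil(n_rows / rows_per_regime))
    regime = np.arange(n_rows) // rows_per_regime
    flags = np.zeros((n_rows, n_regimes))
    flags[np.arange(n_rows), regime] = 1.0
    return flags

# 4 years of daily rows with quarterly regimes -> 4x4 = 16 flag columns.
F = regime_flags(4 * 252, 63)
```

Each row lights up exactly one column, so the net can learn a separate
bias (and, through interactions, separate behavior) per regime, which is
the "weight switch" effect described above.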
CONTRADICTION (?): Output Lags in Input are Highly Correlated (Cxx).
If Output = P (price), it is customary to have input columns that are lags of P.
This results in both high Cxx and high Cxy, which violates the desire to avoid high Cxx.
Is this a problem or should Lags of Outputs be omitted from Cxx review?
Is it possible to get lags such that they have low Cxx and high Cxy?
I see that being an issue. One possible approach:
If Cxx and Cxy input culling is legitimate, might that not also hold for ACF and PACF cullings?
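The Cxx tension with lagged prices is easy to demonstrate; a sketch
using a simulated random-walk "price" (my own toy data, not a claim
about any particular market):

```python
import numpy as np

rng = np.random.default_rng(2)
# Random-walk levels: successive lagged-level inputs are nearly
# perfectly correlated with each other -> very high Cxx.
P = np.cumsum(rng.standard_normal(5000))

lag1, lag2 = P[1:-1], P[:-2]                 # two lagged-level inputs
cxx_levels = np.corrcoef(lag1, lag2)[0, 1]   # close to 1

# Differencing the lags first: increments of a random walk are white
# noise, so successive differenced lags are nearly uncorrelated.
r = np.diff(P)
cxx_diffs = np.corrcoef(r[1:], r[:-1])[0, 1]  # close to 0
```

So one partial answer to the question above: differencing the lagged
outputs can drive Cxx down, though whether the differenced lags retain
enough Cxy with the target is a separate, series-dependent question.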