I want to work on sequence prediction/generation: given a sequence, the goal is to predict the values that follow.
For example, given a straight line with slope k, I use the first part of the sequence (such as [0.01, 0.02, 0.03, ..., 0.80]) as the training set, and I hope the network can learn the pattern the line follows. After training, I feed the network that same sequence and expect it to generate the continuation, like [0.81, 0.82, ...].
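To make the setup concrete, here is a minimal sketch of one way such input/target pairs could be built (the window length `timesteps` and the variable names are illustrative assumptions, not my exact code): each sample is a window of consecutive values, and the target is the value right after the window.

```python
import numpy as np

# Sliding-window pairs: each sample is `timesteps` consecutive values,
# and the target is the single value that comes right after the window.
seq = np.arange(0.01, 0.81, 0.01)   # [0.01, 0.02, ..., 0.80]
timesteps = 10                      # illustrative window length

X = np.array([seq[i:i + timesteps] for i in range(len(seq) - timesteps)])
y = seq[timesteps:]

X = X.reshape(-1, timesteps, 1)     # (samples, timesteps, input_dim=1)
y = y.reshape(-1, 1)                # one scalar target per window
```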
I use only one LSTM layer to form a simple network; the Keras code is shown below:
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import LSTM
from keras.optimizers import RMSprop

model = Sequential()
# one LSTM layer: input_dim=1, output_dim=128 (old-style Keras arguments)
model.add(LSTM(1, 128, init='glorot_normal', truncated_gradient=-1))
model.add(Dense(128, 1))
model.add(Activation('relu'))

rms = RMSprop(clipnorm=5.0)
model.compile(loss='mse', optimizer=rms)
```
However, after training for 100 epochs the loss converges to roughly 0.003. I then feed the model one sample and predict one number at a time, iterating 100 times to generate the next 100 numbers. The output turns out not to be linear, like this:
The red line is the ground truth, and the blue line shows the predictions, which should also be a straight line but are not.
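For clarity, the generation step looks roughly like this (a sketch, assuming the windowed format above; each predicted value is fed back in as input for the next step):

```python
# Start from the last known window and feed each prediction back in,
# sliding the window one step forward, to generate 100 new values.
window = seq[-timesteps:].reshape(1, timesteps, 1)
generated = []
for _ in range(100):
    nxt = model.predict(window)[0, 0]   # predicted next scalar
    generated.append(nxt)
    window = np.append(window[:, 1:, :], [[[nxt]]], axis=1)
```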
I think this linear pattern is so simple that the network shouldn't need to be complicated. Is there anything wrong with my model architecture or my training method (the format of the inputs and outputs)? Or is my whole approach wrong (and if so, what should I do to predict sequences)?
Thanks!