Hi
could someone please explain the architecture that this instruction creates (ideally graphically)?
model.add(LSTM(128, return_sequences=True, input_shape=(look_back, 1)))
In some discussions I found that 128 is the number of LSTM units in the hidden LSTM layer; in others I found that 128 is the number of LSTM cells in the layer (and that each cell has look_back LSTM units).
Can someone who knows the Python code behind this instruction (where can I find it?) tell me what kind of architecture it builds? How many individual LSTMs (each with a forget gate, input gate, and output gate) are there? 128? 128 x look_back? Something else?
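For context, here is a minimal runnable sketch of the model I am asking about (I picked look_back = 10 arbitrarily, just for the example); model.summary() prints the output shape and the parameter count I am trying to make sense of:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

look_back = 10  # arbitrary value, only for this example

model = Sequential()
model.add(LSTM(128, return_sequences=True, input_shape=(look_back, 1)))
model.summary()
# Output shape: (None, 10, 128), i.e. one 128-dim vector per time step
# Trainable params: 66560 = 4 * 128 * (1 + 128 + 1)
#   (4 gate weight sets, input_dim=1, 128 recurrent units, 1 bias each)

If I read the parameter count right, it does not depend on look_back at all, which is part of what confuses me about the "128 x look_back" interpretation.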
Thanks
Denis