Hi,
I am working on a time series classification topic and need help building my HDF5 input.
For the "data" dataset:
One single example of my training set is made of 3 vectors of length 4096, each vector being a normalized time series.
If we translated this into the image recognition domain, it would be like training on RGB images of height=1 and width=4096.
What is the shape expected by the HDF5_DATA layer?
My initial assumption was that the expected dimensions are N x K x H x W: N examples with K channels, height H, and width W. Is that correct?
So in my case it would be 300,000 x 3 x 1 x 4096.
I'm a bit confused because I read elsewhere that N vectors of length 1000 are represented as an N x 1000 x 1 x 1 tensor.
If that is true, then the correct dimensions for my input would be something like 300,000 x 4096 x 3 x 1.
Which approach seems correct to you?
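To make the question concrete, here is roughly how I would write the "data" dataset under my first assumption (N x K x H x W). This is just a sketch using h5py and random placeholder values, with N reduced to 10 instead of 300,000:

```python
import h5py
import numpy as np

# Toy sizes: 10 examples instead of 300,000, to keep the sketch small.
N, K, H, W = 10, 3, 1, 4096

# First interpretation: channels-first, like RGB images of height 1.
# Each example is 3 normalized time series of length 4096.
data = np.random.randn(N, K, H, W).astype(np.float32)

# Write the dataset the HDF5 data layer would read.
with h5py.File("train.h5", "w") as f:
    f.create_dataset("data", data=data)
```

Under the second interpretation I would instead reshape each example to 4096 x 3 x 1 before stacking.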
For the "label" dataset:
The predicted label can take 500 different values.
Again, what are the expected dimensions of my input layer?
I could go for N x 1 x 1 x 1 and, for each example, store a single float value holding my label, or
I could go for N x 500 x 1 x 1 and store a one-hot vector: the correct label's index set to 1 and all other values set to 0.
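In numpy terms, the two label layouts I am considering would look like this (again a toy sketch with N=10 and random labels, not my real data):

```python
import numpy as np

N, num_classes = 10, 500

# Placeholder integer class labels in [0, 500).
labels = np.random.randint(0, num_classes, size=N)

# Option 1: N x 1 x 1 x 1, a single float per example storing the class index.
label_scalar = labels.astype(np.float32).reshape(N, 1, 1, 1)

# Option 2: N x 500 x 1 x 1, a one-hot vector per example.
label_onehot = np.zeros((N, num_classes, 1, 1), dtype=np.float32)
label_onehot[np.arange(N), labels, 0, 0] = 1.0
```

Either array would then be written to the "label" dataset of the same HDF5 file.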
Thanks for your help,
François