I have 3 questions regarding the library.
1. I am having a problem using big data sets. I am running eblearn-1.2_x64-r2631-win64, but I am getting this error: fseek to position -2147468236 failed, file is -1890488716 big, in ebl::midx<float>::mget at c:\eblearn\core\libidx\include\idx.hpp:1122. The negative numbers look like a 32-bit signed file-offset overflow, i.e. the dataset file is larger than 2 GB.
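To show what I mean by the overflow, here is a quick sketch (plain Python, not EBLearn code): the "true" positive values below are just the two numbers from the error message reinterpreted, on the assumption that they were truncated to a signed 32-bit integer somewhere in the seek path.

```python
import ctypes

def as_int32(n):
    """Truncate an integer to signed 32 bits, as a 32-bit fseek offset would be."""
    return ctypes.c_int32(n & 0xFFFFFFFF).value

true_seek_pos = 2147499060   # just past the 2 GiB boundary
true_file_size = 2404478580  # a ~2.2 GiB dataset file

print(as_int32(true_seek_pos))   # -2147468236, the position in the error
print(as_int32(true_file_size))  # -1890488716, the size in the error
```

If this reading is right, the fix would be to use 64-bit offsets (e.g. `_fseeki64` on Windows) in the file I/O, or to split the dataset below 2 GB.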
2. I am experimenting with dropout, but so far I have not been able to configure it the way I would like. Maybe it is just my poor understanding of the dropout concept, or my very limited understanding of the library code. What really bothers me is the test_time parameter. It is a required parameter which, during training (during bprop), only works if set to 0, so the only option is to set it to 0 for training. But I believe that during the test phase, after the training iteration is done and the statistics over the train/validation sets are computed, it should be set to test_time = 1 (do not drop, only rescale the outputs by (1-prob)). Am I missing something?
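To make sure we mean the same thing, here is a minimal NumPy sketch of the dropout semantics I would expect (generic textbook dropout, not EBLearn's actual module; `prob` is assumed to be the probability of dropping a unit). In this formulation test time multiplies the activations by (1-prob) so their expectation matches training; the "inverted" variant instead divides by (1-prob) during training and makes test time a no-op.

```python
import numpy as np

def dropout(x, prob, test_time, rng):
    """Generic dropout sketch: drop units during training, rescale at test time."""
    if test_time:
        # Test phase: keep every unit, scale so the expected activation
        # matches the training phase.
        return x * (1.0 - prob)
    # Training phase: zero out each unit independently with probability `prob`.
    mask = rng.random(x.shape) >= prob
    return x * mask

rng = np.random.default_rng(0)
x = np.ones(8)
train_out = dropout(x, prob=0.5, test_time=0, rng=rng)  # some units zeroed
test_out = dropout(x, prob=0.5, test_time=1, rng=rng)   # all units scaled by 0.5
```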
3. As far as I understand, you are using stochastic gradient descent. Is mini-batch learning supported in your library?
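By mini-batch learning I mean averaging the gradient over a small batch of samples before each weight update, instead of updating after every single sample. A minimal sketch on least squares (my own illustration, not EBLearn code; all names are hypothetical):

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=4, lr=0.05, epochs=200, seed=0):
    """Mini-batch gradient descent on the least-squares loss ||Xw - y||^2."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)  # reshuffle samples each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # Gradient averaged over the batch instead of a single sample.
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true          # noiseless linear targets
w = minibatch_sgd(X, y)  # w should converge close to w_true
```

Setting batch_size=1 recovers plain per-sample SGD, which is why I am asking whether the batch size is configurable.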
Thank you in advance for any answers.
Also, you have mentioned the Torch library multiple times. Can you tell me whether CUDA support, both for training and for classification/detection, is much better in Torch than in EBLearn?