bad_alloc error - network trained on a larger dataset causes memory errors during detection?


Nate V

Aug 2, 2016, 11:51:14 AM
to eblearn
I've previously been successful in training and running detection with several networks in eblearn.

I created a new dataset using the "jitter" options provided by dscompile. I then trained a new network on this dataset with the same configuration file I've used for the other networks: same architecture, same everything except the training dataset.

When I attempt to run detection with this new network and the same configuration file I've been using, I get a bad_alloc error that traces back to detection_thread.hpp:375. The original source of the error is detection_thread.hpp:263 (the call to detector::fprop), and memory usage during detection is at least an order of magnitude greater than with my previous networks.
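
For context, here is a minimal, self-contained sketch of the kind of wrapper that confirms the throw site and how much memory the process is holding when it happens. It's plain C++ on a Linux host, not eblearn code: run_fprop and resident_memory are hypothetical stand-ins, with run_fprop taking the place of the detector::fprop call.

// Minimal sketch, assuming Linux; not eblearn's actual API.
// run_fprop() is a hypothetical stand-in for the detector::fprop
// call at detection_thread.hpp:263.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <new>
#include <string>
#include <vector>

// Read the process' resident set size from /proc (Linux only).
static std::string resident_memory() {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line))
    if (line.compare(0, 6, "VmRSS:") == 0) return line;
  return "VmRSS: unknown";
}

// Stand-in for the forward pass: requests an absurdly large buffer so
// the catch path below fires, mimicking the failure mode seen here.
static void run_fprop() {
  std::vector<double> huge(static_cast<std::size_t>(1) << 55);  // ~256 PiB, always fails
  (void)huge;
}

int main() {
  try {
    run_fprop();
  } catch (const std::bad_alloc &) {
    std::cerr << "bad_alloc during fprop, " << resident_memory() << std::endl;
    return 1;
  }
  return 0;
}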

Nothing is different about this new network other than a larger training dataset created with jitter.

Why would this network train without a problem but consume absurd amounts of memory during detection?
