I created a new dataset using the "jitter" options provided by dscompile. I then trained a new network using this dataset and the same configuration file I've used with the other networks: same architecture, same everything except for the dataset used for training.
When I attempt to run detection with this new network and the same configuration file I've been using, I receive a bad_alloc error that traces back to detection_thread.hpp:375. I've traced the original source of the error to detection_thread.hpp:263 (the call to detector::fprop), and memory usage during detection is at least an order of magnitude greater than with my previous networks.
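In case it's useful, this is roughly the pattern I used to localize the failure: wrap the suspect call in a try/catch for std::bad_alloc. This is a minimal, self-contained sketch rather than the actual eblearn code; run_detection() is just a placeholder standing in for the detector::fprop call at detection_thread.hpp:263, and the oversized allocation only mimics the memory blowup I'm observing:

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Placeholder for the detector::fprop call; the huge buffer mimics the
// order-of-magnitude memory increase I see with the jittered network.
void run_detection() {
  std::vector<char> buffer(static_cast<std::size_t>(1) << 46); // ~64 TB
  (void) buffer;
}

int main() {
  try {
    run_detection();
  } catch (const std::bad_alloc &e) {
    // The exception is thrown inside the detection call itself, which is
    // how I pinned the source down to detector::fprop.
    std::cerr << "bad_alloc inside detection call: " << e.what() << std::endl;
    return 1;
  }
  return 0;
}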
Nothing is different about this new network other than a larger training dataset created with jitter.
Why would this network train without a problem but consume absurd amounts of memory during detection?