I build a network (the issue does not depend on the network architecture) with a standard HDF5Data input layer that creates 'data' and 'label' blobs. After I call net.forward(), the contents of the 'label' blob have changed, as seen by inspecting net.blobs['label'].data. This happens even though the only layers touching 'label' are the standard HDF5Data input layer and the standard SoftmaxWithLoss loss layer (which is only supposed to read from it, not write to it).
Why is this happening? And how can I work around it so that I read the correct labels from the network after it has run? I want to visualize predictions vs. ground-truth labels by doing something like this:
import caffe

# Load the trained network in test mode and run one forward pass
net = caffe.Net(test_net_filename, trained_network_model_filename, caffe.TEST)
net.forward()

last_layer = 'score'
slice_net = net.blobs['data'].data[0]               # first input image in the batch
predictions = net.blobs[last_layer].data.argmax(1)  # predicted class per position
labels_net = net.blobs['label'].data[0]             # ground-truth labels (the blob in question)
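One thing worth noting (an assumption about the cause, not a confirmed diagnosis): in pycaffe, `net.blobs[...].data` is a live NumPy view into buffers the net reuses, so any array you grab without copying can silently change on the next forward pass. Snapshotting with `.copy()` right after `net.forward()` would decouple your saved labels from the buffer. A minimal NumPy sketch of the view-vs-copy distinction:

```python
import numpy as np

# Simulate a blob buffer that the framework reuses between forward passes.
blob_buffer = np.array([1.0, 2.0, 3.0])

view = blob_buffer[:]          # a view: later writes to the buffer show through
snapshot = blob_buffer.copy()  # an independent copy: frozen at this point

# Simulate the next forward() overwriting the buffer in place.
blob_buffer[:] = 0.0

print(view)      # reflects the overwrite: [0. 0. 0.]
print(snapshot)  # still holds the original values: [1. 2. 3.]
```

Applied to the snippet above, that would mean `labels_net = net.blobs['label'].data[0].copy()` (and likewise for `predictions`) if you need the values to survive subsequent calls into the net.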