I have constructed a network myself and use it to train facial landmarks. Training works fine with my own train.prototxt and solver.prototxt, but something strange happens in the testing phase. Since I want to get the network's output for the test images, I wrote a test script in Python (following the example 00-classification.ipynb). When I load the caffemodel and the network for testing, Python raises an Out Of Memory error!
import caffe
import matplotlib.pyplot as plt
import os
# set display defaults
plt.rcParams['figure.figsize'] = (5, 5) # figure size in inches
plt.rcParams['image.interpolation'] = 'nearest' # don't interpolate: show square pixels
plt.rcParams['image.cmap'] = 'gray' # use grayscale output rather than a (potentially misleading) color heatmap
root = '/home/ga47kes/master/300w/data/python/'
test_net = root+'deploy.prototxt'
caffe_model = root+'deploy.caffemodel'
net = caffe.Net(test_net, caffe_model, caffe.TEST)
### import test images
data_dir = '/home/ga47kes/master/300w/data/image_path/test/'
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1)) # HxWxC -> CxHxW, the layout Caffe expects
im = caffe.io.load_image('/home/ga47kes/master/300w/data/300wData/IBUG_(300-W)/rot0/96x96/img/image_0001.png')
transformed_image = transformer.preprocess('data', im)
transformed_image = transformed_image[0, :, :] # keep only the first channel
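For reference, the input declaration in deploy.prototxt looks roughly like this (the values below are a sketch assuming a single 96x96 grayscale input, matching the channel slicing above; the order is batch size, channels, height, width):

```protobuf
input: "data"
input_dim: 1
input_dim: 1
input_dim: 96
input_dim: 96
```

If the first `input_dim` in the actual file is large (for example a training batch size copied over), `caffe.Net` allocates every blob for that whole batch as soon as the network is loaded, which can itself trigger the out-of-memory error; setting it to 1 for single-image testing keeps the allocation small.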
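To make clear what the transformer steps are doing, here is a plain NumPy sketch of the same operations (the 96x96x3 shape is an assumption based on my input images; `caffe.io.load_image` returns an H x W x C float array in [0, 1], and `preprocess` additionally resizes to the data blob's spatial size, which I omit here):

```python
import numpy as np

# stand-in for caffe.io.load_image: an H x W x C float array in [0, 1]
hwc = np.random.rand(96, 96, 3).astype(np.float32)

# set_transpose('data', (2, 0, 1)): move channels first, giving C x H x W
chw = hwc.transpose(2, 0, 1)
print(chw.shape)    # (3, 96, 96)

# the final slice keeps only channel 0, a single 96 x 96 plane
single = chw[0, :, :]
print(single.shape) # (96, 96)
```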