I have a custom model which I trained and tested successfully in DIGITS on the host. Before downloading the model and transferring it to the Jetson TX2, I deleted the layer named "cluster" at the end of deploy.prototxt:
layer {
  name: "cluster"
  type: "Python"
  bottom: "coverage"
  bottom: "bboxes"
  top: "bbox-list"
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
    param_str: "640, 640, 16, 0.6, 2, 0.02, 22, 1"
  }
}
Without this Python layer, the snapshot can now be imported into TensorRT onboard the Jetson.
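For reference, the deletion can also be done mechanically instead of by hand. Below is a hypothetical sketch (not the tool I used) that strips the "cluster" layer with awk, assuming each layer block starts with "layer {" at column 0 and ends with "}" at column 0, with nested braces indented; it is demonstrated on a tiny sample file:

```shell
# Hypothetical sketch: strip the DetectNet "cluster" layer from a prototxt.
# Assumes layer blocks open with "layer {" and close with "}" at column 0.
cat > /tmp/deploy.prototxt <<'EOF'
layer {
  name: "bboxes"
  type: "Convolution"
}
layer {
  name: "cluster"
  type: "Python"
  python_param {
    module: "caffe.layers.detectnet.clustering"
  }
}
EOF
awk '
  /^layer \{/ { buf = $0 ORS; inlayer = 1; next }   # start buffering a layer
  inlayer     { buf = buf $0 ORS
                if (/name: "cluster"/) drop = 1     # mark the cluster layer
                if (/^\}/) {                        # layer block closed
                  if (!drop) printf "%s", buf       # keep non-cluster layers
                  inlayer = 0; drop = 0
                }
                next }
  { print }                                         # lines outside any layer
' /tmp/deploy.prototxt > /tmp/deploy_trt.prototxt
cat /tmp/deploy_trt.prototxt    # only the "bboxes" layer remains
```

Deleting the whole block this way keeps the braces balanced, which is easy to get wrong when editing the file manually.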
$ NET=object_detection
$ ./detectnet-console bottle_0.jpg output_0.jpg \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_33660.caffemodel \
--input_blob=data \
--output_cvg=coverage \
--output_bbox=bboxes
But after running this command, this is what I get. Just for information: my object_detection folder is in /home/sp, while the images and the detectnet-console binary are loaded from ~/jetson-inference/build/aarch64/bin.
./detectnet-console bottle_0.jpg obj_detect.jpg \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_33660.caffemodel \
--input_blob=data \
--output_cvg=coverage \
--output_bbox=bboxes
detectnet-console
args (8): 0 [./detectnet-console] 1 [bottle_0.jpg] 2 [obj_detect.jpg] 3 [--prototxt=object_detection/deploy.prototxt] 4 [--model=object_detection/snapshot_iter_33660.caffemodel] 5 [--input_blob=data] 6 [ --output_cvg=coverage] 7 [ --output_bbox=bboxes]
detectNet -- loading detection network model from:
-- prototxt object_detection/deploy.prototxt
-- model object_detection/snapshot_iter_33660.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- class_labels NULL
-- threshold 0.500000
-- batch_size 2
[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading
[TRT] CaffeParser: Could not open file
[TRT] CaffeParser: Could not parse model file
[TRT] device GPU, failed to parse caffe network
device GPU, failed to load
detectNet -- failed to initialize.
detectnet-console: failed to initialize detectNet
My model detected objects perfectly in DIGITS on the host, but I am not sure what is causing the issue here. @dusty-nv, could you please take a look? Thanks. I am still trying my best to solve this and will post an update if I find anything. I also made sure I am naming all the files correctly on the command line:
./detectnet-console bottle_0.jpg obj_detect.jpg \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_33660.caffemodel \
--input_blob=data \
--output_cvg=coverage \
--output_bbox=bboxes
detectnet-console
args (8): 0 [./detectnet-console] 1 [bottle_0.jpg] 2 [obj_detect.jpg] 3 [--prototxt=/home/sp/object_detection/deploy.prototxt] 4 [--model=/home/sp/object_detection/snapshot_iter_33660.caffemodel] 5 [--input_blob=data] 6 [ --output_cvg=coverage] 7 [ --output_bbox=bboxes]
detectNet -- loading detection network model from:
-- prototxt /home/sp/object_detection/deploy.prototxt
-- model /home/sp/object_detection/snapshot_iter_33660.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- class_labels NULL
-- threshold 0.500000
-- batch_size 2
[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/sp/object_detection/snapshot_iter_33660.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading /home/sp/object_detection/deploy.prototxt /home/sp/object_detection/snapshot_iter_33660.caffemodel
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2173:1: Expected identifier, got: }
[TRT] CaffeParser: Could not parse deploy file
[TRT] device GPU, failed to parse caffe network
device GPU, failed to load /home/sp/object_detection/snapshot_iter_33660.caffemodel
detectNet -- failed to initialize.
detectnet-console: failed to initialize detectNet
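Update: the libprotobuf error at 2173:1 ("Expected identifier, got: }") suggests a stray closing brace was left behind in deploy.prototxt when the cluster layer was deleted. A quick sanity check is to compare the counts of opening and closing braces; on a valid prototxt the two numbers should be equal. A sketch, using a throwaway sample file with a deliberately leftover brace:

```shell
# Sketch: an unbalanced brace count points to a leftover "}" in the prototxt.
cat > /tmp/deploy_bad.prototxt <<'EOF'
layer {
  name: "coverage"
  type: "Sigmoid"
}
}
EOF
opens=$(grep -o '{' /tmp/deploy_bad.prototxt | wc -l)
closes=$(grep -o '}' /tmp/deploy_bad.prototxt | wc -l)
echo "open=$opens close=$closes"   # one extra "}" -> counts differ by one
```

If the counts differ on the real deploy.prototxt, the extra brace is most likely near line 2173, right where the cluster layer used to end.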