DetectNet fails to initialize when a custom model is loaded on TX2


shlok Purani

Aug 13, 2019, 10:06:24 PM8/13/19
to DIGITS Users

I have a custom model which I trained and tested successfully in DIGITS on the host. Before downloading the model and transferring it onto the Jetson TX2, I deleted the layer named "cluster" at the end of deploy.prototxt:

layer {
  name: "cluster"
  type: "Python"
  bottom: "coverage"
  bottom: "bboxes"
  top: "bbox-list"
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
    param_str: "640, 640, 16, 0.6, 2, 0.02, 22, 1"
  }
}

Without this Python layer, the snapshot can now be imported into TensorRT onboard the Jetson.
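
(A quick sanity check after that edit, assuming deploy.prototxt is in the current directory: the whole layer { ... } block has to go, opening and closing braces included, so the first command below should print nothing and the two brace counts should match.)

$ grep -n 'name: "cluster"' deploy.prototxt       # should print nothing
$ grep -o '{' deploy.prototxt | wc -l             # opening braces...
$ grep -o '}' deploy.prototxt | wc -l             # ...should equal closing braces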

$ NET=object_detection_model
$ ./detectnet-console bottle_0.jpg output_0.jpg \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_33660.caffemodel \
--input_blob=data \
--output_cvg=coverage \
--output_bbox=bboxes

But after running this command, this is what I get. For information, my object_detection folder is in /home/sp, while the images and detectnet-console are loaded from ~/jetson-inference/build/aarch64/bin:

./detectnet-console bottle_0.jpg obj_detect.jpg \ --prototxt=$NET/deploy.prototxt \ --model=$NET/snapshot_iter_33660.caffemodel \ --input_blob=data \ --output_cvg=coverage \ --output_bbox=bboxes
detectnet-console
  args (8):  0 [./detectnet-console]  1 [bottle_0.jpg]  2 [obj_detect.jpg]  3 [--prototxt=object_detection/deploy.prototxt]  4 [--model=object_detection/snapshot_iter_33660.caffemodel]  5 [--input_blob=data]  6 [ --output_cvg=coverage]  7 [ --output_bbox=bboxes]  


detectNet -- loading detection network model from:
          -- prototxt     object_detection/deploy.prototxt
          -- model        object_detection/snapshot_iter_33660.caffemodel
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- class_labels NULL
          -- threshold    0.500000
          -- batch_size   2

[TRT]  TensorRT version 5.0.6
[TRT]  detected model format - caffe  (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU:  FP32, FP16
[TRT]  selecting fastest native precision for GPU:  FP16
[TRT]  attempting to open engine cache file .2.1.GPU.FP16.engine
[TRT]  cache file not found, profiling network model on device GPU
[TRT]  device GPU, loading  
[TRT]  CaffeParser: Could not open file 
[TRT]  CaffeParser: Could not parse model file
[TRT]  device GPU, failed to parse caffe network
device GPU, failed to load 
detectNet -- failed to initialize.
detectnet-console:   failed to initialize detectNet

Well, my model detected objects perfectly in DIGITS on the host, so I am not sure what is causing the issue. @dusty-nv could you please take a look? Thanks. I am trying my best to solve this and will post an update if I find anything. I also made sure I am naming all the files correctly on the command line.
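
(One note in case it helps anyone reading: since detectnet-console is run from ~/jetson-inference/build/aarch64/bin, a relative path like object_detection/deploy.prototxt will not resolve from there, which would explain the blank filename after "CaffeParser: Could not open file". A sketch of the same invocation with NET pointed at an absolute path (the /home/sp location is an assumption) and with no space after the trailing backslashes:)

$ NET=/home/sp/object_detection
$ ./detectnet-console bottle_0.jpg output_0.jpg \
    --prototxt=$NET/deploy.prototxt \
    --model=$NET/snapshot_iter_33660.caffemodel \
    --input_blob=data \
    --output_cvg=coverage \
    --output_bbox=bboxes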

shlok Purani

Aug 15, 2019, 7:35:18 PM8/15/19
to DIGITS Users
A quick update on the previous error: after setting NET=/home/sp/object_detection I get the error below. I am still able to run the sample object detection and image recognition models that come pretrained in the jetson-inference tutorial, though.
I am trying to figure this out and solve it; it would be great if anyone could suggest something. Thanks and cheers.


./detectnet-console bottle_0.jpg obj_detect.jpg --prototxt=$NET/deploy.prototxt --model=$NET/snapshot_iter_33660.caffemodel --input_blob=data \ --output_cvg=coverage \ --output_bbox=bboxes

detectnet-console
  args (8):  0 [./detectnet-console]  1 [bottle_0.jpg]  2 [obj_detect.jpg]  3 [--prototxt=/home/sp/object_detection/deploy.prototxt]  4 [--model=/home/sp/object_detection/snapshot_iter_33660.caffemodel]  5 [--input_blob=data]  6 [ --output_cvg=coverage]  7 [ --output_bbox=bboxes]


detectNet -- loading detection network model from:
          -- prototxt     /home/sp/object_detection/deploy.prototxt
          -- model        /home/sp/object_detection/snapshot_iter_33660.caffemodel
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- class_labels NULL
          -- threshold    0.500000
          -- batch_size   2

[TRT]  TensorRT version 5.0.6
[TRT]  detected model format - caffe  (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU:  FP32, FP16
[TRT]  selecting fastest native precision for GPU:  FP16
[TRT]  attempting to open engine cache file /home/sp/object_detection/snapshot_iter_33660.caffemodel.2.1.GPU.FP16.engine
[TRT]  cache file not found, profiling network model on device GPU
[TRT]  device GPU, loading /home/sp/object_detection/deploy.prototxt /home/sp/object_detection/snapshot_iter_33660.caffemodel
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2173:1: Expected identifier, got: }
[TRT]  CaffeParser: Could not parse deploy file
[TRT]  device GPU, failed to parse caffe network
device GPU, failed to load /home/sp/object_detection/snapshot_iter_33660.caffemodel
detectNet -- failed to initialize.
detectnet-console:   failed to initialize detectNet
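
(The libprotobuf error at 2173:1, "Expected identifier, got: }", usually means the prototxt is left with a stray closing brace, for example if the cluster layer's body was removed but an extra } stayed behind. A quick way to check, assuming the paths above:)

$ sed -n '2165,2175p' /home/sp/object_detection/deploy.prototxt   # context around the reported line
$ grep -o '{' /home/sp/object_detection/deploy.prototxt | wc -l   # opening braces...
$ grep -o '}' /home/sp/object_detection/deploy.prototxt | wc -l   # ...should equal closing braces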
