Using DetectNet with Caffe python bindings


Darren Eng

Sep 6, 2016, 2:37:45 PM
to DIGITS Users
Hi,

I'm interested in writing a Python script using Caffe that basically does the same thing as "Test Single Image" for object detection in DIGITS. I know I could do this by making REST API calls, but I was wondering if anyone knows how to do it directly in Caffe. Does anyone have any advice on where/how to start?
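Something like this is roughly what I have in mind with pycaffe (an untested sketch; the file names are placeholders, and the output blob name depends on the deploy.prototxt that DIGITS generates for DetectNet):

import caffe

# Placeholders -- point these at the deploy.prototxt, caffemodel snapshot
# and test image from a DIGITS DetectNet job.
MODEL_DEF = 'deploy.prototxt'
MODEL_WEIGHTS = 'snapshot.caffemodel'
IMAGE_FILE = 'test_image.png'

caffe.set_mode_gpu()
net = caffe.Net(MODEL_DEF, MODEL_WEIGHTS, caffe.TEST)

# Shrink the batch dimension to a single image, keeping the C/H/W that the
# deploy file expects.
_, channels, height, width = net.blobs['data'].data.shape
net.blobs['data'].reshape(1, channels, height, width)

# Standard pycaffe preprocessing: HxWxC float RGB in [0,1] -> CxHxW BGR in
# [0,255]. DIGITS may also subtract a mean image during training; that step
# is omitted here.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))

image = caffe.io.load_image(IMAGE_FILE)
net.blobs['data'].data[...] = transformer.preprocess('data', image)

output = net.forward()
# In a DIGITS DetectNet deploy file the clustered detections usually come out
# of a blob named something like 'bbox-list' (check the deploy.prototxt for
# the actual name); each row is roughly [x1, y1, x2, y2, confidence].
for name, blob in output.items():
    print(name, blob.shape)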

Luke Yeager

Sep 6, 2016, 4:04:46 PM
to DIGITS Users

Gonçalo Cruz

Sep 9, 2016, 7:08:35 PM
to DIGITS Users
Hi Darren and Luke,

I have tried to use this example, as I would like to use a trained DetectNet from Python.
I looked into the documentation, but I am getting an error: "Transpose order needs to have the same number of dimensions as the input"

I know this is a classification example, and I have also looked into this issue.
I have tried the model and deploy.prototxt files from bvlc_alexnet and bvlc_googlenet. I am using the current NVIDIA Caffe fork (NVCaffe) as well as the current upstream Caffe.

Looking at the deploy.prototxt (input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }), I guessed that the first dim had to be changed to 1, but that did not solve the issue; neither did keeping it at 10.
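For reference, the preprocessing part of my script looks roughly like this (paths are placeholders); as far as I can tell, the exception is raised inside caffe.io.Transformer before the image is even loaded:

import caffe

# Placeholders for the model files I downloaded.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# If I read caffe/python/caffe/io.py correctly, set_transpose requires the
# transpose order to have exactly one dimension fewer than the 'data' shape
# given to the Transformer, i.e. 'data' must be 4-D (N, C, H, W) here.
print(net.blobs['data'].data.shape)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))  # this is the call that raises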

Can you please provide some guidance on what I am doing wrong?
Best regards

Gonçalo Cruz

Oct 6, 2016, 7:47:36 PM
to DIGITS Users
To follow up on my previous post, I will describe the issue I was facing so that anyone running into a similar problem can find a possible answer.
I was using the wrong versions of GoogLeNet and AlexNet. As soon as I used a snapshot downloaded from DIGITS, I could run the example without any issues.

BR

leonard Strnad

May 2, 2017, 3:28:56 PM
to DIGITS Users
Can we use nvidia-docker to run the same example Python script?