Developing a new set of images

Julien Rhapsodos Girard

May 31, 2016, 1:33:52 AM
to Caffe Users
Good afternoon,


I am currently working on a project using py-faster-rcnn running on a remote server, which sends data to an embedded device (a smartphone or Moverio glasses). The device is intended for household use: identifying objects and displaying information about them on screen.

I installed py-faster-rcnn and I am currently running some tests with the hardware listed below.
Although I am impressed with the results, I would like to improve the processing speed in order to achieve near-real-time rendering.

I have several questions concerning this issue:

- I noticed that the ZF network gives faster processing (about 0.075 s per image for ZF vs. 0.155 s for VGG16, measured roughly as in the first sketch after this list).
What are the differences between these two networks?
- Some of the classes used in the basic installation are not relevant to my project: how can I make my network not recognize them? (See the second sketch after this list for my current workaround.)
- If it turns out to be necessary, I am thinking of building my own training datasets. Do you have any suggestions on
how to proceed?
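
For reference, here is roughly how I time a single detection (a rough sketch adapted from the stock tools/demo.py; the prototxt/caffemodel paths are from my installation, so treat them as placeholders):

# Rough per-image timing, adapted from what tools/demo.py does.
# The prototxt/caffemodel paths below are from my setup (placeholders).
import time
import cv2
import caffe
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect

cfg.TEST.HAS_RPN = True   # the demo uses RPN proposals
caffe.set_mode_gpu()
caffe.set_device(0)

prototxt = 'models/pascal_voc/ZF/faster_rcnn_alt_opt/faster_rcnn_test.pt'
caffemodel = 'data/faster_rcnn_models/ZF_faster_rcnn_final.caffemodel'
net = caffe.Net(prototxt, caffemodel, caffe.TEST)

im = cv2.imread('data/demo/000456.jpg')
im_detect(net, im)        # warm-up pass so first-call overhead does not skew the number

start = time.time()
scores, boxes = im_detect(net, im)
print('Detection took {:.3f}s'.format(time.time() - start))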
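
Regarding the second question, my current workaround is simply to skip the unwanted classes when post-processing the detections, along the lines of the sketch below (the CLASSES tuple is the one from the stock tools/demo.py; the WANTED set is just an example of what I might keep for the household use case). I suspect the proper answer involves retraining with a reduced class list, hence the question.

# Current workaround: keep the full 21-class PASCAL VOC model and simply
# skip the detections whose class I do not care about when drawing results.
CLASSES = ('__background__',
           'aeroplane', 'bicycle', 'bird', 'boat',
           'bottle', 'bus', 'car', 'cat', 'chair',
           'cow', 'diningtable', 'dog', 'horse',
           'motorbike', 'person', 'pottedplant',
           'sheep', 'sofa', 'train', 'tvmonitor')

# Example of the classes I might keep (my own choice, not from the demo).
WANTED = {'bottle', 'chair', 'diningtable', 'pottedplant', 'sofa', 'tvmonitor'}

def keep_wanted(scores, boxes):
    """Yield (class_name, class_scores, class_boxes) for wanted classes only.

    scores: (num_proposals, num_classes) array returned by im_detect
    boxes:  (num_proposals, 4 * num_classes) array returned by im_detect
    """
    for cls_ind, cls_name in enumerate(CLASSES[1:], start=1):
        if cls_name not in WANTED:
            continue
        cls_boxes = boxes[:, 4 * cls_ind:4 * (cls_ind + 1)]
        cls_scores = scores[:, cls_ind]
        yield cls_name, cls_scores, cls_boxes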

Hardware:
GeForce GTX Titan X
Intel Core i7-5930K CPU @ 3.50GHz
8 × 4 GB RAM, 2133 MHz

Deep learning is still quite new to me (I am a first-year Master's student), so don't hesitate to go into detail in your answers.
I followed the tutorial available at http://caffe.berkeleyvision.org/tutorial/ and installed py-faster-rcnn from this

Thank you for your answers,

Julien Girard