Hello everyone,
I would like to use MobileNet (ssd_mobilenet_v1_android_export.pb) in an Android app to classify objects/persons and draw bounding boxes around them. I have already found a TensorFlow Mobile example app (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android) which fulfills my requirements, but the detection and bounding box calculation take about 200 ms. By 200 ms I mean the runtime of the native run() method in the TensorFlowInferenceInterface class.
Apart from that, I found a TensorFlow Lite example app (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo) which performs image classification (no bounding boxes) with the mobilenet_v1_1.0_224.tflite model within 60 ms. The 60 ms refers to the runtime of the native run() method in the Interpreter class, which is called from the runInference() method of ImageClassifierFloatMobilenet.java.
After reading the TensorFlow Lite documentation about model conversion (https://www.tensorflow.org/lite/convert/cmdline_examples), I wondered whether the reason for this large performance difference is a model that was not converted correctly. So I tried to convert ssd_mobilenet_v1_android_export.pb to a .tflite file with multiple outputs. My questions are:
1) If I need both classification and bounding boxes from MobileNet, which output_arrays parameters do I have to use when converting the model?
2) Are there any other reasons why TensorFlow Mobile is so much slower than TensorFlow Lite? Is the protobuf format the reason?
Below is the example code from the TensorFlow Lite documentation about model conversion; I don't know how to adapt it to convert MobileNet for bounding box detection and classification.
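For reference, the classification example from the command-line documentation looks roughly like this (the paths are just placeholders):

```
tflite_convert \
  --output_file=/tmp/mobilenet_v1_1.0_224.tflite \
  --graph_def_file=/tmp/mobilenet_v1_1.0_224/frozen_graph.pb \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1
```

And this is my attempt to adapt it for the SSD model. I guessed the input/output node names (image_tensor, detection_boxes, detection_scores, detection_classes, num_detections) from the TF Mobile detection demo, so they may well be wrong, and I don't know whether all ops in this graph are even supported by the converter:

```
tflite_convert \
  --output_file=/tmp/ssd_mobilenet_v1.tflite \
  --graph_def_file=/tmp/ssd_mobilenet_v1_android_export.pb \
  --input_arrays=image_tensor \
  --input_shapes=1,300,300,3 \
  --output_arrays=detection_boxes,detection_scores,detection_classes,num_detections \
  --allow_custom_ops
```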