Using TF Lite Interpreter in Android


Andika Tanuwijaya

Dec 31, 2017, 9:16:18 PM
to Discuss
Hi, I am trying to use my own model for inference on Android. I am using the project at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo as a baseline and modified the ImageClassifier class to look like this:

  String classifyFrame(Bitmap bitmap) {
    if (tflite == null) {
      Log.e(TAG, "Image classifier has not been initialized; Skipped.");
      return "Uninitialized Classifier.";
    }
    convertBitmapToByteBuffer(bitmap);
    // Here's where the magic happens!!!
    long startTime = SystemClock.uptimeMillis();
    tflite.run(imgData, outData);
    long endTime = SystemClock.uptimeMillis();
    Log.d(TAG, "Timecost to run model inference: " + Long.toString(endTime - startTime));
//    String textToShow = printTopKLabels();
//    textToShow = Long.toString(endTime - startTime) + "ms" + textToShow;
    String textToShow = "";
    return textToShow;
  }

But when I run it on my phone, I always get this exception and the app force-closes:
01-01 07:45:53.678 7731-7757/android.example.com.tflitecamerademo E/AndroidRuntime: FATAL EXCEPTION: CameraBackground
                                                                                    Process: android.example.com.tflitecamerademo, PID: 7731
                                                                                    java.lang.IllegalArgumentException: Invalid handle to Interpreter.
                                                                                        at org.tensorflow.lite.NativeInterpreterWrapper.getInputDims(Native Method)
                                                                                        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:82)
                                                                                        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:112)
                                                                                        at org.tensorflow.lite.Interpreter.run(Interpreter.java:93)
                                                                                        at com.example.android.tflitecamerademo.ImageClassifier.classifyFrame(ImageClassifier.java:111)
                                                                                        at com.example.android.tflitecamerademo.Camera2BasicFragment.classifyFrame(Camera2BasicFragment.java:666)
                                                                                        at com.example.android.tflitecamerademo.Camera2BasicFragment.access$900(Camera2BasicFragment.java:71)
                                                                                        at com.example.android.tflitecamerademo.Camera2BasicFragment$5.run(Camera2BasicFragment.java:560)
                                                                                        at android.os.Handler.handleCallback(Handler.java:751)
                                                                                        at android.os.Handler.dispatchMessage(Handler.java:95)
                                                                                        at android.os.Looper.loop(Looper.java:154)
                                                                                        at android.os.HandlerThread.run(HandlerThread.java:61)

The frozen model is generated using the Python API below.

import tensorflow as tf

import tempfile
import subprocess

tf.contrib.lite.tempfile = tempfile
tf.contrib.lite.subprocess = subprocess

tf.reset_default_graph()
eval_graph = tf.Graph()

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with eval_graph.as_default() as g, tf.Session(config=config, graph=eval_graph) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], "models")
    inputs = g.get_tensor_by_name("inputs:0")
    outputs = g.get_tensor_by_name("outputs:0")

    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [inputs], [outputs])
    with open("test.tflite", "wb") as f:
        f.write(tflite_model)


Do I need to use runForMultipleInputsOutputs instead of run, since during conversion I passed arrays of inputs and outputs? Thanks in advance.

Andika Tanuwijaya

Jan 1, 2018, 1:24:34 AM
to Discuss
Is it possible that the generated lite file is broken? The model itself is Johnson's style-transfer architecture (with padding), normally 6+ MB, but the generated lite file is around 50 KB, which is suspicious. I thought the conversion was successful, since when I ran the conversion script I only got this message:

2018-01-01 13:22:15.033533: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
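One cheap sanity check on the emitted file (stdlib only; the filename is a placeholder): a TFLite model is a FlatBuffer that carries the file identifier "TFL3" at bytes 4-8. Passing this check does not prove the graph converted completely (a model with its weights stripped would still pass), but a failure means the file is not a TFLite buffer at all:

```python
def looks_like_tflite(path):
    """Return True if the file carries the TFLite FlatBuffer identifier."""
    with open(path, "rb") as f:
        header = f.read(8)
    # Bytes 0-3 are the FlatBuffer root-table offset; bytes 4-8 hold the
    # file identifier "TFL3" for TFLite models.
    return len(header) == 8 and header[4:8] == b"TFL3"
```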

Andika Tanuwijaya

Jan 1, 2018, 2:43:06 AM
to Discuss
Never mind, I did not realize that I needed to freeze the graph first; I thought the Python API would handle that automatically. Now I get the expected unsupported-operation errors during conversion with TOCO.
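For reference, a minimal sketch of the missing freezing step, using the TF 1.x `graph_util.convert_variables_to_constants` API; the "models" directory and the "outputs" node name mirror the conversion script above and are assumptions about the saved model, not a tested recipe:

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel, then bake the variable values into constants
    # so TOCO sees a self-contained graph instead of dangling Variable ops.
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "models")
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["outputs"])  # node name, no ":0"
```

The resulting `frozen_graph_def` would then be passed to `toco_convert` in place of `sess.graph_def`.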

Bartłomiej Hołota

Mar 7, 2018, 9:03:56 AM
to Discuss
I'm getting the same error as yours with a very simple model: https://gist.github.com/bholota/f67bcc9b51c079486f9c2322e53f8861. Could you share your fix?

Agathi Tam

Feb 27, 2020, 12:12:37 AM
to Discuss
Hi,
I am getting the same output for any input (image) using TFLite (Java). How do I overcome this issue?