TFLite BERT: DataType error: cannot resolve DataType of [Ljava.lang.Object

Liuyi Jin

Dec 7, 2021, 12:51:51 AM
to TensorFlow Lite
Hi Everyone,

I have converted a customized BERT classifier model into a pb model and then into a TFLite model. When I followed the official BERT QA example and tried to run my customized TFLite model on an Android phone, I got the DataType error shown in the attached picture.

Screenshot from 2021-12-06 23-47-14.png

I am wondering what the correct way is to convert a BERT checkpoint into a TFLite model. In the serving function used to convert the BERT checkpoint to a pb file, I pass a dict of inputs. In that case, how should I organize the input to the BERT model for inference on Android?

Thanks,
Liuyi

Lu Wang

Dec 7, 2021, 12:02:04 PM
to Liuyi Jin, TensorFlow Lite
Hi Liuyi,

It would be helpful if you could attach your conversion script, the tflite model, and your inference code. From the error message, it seems like you fed the wrong input data type, i.e. an Object, while `Interpreter` expects a float/int32/int64/uint8 array or a ByteBuffer.
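
For reference, a minimal sketch of feeding a BERT-style model on the Java side, assuming three int32 input tensors of shape [1, maxSeqLength] and a single float output; the file name, input order, and output shape here are assumptions you would need to verify against your own model:

import org.tensorflow.lite.Interpreter;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

public class BertInferenceSketch {
  public static void main(String[] args) {
    int maxSeqLength = 128;  // assumed sequence length

    // Inputs must be primitive arrays (or ByteBuffers), not generic Object arrays.
    int[][] inputIds = new int[1][maxSeqLength];
    int[][] inputMask = new int[1][maxSeqLength];
    int[][] segmentIds = new int[1][maxSeqLength];
    // ... fill these from your tokenizer ...

    float[][] logits = new float[1][2];  // assumed two-class classifier output

    // On Android you would typically memory-map the model from assets instead.
    Interpreter interpreter = new Interpreter(new File("model.tflite"));
    // The order of this array must match the model's input tensor order.
    Object[] inputs = {inputIds, inputMask, segmentIds};
    Map<Integer, Object> outputs = new HashMap<>();
    outputs.put(0, logits);
    interpreter.runForMultipleInputsOutputs(inputs, outputs);
    interpreter.close();
  }
}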

Best,
Lu

Liuyi Jin

Dec 7, 2021, 12:37:09 PM
to TensorFlow Lite, Lu Wang, Liuyi Jin
Hi Lu,

Thanks for your reply.

Here is my pb model conversion python code:

# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
    use_tpu=FLAGS.use_tpu,
    model_fn=model_fn,
    config=run_config,
    params=estimator_params,
    train_batch_size=FLAGS.train_batch_size,
    eval_batch_size=FLAGS.eval_batch_size,
    predict_batch_size=FLAGS.predict_batch_size)

# define the input function
def serving_input_fn():

    input_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_ids')
    input_mask = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_mask')
    segment_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='segment_ids')

    input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn({
        'input_ids': input_ids,
        'input_mask': input_mask,
        'segment_ids': segment_ids})()
    return input_fn

estimator._export_to_tpu = False
estimator.export_savedmodel(FLAGS.export_dir, serving_input_fn, checkpoint_path=FLAGS.init_checkpoint)
print("exported pb files can be found in /" + FLAGS.export_dir)

Here is my pb2tflite conversion python code:

def pb2tflite():
    saved_model_dir = os.path.join(FLAGS.export_dir, str(cfg['bert_eval']['saved_model_ts']))
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
#    converter = tf.contrib.lite.TFLiteConverter.from_saved_model(saved_model_dir)

    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
        tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
    ]

#    converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(saved_model_dir)
#    converter = tf.compat.v1.lite.TocoConverter.from_saved_model(saved_model_dir) # path to the SavedModel directory
    print("start converting...")
    tflite_model = converter.convert()
    print("converting finished! start writing the tflite model")

    if not tf.io.gfile.exists(FLAGS.tflite_dir):
        tf.io.gfile.mkdir(FLAGS.tflite_dir)

    # Save the tflite model.
#    tflite_file = os.path.join(FLAGS.tflite_dir, 'model.tflite')
    tflite_file = os.path.join(FLAGS.tflite_dir, FLAGS.model_dir + '.tflite')
    with open(tflite_file, 'wb') as f:
        f.write(tflite_model)

I've attached my TFLite inference Java file (basically I am referring to this TensorFlow example).
My TFLite model is around 500 MB; I've attached a Google Drive link here. I guess you should already have access to it.

My confusion is that to convert the TensorFlow model, I need to feed a dict there. But for inference, dict is not a basic data type in Java, so how can I feed the inputs in Java? Should I use a more convenient tool like Model Maker released by TensorFlow to achieve what I want? Thanks.
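
As an illustration of what I mean, I think I would have to inspect the converted model's input tensors by index and feed plain int arrays, something like this rough sketch (the printed names depend on how the converter renamed the dict keys 'input_ids', 'input_mask', and 'segment_ids', so the mapping would still need to be verified):

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;

import java.io.File;
import java.util.Arrays;

public class InspectTfliteInputs {
  public static void main(String[] args) {
    Interpreter interpreter = new Interpreter(new File("model.tflite"));
    // The dict keys become named input tensors; Java addresses them by index,
    // so print each index's name, dtype, and shape to know which array goes where.
    for (int i = 0; i < interpreter.getInputTensorCount(); i++) {
      Tensor t = interpreter.getInputTensor(i);
      System.out.printf("input %d: name=%s dtype=%s shape=%s%n",
          i, t.name(), t.dataType(), Arrays.toString(t.shape()));
    }
    interpreter.close();
  }
}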

Best,
Liuyi
QaClient.java

Khanh LeViet

Dec 7, 2021, 4:11:23 PM
to Liuyi Jin, TensorFlow Lite, Lu Wang
I think the easiest way to train a BERT text classifier is to use Model Maker. There are some graph modifications required when converting the BERT model to TFLite that are non-trivial to reproduce.
Besides, if you want to use a model in an Android app, you should use MobileBERT instead of the original BERT because the original BERT is too large.

Then, if you train a BERT/MobileBERT text classification model with Model Maker, integrating it into an Android app can be done in a few lines of code using the Task Library.
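
For reference, the Task Library integration looks roughly like this (a sketch assuming a BERT-based classifier exported from Model Maker and bundled in the app's assets as model.tflite):

import android.content.Context;

import org.tensorflow.lite.support.label.Category;
import org.tensorflow.lite.task.text.nlclassifier.BertNLClassifier;

import java.io.IOException;
import java.util.List;

public class BertClassifierSketch {
  // Classify a piece of text with a BERT-based model exported from Model Maker.
  public static void classify(Context context, String text) throws IOException {
    BertNLClassifier classifier =
        BertNLClassifier.createFromFile(context, "model.tflite");
    List<Category> results = classifier.classify(text);
    for (Category category : results) {
      System.out.println(category.getLabel() + ": " + category.getScore());
    }
    classifier.close();
  }
}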

Hope this helps.

--
Le Viet Gia Khanh (カン)
TensorFlow Developer Advocate


Shibuya Stream 
3-21-3 Shibuya, Shibuya-ku, Tokyo
150-0002, Japan

Liuyi Jin

Dec 7, 2021, 4:48:39 PM
to TensorFlow Lite, Khanh LeViet, Lu Wang, Liuyi Jin
Hi Khanh,

Thank you so much for the information. Actually, I am looking at Model Maker's BERT support and trying to use it to deploy a BERT model onto my Android phone. However, we need a customized BERT trained from the original BERT checkpoint, so we cannot just use MobileBERT directly.

My plan is to first build a customized BERT, then compress it, and finally deploy the compressed customized BERT onto my Android phone. I'm not sure whether this approach is right, or whether I can instead get a customized MobileBERT directly from the original MobileBERT, which would save me the effort of compressing the customized BERT. Can I get a customized MobileBERT by training from the original MobileBERT directly? If so, where can I find relevant resources about this? Thank you.

Best,
Liuyi

Khanh LeViet

Dec 7, 2021, 5:03:19 PM
to Liuyi Jin, TensorFlow Lite, Lu Wang
You can take a look here to learn how to train MobileBERT directly without using Model Maker.

However, it only contains documentation on how to train MobileBERT for the QA task. If you need to do a text classification task, you'll need to look at the source code of Model Maker and try to replicate what it does.

Liuyi Jin

Dec 7, 2021, 5:13:21 PM
to TensorFlow Lite, Khanh LeViet, Lu Wang, Liuyi Jin
Thank you so much, Khanh. This may help me out. I will check this out.