Cannot convert a symbolic Keras input/output to a numpy array with TF Lite converter


James B.

Jul 17, 2021, 5:39:59 AM
to TensorFlow Lite
I need to implement a custom 2D convolution layer that is not supported by TensorFlow or TF Lite, so I tried to define it following this link: https://www.tensorflow.org/guide/create_op (to get a TF operator for the op) and this link: https://www.tensorflow.org/lite/guide/ops_custom (to get a TF Lite operator for it). I can train the model; however, when I try to convert the model containing the operator with the TF Lite converter, I get this error:

TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

The example code is like this:

import tensorflow as tf
tf.config.run_functions_eagerly(True)
from keras.datasets import mnist
from keras.models import Model
from keras.layers import add, Input, Activation, Flatten, Dense

def convol(inp):
    # Custom convolution op loaded from the compiled shared library
    conv_module = tf.load_op_library('./conv.so')
    x = conv_module.conv(inp, name="Conv")
    return x

def read_mnist(path):
    (train_x, train_y), (test_x, test_y) = mnist.load_data()
    return train_x, train_y, test_x, test_y

def tcn(train_x, train_y, test_x, test_y):
    inp = Input(shape=(28, 28))
    x = convol(inp)
    x = Flatten()(x)
    x = Dense(10, activation='softmax')(x)
    model = Model(inputs=inp, outputs=x)
    model.summary()
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.fit(train_x, train_y, batch_size=100, epochs=10,
              validation_data=(test_x, test_y))
    pred = model.evaluate(test_x, test_y, batch_size=100)
    print('test_loss:', pred[0], '- test_acc:', pred[1])

# Train and evaluate the model, then try to convert the custom op
train_x, train_y, test_x, test_y = read_mnist('MNIST_data')
tcn(train_x, train_y, test_x, test_y)

tflite_model_name = 'net'
inp = Input(shape=(28, 28))
# This is the conversion step that raises the TypeError above
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [tf.function(convol).get_concrete_function(inp)])
converter.allow_custom_ops = True
tflite_model = converter.convert()
open(tflite_model_name + '.tflite', 'wb').write(tflite_model)

I think the problem is that the TF Lite converter receives a symbolic Keras tensor (a placeholder) rather than a NumPy array, but I'm not sure how I could convert the model if I have to use a TF Lite custom op.
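
For reference, this is the kind of thing I was thinking of trying instead: tracing the tf.function from a tf.TensorSpec rather than a Keras Input, so the converter never sees a symbolic Keras tensor. It's only a sketch (convol_fn is just an illustrative name, and it still assumes the same ./conv.so library as above):

import tensorflow as tf

# Sketch: trace the custom op with a TensorSpec instead of a Keras Input,
# so get_concrete_function never receives a symbolic Keras tensor.
conv_module = tf.load_op_library('./conv.so')

@tf.function(input_signature=[tf.TensorSpec(shape=[None, 28, 28], dtype=tf.float32)])
def convol_fn(inp):
    return conv_module.conv(inp, name="Conv")

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [convol_fn.get_concrete_function()])
converter.allow_custom_ops = True
tflite_model = converter.convert()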

Thanks in advance.


Sachin Joglekar

Jul 28, 2021, 4:45:50 PM
to TensorFlow Lite, James B., Jaesung Chung, Karim Nosseir
Jaesung/Karim, could you take a look?

Jaesung Chung

Jul 28, 2021, 10:53:19 PM
to Sachin Joglekar, TensorFlow Lite, James B., Karim Nosseir
To work around the Keras/NumPy issue, instead of the concrete-function converter API, is it possible to export the model as a SavedModel and try the TFLiteConverter.from_saved_model API instead?
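
Something roughly like the following might work (untested sketch; 'saved_model_dir' is only an example path, and 'model' is the trained Keras model from your tcn() function):

import tensorflow as tf

# Untested sketch of the SavedModel route.
model.save('saved_model_dir')  # exports the trained Keras model as a SavedModel

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
converter.allow_custom_ops = True
tflite_model = converter.convert()

with open('net.tflite', 'wb') as f:
    f.write(tflite_model)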

Cristina Augello

Jul 30, 2021, 2:49:55 AM
to Jaesung Chung, Sachin Joglekar, TensorFlow Lite, Karim Nosseir
Thanks for the help. I have tried exporting the model as a SavedModel: now the TF Lite converter says that the op isn't supported (as expected). I'll try to register it.