How to input variables into an LSTM model?

mateoricfarm

Sep 29, 2021, 3:14:40 AM
to TensorFlow Lite
Hello Everyone,

I am studying TFLite to implement ML on my device,
and I am having trouble feeding inputs to the TFLite model.
In the Python code, the LSTM input shape is (10, 1).
But how am I supposed to feed that input to the TFLite model in C++ code?
Every search result I find only explains an input_shape of (1, 1).
Thanks in advance for your reply.

-------Python Code to implement ML model-------
.....
model_x = Sequential()
model_x.add(LSTM(20, input_shape=(10,1)))
model_x.add(Dense(1))
model_x.compile(loss='mean_squared_error', optimizer='adam')

model_x.summary()

-------C++ Code to implement TFLITE model-------
int main() {
    std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("model_x.tflite");
    TFLITE_MINIMAL_CHECK(model != nullptr);
    
    // Build the interpreter with the InterpreterBuilder.
    // Note: all Interpreters should be built with the InterpreterBuilder,
    // which allocates memory for the Interpreter and does various set up
    // tasks so that the Interpreter can read the provided model.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder builder(*model, resolver);
    std::unique_ptr<tflite::Interpreter> interpreter;
    builder(&interpreter);
    TFLITE_MINIMAL_CHECK(interpreter != nullptr);

    // Allocate tensor buffers.
    TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);

    // Fill input buffers
    // TODO(user): Insert code to fill input tensors.
    // Note: The buffer of the input tensor with index `i` of type T can
    // be accessed with `T* input = interpreter->typed_input_tensor<T>(i);`
    interpreter->typed_input_tensor<float>(0)[0] = 0.503892;

    // vector<float> inputs = {    0.503892, 0.50349951,
    //                             0.50574788, 0.50799569,
    //                             0.49414472, 0.49407101,
    //                             0.50661932, 0.49726027,
    //                             0.50289517, 0.48882814 };
        
    const std::vector<int>& t_inputs = interpreter->inputs();
    TfLiteTensor* tensor = interpreter->tensor(t_inputs[0]);

    int input_size = tensor->dims->size;
    int batch_size = tensor->dims->data[0];
    int h = tensor->dims->data[1];
    int w = tensor->dims->data[2];
    int channels = tensor->dims->data[3];

    fprintf(stderr, "%d, %d, %d, %d, %d\n",
        input_size, batch_size, h, w, channels);  // result 3, 1, 10, 1, 0

    // Run inference
    TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
    // printf("\n\n=== Post-invoke Interpreter State ===\n");
    // tflite::PrintInterpreterState(interpreter.get());

    // Read output buffers
    // TODO(user): Insert getting data out code.
    // Note: The buffer of the output tensor with index `i` of type T can
    // be accessed with `T* output = interpreter->typed_output_tensor<T>(i);`
    float* result = interpreter->typed_output_tensor<float>(0);
    fprintf(stderr, "predict : %f\n", *result);

    return 0;
}


Yu-Cheng Ling

Sep 30, 2021, 5:35:06 PM
to mateoricfarm, TensorFlow Lite
Hi, 

Thanks for reaching out!

Please note that the shape parameters in the Keras API omit the batch dimension.
The following code in your snippet

model_x.add(LSTM(20, input_shape=(10,1)))
model_x.add(Dense(1))

... will produce a model with input shape [None, 10, 1] and output shape [None, 1] in the equivalent TensorFlow & TensorFlow Lite models.
The first dimension is the batch size, and None means it can be changed.
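As a side note: since the batch dimension is None, the C++ interpreter can in principle resize it before allocating buffers. A minimal sketch, assuming input index 0 and that the converted model tolerates resizing (not something you need for batch size 1):

    // Hypothetical: request batch size 4 instead of the default 1,
    // then re-allocate the tensor buffers at the new shape.
    interpreter->ResizeInputTensor(interpreter->inputs()[0], {4, 10, 1});
    interpreter->AllocateTensors();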

Regarding this piece of your C++ code:

    int input_size = tensor->dims->size;
    int batch_size = tensor->dims->data[0];
    int h = tensor->dims->data[1];
    int w = tensor->dims->data[2];
    int channels = tensor->dims->data[3];
    fprintf(stderr, "%d, %d, %d, %d, %d\n",
        input_size, batch_size, h, w, channels);  // result 3, 1, 10, 1, 0

Please note that tensor->dims->size is 3, so accessing tensor->dims->data[3] is out of bounds.
The proper input tensor shape is [1, 10, 1], which is consistent with my comment above (with batch size = 1).
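A bounds-safe way to print the shape is to loop up to dims->size instead of assuming four dimensions; a minimal sketch, again assuming input index 0:

    TfLiteTensor* tensor = interpreter->tensor(interpreter->inputs()[0]);
    fprintf(stderr, "rank %d:", tensor->dims->size);
    for (int d = 0; d < tensor->dims->size; ++d) {
      fprintf(stderr, " %d", tensor->dims->data[d]);  // prints: rank 3: 1 10 1
    }
    fprintf(stderr, "\n");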
You can fill the data into the input tensor with either the interpreter->typed_input_tensor or interpreter->tensor functions (using memcpy or a simple loop).
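For example, a minimal sketch of both options, assuming input index 0 and using the ten values from your commented-out vector:

    float values[10] = {0.503892f, 0.50349951f, 0.50574788f, 0.50799569f,
                        0.49414472f, 0.49407101f, 0.50661932f, 0.49726027f,
                        0.50289517f, 0.48882814f};
    // Option 1: copy all ten floats into the [1, 10, 1] input buffer at once.
    memcpy(interpreter->typed_input_tensor<float>(0), values, sizeof(values));
    // Option 2: a simple loop over the same buffer.
    float* input = interpreter->typed_input_tensor<float>(0);
    for (int i = 0; i < 10; ++i) input[i] = values[i];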

Hope these help. 

Best,
YC




mateoricfarm

Sep 30, 2021, 8:46:58 PM
to TensorFlow Lite, Yu-Cheng Ling, TensorFlow Lite, mateoricfarm
Thanks for the reply.

I've tried a simple loop, as in the code below,
but the output value seems to be just the same as the input value.
Could you let me know how to implement the code so it predicts the 11th value?
 
-------C++ Code to implement TFLITE model-------
  vector<float> inputs = {    0.503892, 0.50349951,
                              0.50574788, 0.50799569,
                              0.49414472, 0.49407101,
                              0.50661932, 0.49726027,
                              0.50289517, 0.48882814 };
  
  vector<float>::iterator iter;
  for (iter = inputs.begin(); iter != inputs.end(); iter++){
    interpreter->typed_input_tensor<float>(0)[0] = *iter;
    TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
    float* result = interpreter->typed_output_tensor<float>(0);
    fprintf(stderr, "predict : %f\n", *result);
  }

-------Execution Result-------
predict : 0.503892
predict : 0.503500
predict : 0.505748
predict : 0.507996
predict : 0.494145
predict : 0.494071
predict : 0.506619
predict : 0.497260
predict : 0.502895
predict : 0.488828
On Friday, October 1, 2021 at 6:35:06 AM UTC+9, Yu-Cheng Ling wrote:

mateoricfarm

Oct 1, 2021, 12:07:20 AM
to TensorFlow Lite, mateoricfarm, Yu-Cheng Ling, TensorFlow Lite
Dear Everyone,

I changed the C++ code as YC suggested earlier;
please refer to the code snippet below.
The execution result is still just the first input value.
What am I missing to get a proper predicted value?

-------C++ Code to implement TFLITE model-------
  float inputs[1][10][1]; // same shape as the model's input tensor

  inputs[0][0][0] = 0.503892;
  inputs[0][1][0] = 0.50349951;
  inputs[0][2][0] = 0.50574788;
  inputs[0][3][0] = 0.50799569;
  inputs[0][4][0] = 0.49414472;
  inputs[0][5][0] = 0.49407101;
  inputs[0][6][0] = 0.50661932;
  inputs[0][7][0] = 0.49726027;
  inputs[0][8][0] = 0.50289517;
  inputs[0][9][0] = 0.48882814;
  
  memcpy(interpreter->typed_output_tensor<float>(0), inputs, sizeof(inputs)); // memcpy to set input values
  TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
  float* result = interpreter->typed_output_tensor<float>(0);
  fprintf(stderr, "predict value: %f\n", *result); // predict value: 0.503892
-------C++ Code Ends-------

I tested the same tflite model with Python code to see whether it behaves as expected,
and confirmed that it produces the inference result I expected.
-------Python Code to test the TFLITE model-------
  # Load the TFLite model and allocate tensors.
  interpreter = tf.lite.Interpreter(model_path="model/model_x.tflite")
  interpreter.allocate_tensors()

  # Get input and output tensors.
  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  # Test the model on the prepared input data.
  input_shape = input_details[0]['shape']
  interpreter.set_tensor(input_details[0]['index'], [acc_x_tt[0]])

  interpreter.invoke()

  # The function `get_tensor()` returns a copy of the tensor data.
  # Use `tensor()` in order to get a pointer to the tensor.
  output_data = interpreter.get_tensor(output_details[0]['index'])
  print(output_data)  # 0.48679245
On Friday, October 1, 2021 at 9:46:58 AM UTC+9, mateoricfarm wrote:

Yu-Cheng Ling

Oct 1, 2021, 7:33:38 PM
to mateoricfarm, TensorFlow Lite
It seems to me you have a typo in this line

    memcpy(interpreter->typed_output_tensor<float>(0), inputs, sizeof(inputs)); // memcpy to set input values

typed_output_tensor should be typed_input_tensor, shouldn't it?

Best,
YC

mateoricfarm

Oct 1, 2021, 7:45:15 PM
to TensorFlow Lite, Yu-Cheng Ling, TensorFlow Lite, mateoricfarm
Thanks for the reply.

There was a typo, as you mentioned.
I corrected it and ran it again, but I got the same result.
I've attached the whole C++ code.

-------C++ Code to implement TFLITE model-------
#include <cstdio>
#include <cstring>  // for memcpy
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

using namespace std;

#define TFLITE_MINIMAL_CHECK(x)                              \
  if (!(x)) {                                                \
    fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
    exit(1);                                                 \
  }

int main() {
  std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("model_x.tflite");
  TFLITE_MINIMAL_CHECK(model != nullptr);
  
  tflite::ops::builtin::BuiltinOpResolver resolver;
  tflite::InterpreterBuilder builder(*model, resolver);
  std::unique_ptr<tflite::Interpreter> interpreter;
  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);

  // Allocate tensor buffers.
  TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);

  float inputs[1][10][1];

  inputs[0][0][0] = 0.503892;
  inputs[0][1][0] = 0.50349951;
  inputs[0][2][0] = 0.50574788;
  inputs[0][3][0] = 0.50799569;
  inputs[0][4][0] = 0.49414472;
  inputs[0][5][0] = 0.49407101;
  inputs[0][6][0] = 0.50661932;
  inputs[0][7][0] = 0.49726027;
  inputs[0][8][0] = 0.50289517;
  inputs[0][9][0] = 0.48882814;
  
  memcpy(interpreter->typed_input_tensor<float>(0), inputs, sizeof(inputs));
  TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
  float* result = interpreter->typed_output_tensor<float>(0);
  fprintf(stderr, "predict value: %f\n", *result); // predict value: 0.503892

  return 0;
}


On Saturday, October 2, 2021 at 8:33:38 AM UTC+9, Yu-Cheng Ling wrote:

mateoricfarm

Oct 5, 2021, 9:31:26 PM
to TensorFlow Lite, mateoricfarm, Yu-Cheng Ling, TensorFlow Lite
Dear All,

I've written some code with the same structure but different inputs, to make the issue a bit easier to understand.
Could anyone tell me why the model keeps returning the first input value as its result?

--------Python Code to implement tflite model--------
from numpy import array
from keras.models import Sequential
from keras.layers import Dense, LSTM

x = array(
        [
            [1,2,3,4,5,6,7,8,9,10],
            [2,3,4,5,6,7,8,9,10,11],
            [3,4,5,6,7,8,9,10,11,12],
            [4,5,6,7,8,9,10,11,12,13],
            [5,6,7,8,9,10,11,12,13,14],
            [6,7,8,9,10,11,12,13,14,15],
            [7,8,9,10,11,12,13,14,15,16],
            [8,9,10,11,12,13,14,15,16,17],
            [9,10,11,12,13,14,15,16,17,18],
            [10,11,12,13,14,15,16,17,18,19],
            [11,12,13,14,15,16,17,18,19,20],
            [12,13,14,15,16,17,18,19,20,21],
            [13,14,15,16,17,18,19,20,21,22]
        ])
y = array([55,65,75,85,95,105,115,125,135,145,155,165,175])

x = x.reshape((x.shape[0], 10, 1))

model = Sequential()
model.add(LSTM(20, input_shape=(10,1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='loss', patience=10, verbose=1, mode='auto')
model.fit(x, y, epochs=10000, batch_size=1, verbose=2, callbacks=[early_stop])

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with tf.io.gfile.GFile('model/lstm_add.tflite', 'wb') as f:
    f.write(tflite_model)
--------Python Code Ends--------

--------C++ Code to run inference--------
#include <cstdio>
#include <cstring>  // for memcpy
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

using namespace std;

#define TFLITE_MINIMAL_CHECK(x)                              \
  if (!(x)) {                                                \
    fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
    exit(1);                                                 \
  }

int main() {
  std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("model/lstm_add.tflite");
  TFLITE_MINIMAL_CHECK(model != nullptr);
  
  tflite::ops::builtin::BuiltinOpResolver resolver;
  tflite::InterpreterBuilder builder(*model, resolver);
  std::unique_ptr<tflite::Interpreter> interpreter;
  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);

  // Allocate tensor buffers.
  TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
 
  float inputs[1][10][1];

  inputs[0][0][0] = 14;
  inputs[0][1][0] = 15;
  inputs[0][2][0] = 16;
  inputs[0][3][0] = 17;
  inputs[0][4][0] = 18;
  inputs[0][5][0] = 19;
  inputs[0][6][0] = 20;
  inputs[0][7][0] = 21;
  inputs[0][8][0] = 22;
  inputs[0][9][0] = 23;
  
  float* input = interpreter->typed_input_tensor<float>(0);

  memcpy(input, inputs, sizeof(inputs));
  TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
  float* result = interpreter->typed_output_tensor<float>(1);
  fprintf(stderr, "predict value: %f\n", *result); // predict value: 14.000000

  return 0;
}
--------C++ Code Ends--------

On Saturday, October 2, 2021 at 8:45:15 AM UTC+9, mateoricfarm wrote: