Simulating a TFLM inference on ModelSim


Joan Mihali

Oct 11, 2021, 10:05:17 AM10/11/21
to SIG Micro
Hello everybody,

I am currently working on the development of an MCU architecture based on the CV32E40P core. As part of this work, we are simulating example applications with the QuestaSim simulator.

At the moment, we are trying to simulate a simple TensorFlow Lite Micro inference program on the system. For that, I would like to compile TFLM as a static library, so that I can link it against the inference program's object files and produce an ELF executable. The ELF will be read during the simulation and will provide the inputs to the system.

The question is: how do I compile TFLM as a static library? I found instructions online for compiling TFLM as a shared library, but a shared library would make our task too complicated.

Cheers!
Joan



Advait Jain

Oct 15, 2021, 2:05:02 PM10/15/21
to Joan Mihali, SIG Micro
Hi Joan,

You can compile a static library via the microlite target.

For example:

  make -f tensorflow/lite/micro/tools/make/Makefile microlite

will create a static lib for x86 at:

  tensorflow/lite/micro/tools/make/gen/linux_x86_64_default/lib/libtensorflow-microlite.a

Having said that, TFLM does not have any notion of public headers. So, to use this static library, you would likely have to also import the full TFLM tree.

My guess is that you might have more success building TFLM from source for your specific target by following the new platform support documentation.

Regards,
Advait

