Supported SoCs

SoC          Supported Kernel
RTL8721Dx    KM4
RTL8720E     KM4
RTL8726E     KM4, DSP
RTL8730E     CA32

Overview

TensorFlow Lite for Microcontrollers is an open-source library: a port of TensorFlow Lite designed to run machine learning models on DSPs, microcontrollers, and other devices with limited memory.

Ameba-tflite-micro is a version of the TensorFlow Lite Micro library for Realtek Ameba SoCs with platform-specific optimizations, and it is available in ameba-rtos.


Build TensorFlow Lite Micro Library

To build the TensorFlow Lite Micro library, enable the tflite_micro configuration in the SDK menuconfig.

  1. Switch to the gcc project directory

    cd {SDK}/amebadplus_gcc_project
    ./menuconfig.py
    
  2. Navigate through the menu path to enable tflite_micro

    --------MENUCONFIG FOR General---------
    CONFIG TrustZone  --->
    ...
    CONFIG APPLICATION  --->
       GUI Config  --->
       ...
       AI Config  --->
          [*] Enable TFLITE MICRO
    

Build Examples

Examples related to TensorFlow Lite for Microcontrollers are in the {SDK}/component/example/tflite_micro directory.

To build an example image such as tflm_hello_world:

./build.py -a tflm_hello_world

Tutorial

MNIST Introduction

The MNIST database (Modified National Institute of Standards and Technology database) is a large collection of handwritten digits. In this tutorial, the MNIST database is used to show the full workflow from training a model to deploying it and running inference on Ameba SoCs with tflite-micro.

Example code is in the {SDK}/component/example/tflite_micro/tflm_mnist directory.

Note

Steps 1-4 prepare the necessary files on a development machine (a server, PC, etc.). You can skip them and use the prepared files to build the image.

Step 1. Train a Model

Use Keras (TensorFlow) or PyTorch to train a classification model for the 10 digits of the MNIST dataset. The example uses a simple convolution-based model; it trains for several epochs and then tests accuracy. A minimal sketch of such a model is shown after the list below.

  • Run script

    python keras_train_eval.py --output keras_mnist_conv
    
  • Due to the limited computation resources and memory of microcontrollers, pay attention to the model size and operation count. In keras_train_eval.py, the keras_flops library is used to report FLOPs:

    from keras_flops import get_flops
    
    model.summary()
    flops = get_flops(model, batch_size=1)
    
  • After training, the Keras model is saved in SavedModel format in the keras_mnist_conv folder.
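
For reference, here is a minimal sketch of the kind of small convolution-based MNIST classifier described above, together with the keras_flops check. The exact architecture, hyperparameters, and training options in keras_train_eval.py may differ; the layer sizes and epoch count below are illustrative assumptions, and only the SavedModel output path is chosen to match the convert step.

# Minimal sketch of a small convolutional MNIST classifier.
# Layer sizes, epochs, and training options are illustrative assumptions,
# not the exact values used by keras_train_eval.py.
import tensorflow as tf
from keras_flops import get_flops

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0   # (N, 28, 28, 1)
x_test = x_test[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Keep model size and FLOPs small enough for the target microcontroller.
model.summary()
print("FLOPs:", get_flops(model, batch_size=1))

model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)

# Save in SavedModel format for the conversion step (on Keras 3 use model.export();
# on TF 2.x / Keras 2, model.save() with a directory path also writes SavedModel).
model.export("keras_mnist_conv/saved_model")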

Step 2. Convert to Tflite

In this stage, post-training integer quantization is applied to the trained model, and the result is written in .tflite format. Float model inference is also supported on Ameba SoCs; however, integer quantization is recommended because it greatly reduces computation and memory usage with little accuracy degradation.

Refer to the TensorFlow Lite official site for more details about integer-only quantization.

  • Run script

    python convert.py --input-path keras_mnist_conv/saved_model --output-path keras_mnist_conv
    
  • In convert.py, tf.lite.TFLiteConverter is used to convert the SavedModel into an int8 .tflite model, given a representative dataset (a sketch of a possible representative dataset generator follows this list):

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = repr_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    tflite_int8_model = converter.convert()
    
  • After conversion, performance on the test set is validated using the int8 .tflite model, and two .npy files containing the input array and label array of 100 test images are generated for later use on the SoC.
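
The snippet above uses repr_dataset without showing it, and the last bullet mentions validating the int8 model and exporting .npy test data. Below is a minimal sketch of how those pieces could look, assuming x_test and y_test are the normalized MNIST test arrays and tflite_int8_model is the converter output from the snippet above; the variable names and the exact logic in convert.py may differ.

# Sketch only: representative dataset, int8 validation, and .npy export.
# Assumes x_test/y_test (normalized MNIST test data) and tflite_int8_model exist.
import numpy as np
import tensorflow as tf

def repr_dataset():
    # A few hundred representative samples are enough for calibration.
    for i in range(200):
        yield [x_test[i:i + 1].astype(np.float32)]

# Validate the int8 model on the test set with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_int8_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
scale, zero_point = inp["quantization"]

correct = 0
for x, y in zip(x_test, y_test):
    x_int8 = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)[None, ...]
    interpreter.set_tensor(inp["index"], x_int8)
    interpreter.invoke()
    correct += int(np.argmax(interpreter.get_tensor(out["index"])) == y)
print("int8 accuracy:", correct / len(x_test))

# Export 100 quantized test images and their labels as .npy for later use on the SoC.
inputs_int8 = np.clip(np.round(x_test[:100] / scale + zero_point), -128, 127).astype(np.int8)
np.save("input_int8.npy", inputs_int8)
np.save("label_int8.npy", y_test[:100].astype(np.int8))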

Tip

In convert.py, the onnx_tf library is used to convert from ONNX to SavedModel. Because model compatibility differs across libraries, other conversion libraries can be used for the same purpose.

Step 3. Optimize Tflite and Convert to C++

  1. The tflm_model_transforms tool from the official tflite-micro repository can reduce the .tflite file size by running some TFLM-specific transformations. It also re-aligns the tflite flatbuffer via the C++ flatbuffer API, which can speed up inference on some Ameba platforms. This step is optional, but it is strongly recommended (a sanity-check sketch follows this list):

    git clone https://github.com/tensorflow/tflite-micro.git
    cd tflite-micro
    
    bazel build tensorflow/lite/micro/tools:tflm_model_transforms
    bazel-bin/tensorflow/lite/micro/tools/tflm_model_transforms --input_model_path=</path/to/my_model.tflite>
    
    # output will be located at: /path/to/my_model_tflm_optimized.tflite
    
  2. Convert the .tflite model and .npy test data to .cc and .h files for deployment:

    python generate_cc_arrays.py models int8_tflm_optimized.tflite
    python generate_cc_arrays.py testdata input_int8.npy input_int8.npy label_int8.npy label_int8.npy
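
As mentioned in item 1 above, it is worth confirming that the transformed model still behaves like the original. Below is a minimal sketch of such a check using the standard TFLite Python interpreter, assuming the transformed model remains loadable by it; the file names are illustrative assumptions.

# Sanity check: the TFLM-transformed model should produce the same outputs as
# the original int8 .tflite model (file names below are illustrative).
import numpy as np
import tensorflow as tf

def run(model_path, x_int8):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], x_int8)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

x = np.load("input_int8.npy")[:1]  # one quantized test image, shape (1, 28, 28, 1)
ref = run("int8.tflite", x)
opt = run("int8_tflm_optimized.tflite", x)
assert np.array_equal(ref, opt), "transformed model output differs from original"
print("outputs match:", ref.flatten())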
    

Step 4. Inference on SoC with Tflite-Micro

example_tflm_mnist.cc shows how to run inference with the trained model on the test data, calculate accuracy, and profile memory usage and latency.

Use netron to visualize the .tflite file and check the operations used by the model (a small netron snippet is shown after the code below). Instantiate an operations resolver to register and access the operations:

using MnistOpResolver = tflite::MicroMutableOpResolver<4>;

TfLiteStatus RegisterOps(MnistOpResolver& op_resolver) {
    TF_LITE_ENSURE_STATUS(op_resolver.AddFullyConnected());
    TF_LITE_ENSURE_STATUS(op_resolver.AddConv2D());
    TF_LITE_ENSURE_STATUS(op_resolver.AddMaxPool2D());
    TF_LITE_ENSURE_STATUS(op_resolver.AddReshape());
    return kTfLiteOk;
}
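
For the netron check mentioned above: besides the desktop application, netron can also be installed as a Python package and launched from a short script. A minimal sketch, assuming the pip netron package and an illustrative model file name:

# Serve the model in a browser so its operators can be inspected
# (the file name is an illustrative assumption).
import netron

netron.start("int8_tflm_optimized.tflite")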

Refer to the tflite-micro official site for more details about running inference with tflite-micro.

Step 5. Build Example

Follow the steps in Build TensorFlow Lite Micro Library and Build Examples to build the example image.