
How to integrate Neuton into your firmware project

Include header file

Copy all files from this archive into your project and include the header file:

#include "neuton.h"

The library provides functions to query model information (see the sketch after this list), such as:

  • task type (regression, classification, etc.);
  • neuron and weight counts;
  • window buffer size;
  • input and output feature counts;
  • model size and RAM usage;
  • float support flag;
  • quantization level.
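
As a sketch, this information can be queried at run time. Of the getters below, only neuton_model_outputs_count is named in this guide; the other names are assumptions following the same pattern, so check your generated neuton.h for the exact API:

#include <stdio.h>
#include "neuton.h"

void print_model_info(void)
{
    printf("Outputs: %u\n", (unsigned) neuton_model_outputs_count());
    printf("Inputs:  %u\n", (unsigned) neuton_model_inputs_count());  /* assumed name */
    printf("Neurons: %u\n", (unsigned) neuton_model_neurons_count()); /* assumed name */
    printf("Weights: %u\n", (unsigned) neuton_model_weights_count()); /* assumed name */
}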

The main functions are:

  • neuton_model_set_inputs - to set input values;
  • neuton_model_run_inference - to make a prediction.

Set input values

Create an array with the model inputs. The input count and order must match the training dataset.

input_t inputs[] = {
    feature_0,
    feature_1,
    // ...
    feature_N
};

Pass this array to the neuton_model_set_inputs function.

If the digital signal processing option was selected on the platform, call neuton_model_set_inputs once per sample until the internal window buffer is filled, as shown in the sketch below. The function returns 0 when the buffer is full, which indicates that the model is ready for prediction.
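
A minimal sketch of filling the window buffer; INPUTS_COUNT and read_sample are hypothetical placeholders for your feature count and data acquisition, not part of the Neuton API:

#include "neuton.h"

#define INPUTS_COUNT 3               /* hypothetical: raw features per sample */
extern void read_sample(input_t* s); /* hypothetical: acquires one sample */

void fill_window(void)
{
    input_t sample[INPUTS_COUNT];

    for (;;)
    {
        read_sample(sample);

        /* neuton_model_set_inputs returns 0 once the internal
           window buffer is full and the model is ready */
        if (neuton_model_set_inputs(sample) == 0)
            break;
    }
}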

Make prediction

When the buffer is ready, call neuton_model_run_inference with two arguments:

  • a pointer to the index of the predicted class;
  • a pointer to the neural network outputs (the dimension of the array can be read with the neuton_model_outputs_count function).

For a regression task, the output value is stored at outputs[0]. For a classification task, index contains the index of the class with the maximal probability, and outputs contains the probability of each class. Thus, the probability of the predicted class is available at outputs[index].

The function returns 0 on a successful prediction.

if (neuton_model_set_inputs(inputs) == 0)
{
    uint16_t index;
    float* outputs;

    if (neuton_model_run_inference(&index, &outputs) == 0)
    {
        /* Handle the prediction result here:
           classification - outputs[index] holds the probability
           of the predicted class; regression - read outputs[0]. */
    }
}

Map predicted results to the required values (for the classification task type)

Inference results are encoded (0…n). To map them to your classes, use the dictionaries binary_target_dict_csv.csv / multi_target_dict_csv.csv, as in the sketch below.
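
A minimal mapping sketch; the labels below are placeholders that should be filled in from the dictionary CSV that matches your task:

/* Placeholder labels taken from binary_target_dict_csv.csv or
   multi_target_dict_csv.csv; replace with your actual classes. */
static const char* class_labels[] = { "label_0", "label_1", "label_2" };

/* index comes from a successful neuton_model_run_inference call */
const char* predicted_label = class_labels[index];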

Integration with TensorFlow and ONNX

Neuton also offers additional options for integrating and interacting with your model. This archive provides the model in TensorFlow and ONNX formats; you can find them in the converted_models folder.