Deploying an AI model to IMX500
1. Convert
Convert your own trained and quantized deep neural network (DNN) model into an optimized binary file, ready to package and deploy on IMX500. The IMX500 developer tools include a converter that you can run on your own infrastructure or in the Smart Camera Managed App on the Azure cloud. The converter includes a simulator that evaluates the neural network’s performance on the SDSP and generates a KPI report. For evaluation, the converter can process a DNN while disregarding its specific coefficient values; in that mode the input network may be quantized, in floating point, or even untrained. If the conversion is intended for deployment, however, the input network must be trained and quantized with 8-bit uniform quantization.
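To make the 8-bit uniform quantization requirement concrete, here is a minimal sketch of the affine (scale and zero-point) mapping that uniform quantization uses. The function names are our own for illustration; in practice your training framework's quantization tooling produces the quantized model that the converter consumes.

```python
# Illustrative sketch of 8-bit uniform (affine) quantization.
# Helper names are our own; real quantization is done by your DNN framework.

def quantize_uniform_8bit(values, qmin=0, qmax=255):
    """Map floating-point values onto a uniform 8-bit integer grid."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against constant tensors
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floating-point values from quantized integers."""
    return [(qi - zero_point) * scale for qi in q]
```

Each real value is approximated as `(q - zero_point) * scale`, so the maximum reconstruction error is about half the scale, which is why the dynamic range of each tensor matters for accuracy after quantization.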
You can deploy any type* of neural network model or, for faster prototyping and evaluation, start with one of the pre-compiled networks listed below.
2. Package
The IMX500 packager converts binary files containing deep neural network (DNN) model data and IMX500 firmware and hardware configuration into a signed executable package that you can deploy to the IMX500 chip.
If you run the packager on your own infrastructure, you sign the IMX500 packages with your personal encryption key to ensure secure deployment.
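The packaging format and key provisioning scheme are specific to the IMX500 toolchain and are not detailed here, but the sign-then-verify idea can be sketched. The stand-in below uses an HMAC-SHA256 from the Python standard library purely to illustrate a detached signature over the package bytes; the real packager uses your provisioned key and its own signature format.

```python
# Conceptual sketch only: the packager signs the package bytes and the
# IMX500 verifies the signature before executing the package. HMAC here is
# a stand-in, not the actual IMX500 signing scheme.
import hashlib
import hmac

def sign_package(package: bytes, key: bytes) -> bytes:
    """Produce a detached signature over the package contents."""
    return hmac.new(key, package, hashlib.sha256).digest()

def verify_package(package: bytes, signature: bytes, key: bytes) -> bool:
    """Check the signature before flashing (constant-time comparison)."""
    return hmac.compare_digest(sign_package(package, key), signature)
```

The point of signing with a key only you hold is that a tampered or substituted package fails verification on the device, so only packages you produced can run on your IMX500 units.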
3. Deploy
Deploy the package to IMX500 by flashing and running it, either with the IMX500 developer tools on your own infrastructure or through the Smart Camera Managed App APIs on the Microsoft Azure cloud.
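As a purely hypothetical illustration of the cloud-API route, the sketch below constructs (but does not send) an HTTP request that uploads a signed package to a deployment endpoint. The URL path, header names, and payload shape are assumptions for illustration only; consult the Smart Camera Managed App API documentation for the actual endpoints.

```python
# Hypothetical illustration of deploy-by-API. The route and headers below
# are assumptions, NOT the documented Smart Camera Managed App API.
import urllib.request

def build_deploy_request(base_url: str, device_id: str,
                         package: bytes, token: str) -> urllib.request.Request:
    """Construct (without sending) a request that uploads a signed package."""
    return urllib.request.Request(
        url=f"{base_url}/devices/{device_id}/deployments",  # assumed route
        data=package,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/octet-stream",
        },
    )
```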
Pre-compiled networks available:
- Object detection: MobileNet SSD v1 (COCO)
- Image classification: MobileNet v1
Supported DNN trainers:
- TensorFlow Lite with FlatBuffer
- TensorFlow with frozen graph serialization and a fixed input shape in the Placeholder
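Before handing a model to the converter, a quick format sanity check can catch obvious mix-ups. TensorFlow Lite models are FlatBuffers carrying the 4-byte file identifier "TFL3" at bytes 4–8. The helper below is our own; the converter performs its own, far more thorough validation.

```python
# Cheap sanity check for the TensorFlow Lite FlatBuffer format: the 4-byte
# file identifier "TFL3" sits at bytes 4-8 of a .tflite file.

def looks_like_tflite(model_bytes: bytes) -> bool:
    """Return True if the buffer carries the TFLite FlatBuffer identifier."""
    return len(model_bytes) >= 8 and model_bytes[4:8] == b"TFL3"
```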
* Only inference graphs are supported in this version of the converter. Recurrent graphs are not supported yet.