Spresense development board runs AI apps with Sony’s Neural Network Console
Despite its compact form factor, the Spresense development board packs a six-core microcontroller clocked at 156 MHz. By microcontroller standards this is a lot of computing power, and it lets Spresense run trained AI models through its Deep Neural Network Runtime library (DNNRT). Typical AI tasks include recognizing images, sounds or gestures, all of which traditionally require significant processing power.
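To give a feel for what this looks like in practice, here is a minimal sketch of the DNNRT inference flow in the Arduino IDE, assuming the DNNRT classes from the Spresense Arduino library and a model exported from the Neural Network Console as an .nnb file on the SD card. The file name model.nnb and the 28 x 28 input size are placeholders for whatever your own network uses.

```cpp
#include <SDHCI.h>   // SD card access on Spresense
#include <DNNRT.h>   // Deep Neural Network Runtime (Spresense Arduino library)

SDClass SD;
DNNRT dnnrt;

void setup() {
  Serial.begin(115200);
  while (!SD.begin()) {
    ; // wait for the SD card to become ready
  }

  // Load a model trained in the Neural Network Console and exported as an
  // .nnb file on the SD card (the file name is a placeholder).
  File nnbfile = SD.open("model.nnb");
  if (dnnrt.begin(nnbfile) < 0) {
    Serial.println("Failed to initialize DNNRT");
    return;
  }

  // Prepare the input variable; 28 x 28 matches a small grayscale image,
  // adjust to whatever your own network expects.
  DNNVariable input(28 * 28);
  float *buf = input.data();
  for (int i = 0; i < 28 * 28; i++) {
    buf[i] = 0.0f;   // fill with real image or sensor data here
  }

  // Run inference and print the index of the highest-scoring output class.
  dnnrt.inputVariable(input, 0);
  dnnrt.forward();
  DNNVariable output = dnnrt.outputVariable(0);
  Serial.print("Predicted class: ");
  Serial.println(output.maxIndex());

  dnnrt.end();
}

void loop() {
}
```

The same pattern applies whether the input comes from a camera, a microphone or another sensor: fill the input variable, call forward(), and read the result.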
Another thing that sets Spresense apart from other boards with comparable computing power is that it is designed to be power efficient. This opens up use cases where no power outlet is available, or where mobility is required and running the unit from batteries is the only option. Combining Spresense's computing power with its power efficiency therefore makes it a suitable platform for edge computing solutions that run AI models, completely independent of power cables.
Training an AI model for Spresense
To train an AI model you need a large set of data with inputs matched to correct outputs. In general, the larger the dataset (see picture below), the better the model learns. Once you have a trained model, you can use it with any input data of the same type it was trained on (highlighted with the question mark box in the picture).
You can try this yourself by following the instructions in the Spresense developer guide for the DNNRT library. You don’t need any knowledge of how neural networks function; simply follow the steps to train the image recognition example model and run it on Spresense using the Arduino IDE. Sony’s Neural Network Console displays a visual graph of how your model gets “smarter” in real time as the training runs. It’s almost like watching it grow from infant to grown-up in a matter of minutes.
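Building on the sketch above, the only part that changes for the image recognition example is how the input variable is filled: a small grayscale image is read from the SD card and its pixels are normalized before being handed to the runtime. Below is a sketch of that step, assuming the NetPBM helper class from the Spresense Arduino library and a 28 x 28 digit image named number4.pgm on the card; the file name, image size and exact NetPBM calls are assumptions based on the digit recognition example.

```cpp
#include <SDHCI.h>
#include <NetPBM.h>   // helper for reading .pgm images (Spresense Arduino library)
#include <DNNRT.h>

// Fill a DNNRT input variable with a grayscale image from the SD card.
// This replaces the zero-fill loop in the previous sketch; the file name
// "number4.pgm" is a placeholder.
static bool loadDigitImage(SDClass &sd, DNNVariable &input) {
  File pgmfile = sd.open("number4.pgm");
  if (!pgmfile) {
    return false;
  }

  NetPBM pgm(pgmfile);
  unsigned short width, height;
  pgm.size(&width, &height);

  // Normalize pixel values into the 0.0 - 1.0 range the model was trained on.
  float *buf = input.data();
  for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
      buf[row * width + col] = float(pgm.getpixel(row, col)) / 255.0f;
    }
  }
  return true;
}
```

Pass the filled input to dnnrt.inputVariable() and call forward() exactly as before; output.maxIndex() then gives the recognized digit.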