NXP eIQ Machine Learning User Manual

eIQ MACHINE LEARNING SOFTWARE DEVELOPMENT ENVIRONMENT
The eIQ ("edge intelligence") Machine Learning (ML) software development environment leverages inference engines, neural network compilers, optimized libraries, deep learning toolkits, and open-source technologies for easier, more secure system-level application development and ML algorithm enablement.
FACT SHEET
eIQ™ ML SOFTWARE
The NXP environment provides the key components needed to run inference with neural network (NN) models on embedded systems and to deploy ML algorithms on NXP microprocessors and microcontrollers at the edge. It includes inference engines, NN compilers, libraries, and hardware abstraction layers that support Google TensorFlow Lite, Glow, Arm NN, Arm CMSIS-NN, and OpenCV.
With NXP’s i.MX applications processors and i.MX RT crossover processors, based on Arm® Cortex®-A and Cortex-M cores respectively, embedded designs can now support deep learning applications that require high-performance data analytics and fast inferencing.
eIQ software includes a variety of application examples that demonstrate how to integrate neural networks into voice, vision, and sensor applications. Developers can choose to deploy their ML applications on Cortex-A cores, Cortex-M cores, and GPUs, or, for high-end acceleration, on the neural processing unit (NPU) of the i.MX 8M Plus.
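As a concrete illustration of deployment with one of the supported engines, the sketch below converts a trivial Keras model to the TensorFlow Lite flatbuffer format and runs it through the TensorFlow Lite interpreter. On an i.MX board the same flatbuffer would typically be loaded with the lightweight `tflite_runtime` package instead of the full TensorFlow package; the model, shapes, and names here are placeholder assumptions, not part of eIQ itself.

```python
import numpy as np
import tensorflow as tf

# A trivial stand-in for a trained network (e.g. a voice or vision model).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert to the .tflite flatbuffer that TensorFlow Lite consumes on target.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load and run the model much as an application on the board would.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 4).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])  # softmax scores, shape (1, 3)
```

The same interpreter API is what the application-level examples in eIQ build on; only the model and the input pipeline (camera, microphone, sensor) change.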
APPLICATIONS
eIQ ML software helps enable a variety of vision and sensor applications working in conjunction with a collection of device drivers and functions for cameras, microphones and a wide range of environmental sensor types.
• Object detection and recognition
• Voice command and keyword recognition
• Anomaly detection
• Image and video processing
• Other AI and ML applications include:
– Smart wearables
– Intelligent factories and smart buildings
– Healthcare and diagnostics
– Augmented reality
– Logistics
– Public safety
FEATURES
• Open-source inference engines
• Neural network compilers
• Optimized libraries
• Application samples
• Included in NXP’s Yocto Linux® BSP releases and in the MCUXpresso SDK
NXP eIQ MACHINE LEARNING SOFTWARE - INFERENCE ENGINES BY CORE
eIQ™ Inference Engine Deployment (public version; subject to change; 7/6/20)
[Table: NXP eIQ inference engines and libraries (including CMSIS-NN) mapped to the compute engines (Cortex-M, DSP, Cortex-A, GPU, NPU) of each device: i.MX 8M Plus, i.MX 8QM, i.MX 8QXP, i.MX 8M Quad/Nano, i.MX 8M Mini, i.MX RT600, and i.MX RT1050/1060. NA = not applicable; --- = not supported.]
OPEN-SOURCE INFERENCE ENGINES
The following inference engines are included as part of the eIQ ML software development kit and serve as options for deploying trained NN models.
Arm NN INFERENCE ENGINE
eIQ ML software supports Arm NN SDK on the i.MX 8 series applications processor family and is available through the NXP Yocto Linux-based releases.
Arm NN SDK is open-source inference engine software that allows embedded processors to run trained deep learning models. It uses the Arm Compute Library to optimize neural network operations on Cortex-A cores (with Neon acceleration). NXP has also integrated Arm NN with its proprietary drivers to support the i.MX GPUs and the i.MX 8M Plus NPU.
eIQ SOFTWARE FOR Arm NN
[Figure: software stack in which neural network (NN) frameworks feed trained models into the Arm NN inference engine.]
GLOW
eIQ ML software supports Glow neural network compiler on the i.MX RT crossover MCU family and is available in the MCUXpresso SDK.
Glow is a machine learning compiler that enables ahead-of-time compilation for increased performance and a smaller memory footprint compared to a traditional runtime inference engine. NXP offers optimizations for its i.MX RT crossover MCUs based on Cortex-M cores and the Cadence® Tensilica® HiFi 4 DSP.
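As an illustration of the ahead-of-time flow, a trained model in a standard format (ONNX here) is compiled into a target-specific bundle with Glow's `model-compiler` tool. The file names, output directory, and core choice below are placeholder assumptions for a Cortex-M7 based i.MX RT part, not fixed eIQ paths.

```shell
# Compile a pre-trained ONNX model ahead of time into a bundle
# (object file plus weights) for a Cortex-M7 target.
model-compiler \
    -model=model.onnx \
    -backend=CPU \
    -target=arm -mcpu=cortex-m7 -float-abi=hard \
    -emit-bundle=build/
# The emitted object file and weights are then linked into the
# MCUXpresso application, which calls the generated inference entry point.
```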
eIQ FOR GLOW NEURAL NETWORK COMPILER
[Figure: Glow ahead-of-time (AOT) flow. On the host machine, a neural network model is designed and trained (on a PC or in the cloud) and exported in a standard pre-trained model format. The Glow AOT NN compiler performs model optimization, model compression, and model compilation; where available, it generates external function calls to CMSIS-NN kernels or the NN library, and otherwise it compiles code from its own native library. The resulting executable code is deployed to the target machine for inference on the Arm® Cortex®-M cores and the Tensilica® HiFi 4 DSP of i.MX RT devices.]
[Figure: in the eIQ software stack for Arm NN, Arm® NN runs on the Cortex-A CPU through the Arm Compute Library, and on the VeriSilicon GPU and neural processing unit through a hardware abstraction layer and drivers.]
www.nxp.com/eiq