Neural Network Accelerator Chip Enables AI In Battery-Powered Devices


The low-power MCU performs AI tasks without compromising battery life and greatly reduces latency

AI technology allows machines to see and hear, making sense of the world in ways that were previously impractical. In the past, bringing AI inferences to the edge meant gathering data from sensors, cameras and microphones, sending that data to the cloud to execute an inference, then sending an answer back to the edge.

This architecture works, but the cloud round trip makes it a poor fit for edge applications because of its latency and energy cost. As an alternative, low-power microcontrollers can implement simple neural networks directly; however, they limit the edge to only simple tasks.

Enter the MAX78000, a low-power, neural-network-accelerated microcontroller that moves artificial intelligence (AI) to the edge without performance compromises in battery-powered internet of things (IoT) devices. Executing AI inferences at less than 1/100th the energy of software solutions dramatically improves run-time for battery-powered AI applications and enables complex new AI use cases previously considered impossible. These power improvements come with no compromise in latency or cost: the MAX78000 executes inferences 100x faster than software running on low-power microcontrollers, at a fraction of the cost of FPGA or GPU solutions.
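To see why energy per inference dominates battery life, a back-of-the-envelope sketch helps. All numbers below are illustrative assumptions (a coin-cell-class battery and a 100:1 energy ratio mirroring the claim above), not measured MAX78000 figures:

```python
def inferences_per_battery(battery_j, energy_per_inference_mj):
    """How many inferences a battery can fund, ignoring idle drain."""
    return int(battery_j / (energy_per_inference_mj / 1000.0))

# Hypothetical figures: ~2400 J of usable battery energy, an inference
# costing 100 mJ in software vs 1 mJ with a hardware accelerator.
software = inferences_per_battery(2400, 100.0)    # 24,000 inferences
accelerated = inferences_per_battery(2400, 1.0)   # 2,400,000 inferences
print(accelerated // software)  # the 100x ratio carries straight through
```

Under these assumptions, a 100x energy reduction translates directly into 100x more inferences per charge, which is what turns always-on vision or audio from impractical into feasible.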

Efficiency in AI Applications

By integrating a dedicated neural network accelerator with a pair of microcontroller cores, the MAX78000 overcomes these limitations, enabling machines to see and hear complex patterns with local, low-power AI processing that executes in real time. Applications such as machine vision, audio and facial recognition can be made more efficient, as the MAX78000 can execute inferences at less than 1/100th the energy required by a microcontroller.

At the heart of the MAX78000 is hardware that minimises the energy consumption and latency of convolutional neural networks (CNNs). This hardware runs with minimal intervention from a microcontroller core, making operation extremely streamlined. To efficiently move data from the outside world into the CNN engine, either of the two integrated microcontroller cores can be used: the ultra-low-power Arm Cortex-M4 core or the low-power RISC-V core.
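The workload such an accelerator offloads is, at its core, a large number of multiply-accumulate operations. A minimal NumPy sketch of the 2-D convolution underlying a CNN layer (a naive loop for clarity; dedicated hardware performs these multiply-accumulates in parallel, which is where the speed and energy wins come from):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution (cross-correlation, as in most CNN
    frameworks): slide the kernel over the image and accumulate
    elementwise products at each position."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 Laplacian (edge-detection) kernel on a small synthetic image.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
print(conv2d(image, kernel).shape)  # (3, 3)
```

Each output pixel here costs 9 multiply-accumulates; a realistic CNN runs millions of them per inference, which is why executing them in software on an MCU core is slow and energy-hungry.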

Supporting Tool

As AI development can be challenging, tools such as the MAX78000EVKIT# provide a seamless evaluation and development experience. The MAX78000EVKIT# includes audio and camera inputs, plus out-of-the-box demos for large-vocabulary keyword spotting and facial recognition.

Networks for the MAX78000 can be trained with standard tools such as TensorFlow or PyTorch and then converted to run on the accelerator.

Key Advantages

Low Energy: A hardware accelerator coupled with ultra-low-power Arm Cortex-M4F and RISC-V microcontroller cores moves intelligence to the edge at less than 1/100th the energy of the closest competitive embedded solutions.
Low Latency: Performs AI functions at the edge to achieve complex insights, enabling IoT applications to reduce or eliminate cloud transactions and cutting latency by more than 100x compared to software.
High Integration: Low-power microcontroller with neural network accelerator enables complex, real-time insights in battery-powered IoT devices.

“Battery-powered IoT devices can now do much more than just simple keyword spotting. We’ve changed the game in the typical power, latency and cost tradeoff, and we’re excited to see a new universe of applications that this innovative technology enables,” said Kris Ardis, executive director for the Micros, Security and Software Business Unit at Maxim Integrated.

The MAX78000 and the MAX78000EVKIT# are available from Maxim Integrated or its authorised distributors.

