TinyML brings machine learning capabilities to microcontrollers

TinyML proves that small chips can unlock big possibilities. Instead of running complex machine learning (ML) models on large, power-hungry cloud-based computers, this new approach runs optimized recognition models on end devices that consume no more than a few milliwatts of power.

Backed by Arm and industry leaders such as Google and Qualcomm, this emerging market segment, tinyML, has the potential to transform the way we process data in the Internet of Things (IoT), where billions of tiny devices are already used to provide greater insights and efficiencies in areas such as consumer, medical, automotive and industrial applications.

Why use TinyML on microcontrollers?

Microcontrollers such as those built on the Arm Cortex-M series are ideal platforms for ML because they are already used almost everywhere. They perform real-time computations quickly and efficiently, so they are highly reliable and responsive, and because they consume very little power, they can be deployed where replacing a battery is difficult or inconvenient. Perhaps more importantly, they’re cheap enough to be used almost anywhere. Market analyst IDC reports that 28.1 billion microcontrollers were sold in 2018 and forecasts that annual shipments will grow to 38.2 billion by 2023.

ML on microcontrollers gives us a new way to analyze and understand the data that IoT devices generate. In particular, deep learning methods can be used to process information and make sense of data from sensors that detect sound, capture images, and track motion.

Advanced pattern recognition in compact format

By studying the mathematics involved in machine learning, data scientists have discovered that they can reduce complexity by making certain changes, such as replacing floating-point calculations with simple 8-bit operations. These changes create machine learning models that work more efficiently and require fewer processing and memory resources.
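To make the idea concrete, here is a minimal sketch of the kind of affine quantization described above: mapping 32-bit floating-point values onto 8-bit integers via a scale and zero-point. The helper names are hypothetical; real frameworks such as TensorFlow Lite perform this per-tensor or per-channel using calibration data.

```python
def quantize_params(values, num_bits=8):
    """Compute a scale and zero-point that map [min, max] onto the int8 range."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1  # -128..127
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)  # range must include 0
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point):
    """Round each float to its nearest int8 representative, clamped to range."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(quantized, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(q - zero_point) * scale for q in quantized]

# Illustrative weights (made up for this sketch)
weights = [0.91, -0.44, 0.05, 1.20, -1.33]
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)
approx = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
```

The payoff is that the model's weights now occupy one byte each instead of four, and inference can run on cheap integer arithmetic, at the cost of a small, bounded rounding error (`max_err` stays below one quantization step).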

The rapid development of TinyML has benefited from new technologies and an influx of developers. It was only a few years ago that we celebrated the ability to run a speech recognition model, capable of detecting certain words to wake up the system, using only 15 kilobytes (KB) of code and 22KB of data on a constrained Arm Cortex-M3 microcontroller.

Since then, Arm has introduced the Ethos-U55 and Ethos-U65, microNPUs specifically designed to accelerate ML inference in embedded and IoT devices.

Compared to the impressive examples we have seen to date, the Ethos-U55 combined with the AI-capable Cortex-M55 processor will deliver a significant uplift in both ML performance and energy efficiency. We expect the corresponding chips to be available within the next 12 months.

TinyML takes edge devices to the next level

The potential use cases for TinyML are almost limitless. Developers are already working with TinyML to explore a variety of new ideas: traffic lights that adjust their signals to reduce congestion, industrial machines that predict when maintenance is needed, sensors that monitor crops for harmful insects, shelf-stocking robots that request replenishment when stock runs low, medical monitors that track vital signs while preserving privacy, and more.

TinyML can make endpoint devices more consistent and reliable, because they no longer need to rely on a busy, crowded, expensive internet connection to the cloud or perform complex data transfers. Reducing or even eliminating interaction with the cloud brings several benefits: lower energy consumption, significantly reduced latency in processing data, and improved security.

Of course, TinyML models that perform inference on a microcontroller are not intended to replace the complex inference currently done in the cloud. Instead, they bring specific functionality down from the cloud to the endpoint device, so that developers can reserve cloud interaction for when it is actually needed.

TinyML also provides developers with a powerful new set of tools for solving problems. ML makes it possible to detect complex events that rule-based systems struggle to identify, so endpoint AI devices can take on entirely new tasks. And because ML makes it possible to control devices with words or gestures rather than buttons or a smartphone, devices can be deployed more robustly in more challenging operating environments.
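A toy sketch shows the kind of integer-only inference step such an endpoint device runs: a single quantized dense layer producing a detection score, for example for a wake word. Everything here (feature values, weights, threshold) is made up for illustration; a real model would have many such layers with trained weights.

```python
def int8_dense(features_q, weights_q, bias_q, out_scale):
    """Int8 dot product accumulated in a wide integer, then rescaled to a float score."""
    acc = bias_q
    for f, w in zip(features_q, weights_q):
        acc += f * w          # int8 x int8 products accumulate safely in 32 bits
    return acc * out_scale    # a single float multiply converts back to real units

features_q = [12, -7, 33, 90, -4]    # quantized sensor features (int8, illustrative)
weights_q  = [25, -14, 60, 51, -9]   # quantized layer weights (int8, illustrative)
score = int8_dense(features_q, weights_q, bias_q=500, out_scale=0.001)
detected = score > 5.0               # fixed decision threshold
```

Because the inner loop is pure integer multiply-accumulate, it maps directly onto microcontroller instructions (and onto the SIMD and microNPU acceleration mentioned above), which is what makes milliwatt-scale inference feasible.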

TinyML’s expanding ecosystem

Industry players have quickly recognized the value of TinyML and moved to create a broad ecosystem. Developers of all levels, from enthusiastic hobbyists to seasoned professionals, now have access to easy-to-start tools. All you need is a laptop, an open-source software library, and a USB cable to connect the laptop to an inexpensive development board that costs as little as a few dollars. In fact, in early 2021, Raspberry Pi released its first microcontroller board, one of the cheapest development boards on the market at just $4. The board, called the Raspberry Pi Pico, is built around the RP2040 SoC, which features a capable dual-core Arm Cortex-M0+ processor. The RP2040 MCU can run TensorFlow Lite Micro, and we expect a wide variety of ML use cases for this board in the coming months.

Arm is a strong proponent of TinyML because our microcontroller architectures are critical to IoT, and because we see the potential of on-device inference. Arm’s partnership with Google makes it easier for developers to deploy endpoint machine learning in power-constrained environments. The Arm CMSIS-NN library, combined with Google’s TensorFlow Lite Micro (TFLu) framework, enables data scientists and software developers to take advantage of Arm hardware optimizations without needing to become experts in embedded programming. On top of that, Arm has invested heavily in Cortex-M hardware, Keil MDK, and optimization tooling for our IoT operating system, Mbed OS, to help developers go quickly from prototype to production when deploying ML applications.

TinyML would not be possible without its many early contributors. Pete Warden, a “founding father” of tinyML, is the technical lead of Google’s TensorFlow Lite Micro. Kwabena Agyeman, an innovator from the Arm ecosystem, developed OpenMV, a project dedicated to low-cost, extensible, Python-powered machine vision modules that support machine learning algorithms. Another Arm ecosystem innovator, Daniel Situnayake, is a founding tinyML engineer and developer at Edge Impulse, a platform that provides a complete TinyML pipeline covering data collection, model training, and model optimization. Additionally, Arm partners such as Cartesiam.ai, the company behind NanoEdge AI, a tool that creates software models on endpoints based on sensor behavior observed under real-world conditions, are taking TinyML to another level.

Arm is also a partner in the TinyML Foundation, an open community that helps people connect, share ideas, and get involved. There are many local TinyML meetups, including in the UK, Israel, and Seattle, as well as tinyML Summits held worldwide.