How the shift in embedded development will impact the future of computing

Today, developers are leveraging secure, performance-enhanced technologies to build small, low-power embedded systems, enabling previously unimaginable AI applications in voice, vision, and vibration sensing that are changing the world.

The embedded field is undergoing a profound transformation. Connected devices are evolving into systems that can make decisions on their own based on the data they collect. Processing data closer to where it is collected is expected to speed up decision-making, reduce latency, address data-privacy concerns, lower costs, and improve energy efficiency compared with processing it at IoT gateways or in the cloud.

Many application fields, such as industrial automation, robotics, smart cities, and home automation, are driving up the performance and functionality requirements of edge computing. In the past, the sensors in such systems were much simpler and disconnected. Today, however, artificial intelligence (AI) and machine learning (ML) raise the level of local intelligence so that decisions are made on the device itself; the simple control algorithms of the past are no longer sufficient.

The evolution of general-purpose processors in the AI era

Years ago, developers focused on logic and control algorithms as the core of software development. However, the advent of digital signal processing (DSP) algorithms has enabled a host of enhanced voice, vision, and audio applications.

This shift in application development has entered a new era and is shaping the design of computing architectures. We have now evolved to using inference as the core of algorithm development, a stage that brings new and higher requirements for computing performance, energy efficiency, latency, real-time processing, and scalability.

The industry needs not only new processor accelerators but also improvements in general-purpose processing capabilities, giving developers the right balance to support applications such as feature inspection or person detection in live video.

A few years ago, developers could only rely on frequency-based filters when creating noise-cancellation applications. Now, developers can improve the performance and functionality of their applications by combining that filtering with ML/AI models and inference. To make these development tasks more efficient and serve users as seamlessly as possible, demand for capable processors and tools keeps growing.
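As a concrete illustration of the classical filtering half of that combination, the sketch below conditions one frame of microphone samples with a CMSIS-DSP biquad filter before handing it on to an ML stage. It is a minimal sketch, assuming CMSIS-DSP is available; the frame size and coefficients are placeholders rather than a real noise-cancellation design.

```cpp
// Sketch: classical frequency-based conditioning in front of an ML stage,
// assuming CMSIS-DSP is available (arm_math.h). The biquad coefficients are
// placeholders; a real design would come from a filter-design tool.
#include "arm_math.h"

constexpr uint32_t kFrameSize = 256;   // samples per audio frame (assumed)
constexpr uint8_t  kNumStages = 1;     // single biquad stage for brevity

// {b0, b1, b2, a1, a2} per stage, with a1/a2 sign-negated as CMSIS-DSP expects.
static float32_t biquad_coeffs[5 * kNumStages] = {
    0.2929f, 0.5858f, 0.2929f, 0.0f, -0.1716f   // placeholder low-pass design
};
static float32_t biquad_state[2 * kNumStages];
static arm_biquad_cascade_df2T_instance_f32 biquad;

void filter_init() {
    arm_biquad_cascade_df2T_init_f32(&biquad, kNumStages,
                                     biquad_coeffs, biquad_state);
}

// Conditions one frame of microphone samples; the cleaned frame would then be
// passed on to feature extraction and an ML model (see the later sketches).
void condition_frame(const float32_t* in, float32_t* out) {
    arm_biquad_cascade_df2T_f32(&biquad, in, out, kFrameSize);
}
```

The point of the combined approach is that the filter stays cheap and deterministic, while the ML model downstream handles the patterns a fixed frequency response cannot.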

Driving intelligence in edge and endpoint devices

This evolution and innovation is driven by ML, but it also faces many technical challenges. Years of attempts to create a development approach that applies universally to IoT and embedded devices have prompted the industry to change the way it builds IoT systems in order to unlock deployment at scale.

Developers are already using secure, performance-enhanced technologies to build small, low-power embedded systems for voice, vision, and vibration applications. Various programming languages and Transformer models will soon find their way into IoT edge devices with new computing capabilities, which undoubtedly opens up more possibilities for developers.

To meet developers' hardware needs as this evolution and innovation unfolded, Arm introduced Arm® Helium™ vector processing technology in the Armv8.1-M architecture a few years ago. Helium brings significant performance improvements to ML and DSP applications in small, low-power embedded devices. It provides Single Instruction Multiple Data (SIMD) capabilities that take the performance of Arm Cortex®-M devices to a new level, supporting applications such as predictive maintenance and environmental monitoring.
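For a flavor of what Helium's SIMD processing looks like at the source level, here is a minimal sketch of a vector dot product written with MVE intrinsics. It assumes a toolchain targeting Armv8.1-M with MVE (for example, a Cortex-M55 build) and is illustrative only; in practice most developers get Helium acceleration through optimized libraries such as CMSIS-DSP and CMSIS-NN rather than hand-written intrinsics.

```cpp
// Minimal sketch of Helium (MVE) SIMD: a float32 dot product that processes
// four lanes per iteration. Assumes a toolchain targeting Armv8.1-M with MVE
// (e.g. -mcpu=cortex-m55), which provides <arm_mve.h>.
#include <arm_mve.h>
#include <stdint.h>

float dot_product_mve(const float *a, const float *b, uint32_t n) {
    float32x4_t acc = vdupq_n_f32(0.0f);        // four running partial sums

    uint32_t i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(&a[i]);      // load four elements of a
        float32x4_t vb = vld1q_f32(&b[i]);      // load four elements of b
        acc = vfmaq_f32(acc, va, vb);           // acc += va * vb, lane by lane
    }

    // Reduce the four lanes, then finish any leftover elements in scalar code.
    // (Production MVE code would typically use tail predication instead.)
    float sum = vgetq_lane_f32(acc, 0) + vgetq_lane_f32(acc, 1) +
                vgetq_lane_f32(acc, 2) + vgetq_lane_f32(acc, 3);
    for (; i < n; ++i) {
        sum += a[i] * b[i];
    }
    return sum;
}
```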

Helium improves DSP and ML performance, speeding up signal conditioning (such as filtering, noise cancellation, and echo cancellation) and feature extraction from audio or pixel data, whose results can then be fed into a neural-network classification stage.
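As a rough sketch of the feature-extraction step in that chain, the code below converts a conditioned audio frame into a magnitude spectrum using CMSIS-DSP's real FFT; on a Helium-capable core the library dispatches to MVE-optimized kernels internally. The frame length is an assumption, and a real pipeline would often go on to compute Mel or MFCC features before classification.

```cpp
// Sketch of the feature-extraction step: turn a conditioned audio frame into a
// magnitude spectrum that a classifier can consume. Assumes CMSIS-DSP
// (arm_math.h); on a Helium-capable core the library uses MVE internally.
#include "arm_math.h"

constexpr uint16_t kFftLen = 256;                 // assumed frame length

static arm_rfft_fast_instance_f32 rfft;
static float32_t fft_out[kFftLen];                // interleaved complex output

void features_init() {
    arm_rfft_fast_init_f32(&rfft, kFftLen);
}

// in:  kFftLen conditioned time-domain samples (modified in place by the FFT)
// out: kFftLen / 2 magnitude bins, used as the classifier's input features
//      (the DC/Nyquist packing of bin 0 is ignored for brevity)
void extract_features(float32_t* in, float32_t* features) {
    arm_rfft_fast_f32(&rfft, in, fft_out, 0);          // forward real FFT
    arm_cmplx_mag_f32(fft_out, features, kFftLen / 2); // per-bin magnitude
}
```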

Implementing capabilities at the intelligent edge

We can see that many Arm partners are bringing Helium technology into their latest products, helping developers take advantage of ML capabilities on constrained devices at the farthest edge of the network. In February 2020, Arm launched the Cortex-M55, its first processor with Helium technology; Alif Semiconductor launched the first Cortex-M55-based chips in September 2021 and has deployed Helium-powered Cortex-M55 cores across its Ensemble and Crescendo product lines. In addition, Himax has adopted the Helium-equipped Cortex-M55 in its next-generation WE2 AI processor, targeting computer vision systems in battery-powered IoT devices.

In April 2022, Arm launched its second Helium-enabled CPU, the Arm Cortex-M85. Renesas Electronics demonstrated the Cortex-M85 at embedded world 2022 and embedded world 2023, where Plumerai showed its inference engine running significantly faster on Renesas RA MCU technology. As a company that develops complete software solutions for camera-based people detection, Plumerai believes the performance improvements will let its customers take advantage of a larger and more accurate version of Plumerai's people detection AI while gaining more product features and longer battery life.

In November 2023, Arm launched its third CPU with Helium technology, the Cortex-M52. This processor is designed for artificial intelligence of things (AIoT) applications in small, low-power embedded devices. It brings significant performance improvements to DSP and ML applications, allowing more compute-intensive ML inference algorithms to be deployed in endpoints without the need for dedicated NPUs.

As hardware evolves, developers face increasing software complexity, requiring new development processes to create optimized ML models combined with efficient device drivers. It is critical that the software development platforms and tools provided for the ecosystem also evolve with the hardware.

A variety of tools, provided by Arm and third parties, are available today to support end-users in creating AI algorithms. After data scientists create a model in an offline environment, they can use the tools to optimize the model to run on an Arm Ethos™-U-based NPU or use Helium instructions on a Cortex-M-based processor.
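On the device side, the optimized model is then typically embedded as a C array and executed by a runtime such as TensorFlow Lite for Microcontrollers. The sketch below shows that pattern under a few assumptions: g_model_data is a hypothetical array produced by the offline tools, the operator list matches a small float classifier, and the class names follow recent TFLM releases (a model compiled for an Ethos-U NPU would additionally need the Ethos-U custom operator registered).

```cpp
// Sketch of running an offline-optimized model on a Cortex-M device with
// TensorFlow Lite for Microcontrollers (TFLM). g_model_data is a hypothetical
// C array emitted by the offline tools; the operator list is model-specific.
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // flatbuffer produced offline

constexpr int kArenaSize = 32 * 1024;        // sized to fit the model (assumed)
static uint8_t tensor_arena[kArenaSize];

// Returns the index of the highest-scoring class, or -1 on failure.
// A real application would build the interpreter once at startup, not per call.
int classify(const float* features, int num_features) {
    const tflite::Model* model = tflite::GetModel(g_model_data);

    // Register only the operators this (assumed) small float classifier uses.
    tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddFullyConnected();
    resolver.AddRelu();
    resolver.AddSoftmax();

    tflite::MicroInterpreter interpreter(model, resolver,
                                         tensor_arena, kArenaSize);
    if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

    TfLiteTensor* input = interpreter.input(0);
    for (int i = 0; i < num_features; ++i) {
        input->data.f[i] = features[i];      // assumes a float input tensor
    }

    if (interpreter.Invoke() != kTfLiteOk) return -1;

    TfLiteTensor* output = interpreter.output(0);
    int best = 0;
    for (int i = 1; i < output->dims->data[1]; ++i) {   // shape [1, classes]
        if (output->data.f[i] > output->data.f[best]) best = i;
    }
    return best;
}
```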

Qeexo is the first company to implement end-to-end ML automation for edge devices. Its AutoML platform provides an intuitive user interface (UI) that lets users collect, clean, and visualize sensor data, and automatically build ML models with different algorithms. Traditional embedded tools such as the Keil Microcontroller Development Kit (Keil MDK) complement MLOps tools and help establish DevOps processes for validating complex software workloads. As a result, embedded, IoT, and AI applications finally converge into a single development process that is familiar to software developers.

The potential of the edge is gradually being realized. There is a growing need to improve microcontroller performance, especially for tasks such as voice-activated door locks, people detection and recognition, networked motor control with predictive maintenance, and countless other high-end AI and ML applications.

We believe that with the right technology, developers can reimagine edge and endpoint devices, strike the right balance between performance, cost, energy efficiency, and privacy (the key considerations in these constrained devices), and develop the AI applications that will define the embedded future.
