Fostering an AI-IoT Alliance to Help the Manufacturing Space Overcome Its Cost and Performance Challenges

Ceva, Inc., a leading licensor of silicon and software IP that enables Smart Edge devices to connect, sense, and infer data more reliably and efficiently, has officially announced the launch of its Ceva-NeuPro-Nano NPUs, an expansion of its already-extensive Edge AI NPU portfolio. The new NPUs are designed to deliver the power, performance, and cost efficiencies that semiconductor companies and OEMs typically require when integrating TinyML models into their SoCs, particularly for consumer, industrial, and general-purpose AIoT products.

For context, TinyML refers to the deployment of machine learning models on low-power, resource-constrained devices, a practice that brings AI capabilities directly into IoT (Internet of Things) infrastructure. Its potential is underscored by a study from the research firm ABI Research, which projects that by 2030 more than 40% of TinyML shipments will be powered by dedicated TinyML hardware rather than general-purpose MCUs. Despite such projections, however, TinyML still faces real performance challenges. This is where Ceva-NeuPro-Nano NPUs come in: they aim to make AI economical, practical, and therefore accessible for a wide range of use cases, including voice, vision, predictive maintenance, and health sensing in consumer and industrial IoT applications.
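
To make the idea concrete, the sketch below shows the kind of workload TinyML implies: an int8-quantized fully-connected layer running on a microcontroller-class device, built around the multiply-accumulate (MAC) operation. It is a generic illustration only, not Ceva's instruction set or SDK; the layer sizes and scale factor are assumptions.

```c
#include <stdint.h>

/* Illustrative int8 fully-connected layer: y = saturate((sum(w*x) + bias) * scale).
 * Dimensions and the requantization scale are placeholder assumptions. */
#define IN_DIM  64
#define OUT_DIM 16

static int8_t clamp_i8(int32_t v) {
    if (v > 127)  return 127;
    if (v < -128) return -128;
    return (int8_t)v;
}

void dense_int8(const int8_t x[IN_DIM],
                const int8_t w[OUT_DIM][IN_DIM],
                const int32_t bias[OUT_DIM],
                float scale,                 /* assumed per-tensor requantization scale */
                int8_t y[OUT_DIM])
{
    for (int o = 0; o < OUT_DIM; ++o) {
        int32_t acc = bias[o];
        for (int i = 0; i < IN_DIM; ++i) {
            /* The int8 MAC at the heart of TinyML inference. */
            acc += (int32_t)w[o][i] * (int32_t)x[i];
        }
        y[o] = clamp_i8((int32_t)(acc * scale));
    }
}
```

On a general-purpose MCU this loop runs one multiply-accumulate at a time; a dedicated TinyML NPU exists to execute many such MACs per cycle at far lower energy per inference.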

At its core, the NPU architecture is fully programmable: it can execute neural networks, feature extraction, control code, and DSP code, and it supports the most advanced machine learning data types and operators, including native transformer computation, sparsity acceleration, and fast quantization. This optimized, self-sufficient architecture allows Ceva-NeuPro-Nano NPUs to deliver superior power efficiency, a smaller silicon footprint, and better performance than the existing processor solutions commonly used for TinyML workloads. Another component working in the NPU's favor is its NetSqueeze AI compression technology, which processes compressed model weights without an intermediate decompression stage, enabling up to 80% reduction in memory footprint. This, in particular, addresses a key bottleneck blocking broader adoption of AIoT processors today.
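
NetSqueeze itself is proprietary, but the general principle of consuming compressed weights directly, rather than first inflating them into a dense buffer, can be sketched with a simple sparse-weight dot product. Everything below (the CSR-style layout, type names, and fields) is an assumed illustration, not Ceva's format.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical compressed (sparse) weight row: only non-zero int8 weights are
 * stored, together with their column indices. This is NOT the NetSqueeze
 * format, just a generic way to show skipping the dense decompression step. */
typedef struct {
    const int8_t   *values;   /* non-zero weights               */
    const uint16_t *cols;     /* column index of each non-zero  */
    size_t          nnz;      /* number of non-zero weights     */
} sparse_row_t;

/* Accumulate y = sum(w_nz * x[col]) straight from the compressed layout.
 * No dense weight buffer is ever materialized in memory. */
int32_t sparse_dot_int8(const sparse_row_t *row, const int8_t *x)
{
    int32_t acc = 0;
    for (size_t k = 0; k < row->nnz; ++k) {
        acc += (int32_t)row->values[k] * (int32_t)x[row->cols[k]];
    }
    return acc;
}
```

Storing only the information-carrying weights is one way a model's memory footprint can shrink sharply; hardware that operates on such layouts natively avoids both the RAM cost and the latency of an unpacking pass.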

“Ceva-NeuPro-Nano opens exciting opportunities for companies to integrate TinyML applications into low-power IoT SoCs and MCUs and builds on our strategy to empower smart edge devices with advanced connectivity, sensing and inference capabilities. The Ceva-NeuPro-Nano family of NPUs enables more companies to bring AI to the very edge, resulting in intelligent IoT devices with advanced feature sets that capture more value for our customers,” said Chad Lucien, vice president and general manager of the Sensors and Audio Business Unit at Ceva.

At launch, the Ceva-NeuPro-Nano NPU will be available in two configurations: the Ceva-NPN32, with 32 int8 MACs, and the Ceva-NPN64, with 64 int8 MACs.
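
As a rough back-of-envelope illustration of what the MAC count implies, peak multiply-accumulate throughput scales with MACs per cycle times clock frequency. The clock frequency below is an assumed placeholder for illustration, not a published Ceva specification.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* MAC units per cycle for the two launch configurations (from the announcement). */
    const uint32_t npn32_macs = 32;
    const uint32_t npn64_macs = 64;

    /* Hypothetical clock frequency, purely for illustration (not a Ceva spec). */
    const double clock_hz = 100e6;   /* 100 MHz */

    printf("Ceva-NPN32 peak: %.1f GMAC/s\n", npn32_macs * clock_hz / 1e9);
    printf("Ceva-NPN64 peak: %.1f GMAC/s\n", npn64_macs * clock_hz / 1e9);
    return 0;
}
```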

“Ceva-NeuPro-Nano is a compelling solution for on-device AI in smart edge IoT devices. It addresses the power, performance, and cost requirements to enable always-on use-cases on battery-operated devices integrating voice, vision, and sensing use cases across a wide array of end markets. From TWS earbuds, headsets, wearables, and smart speakers to industrial sensors, smart appliances, home automation devices, cameras, and more, Ceva-NeuPro-Nano enables TinyML in energy constrained AIoT devices,” said Paul Schell, Industry Analyst at ABI Research.
