Sanjive Agarwala

Community Member


AI Everywhere and for Everyone!

14 Sep 2021 • 6 minute read

Cadence Helps Deploy Energy-Efficient, Intelligent Devices at the Edge

AI is evolving quickly and has become a regular part of advanced technology. According to Omdia, the AI market will grow at a compound annual growth rate of 37%, reaching $50 billion by 2025. This growth presents an opportunity for devices to become smarter, more secure, and more energy efficient, and to make decisions faster, by shifting some functionality to the edge. At Cadence, our goal is to help deploy energy-efficient, intelligent, on-device edge computing solutions that meet market requirements from sensors to wearables/hearables to mobile, IoT, and vision.

These needs are dynamic and demand a careful balance of performance, power, and area (PPA) against the workload of each application, as shown in Figure 1. Performance requirements may vary from a few tera operations per second (TOPS) to 100 TOPS, with low latency and optimum network performance.

Figure 1: Dynamic demands for on-device AI

Similarly, power requirements span diverse use cases, from always-on modes to turbo modes, ranging from a few milliwatts to a few watts. Workload interfaces also vary dramatically based on the target sensors and workloads, ranging from 4/8/16-bit fixed point to floating point in multimode solutions.
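As a rough back-of-the-envelope illustration of how TOPS figures relate to the underlying MAC array and clock rate (hypothetical numbers, not a Cadence specification, using the common convention of two operations per multiply-accumulate):

```python
# Hypothetical back-of-the-envelope estimate, not a Cadence spec.
# Common convention: one MAC (multiply-accumulate) counts as 2 ops.

def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak throughput in tera operations per second (TOPS)."""
    ops_per_second = 2 * macs_per_cycle * clock_ghz * 1e9
    return ops_per_second / 1e12

# Illustrative configurations only:
print(f"256-MAC always-on engine at 0.5 GHz: {peak_tops(256, 0.5):.2f} TOPS")
print(f"4096-MAC engine at 1.0 GHz:          {peak_tops(4096, 1.0):.2f} TOPS")
```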

Cadence is proud to be at the forefront of this technological revolution. Our mission is simply to deploy energy-efficient, intelligent, on-device edge computing solutions. Cadence's broad portfolio of low-power, programmable DSPs and AI engines for on-device AI IP meets every one of these needs. Cadence's three-pronged on-device edge AI strategy supports a range of software frameworks, providing pervasive intelligence across several industries.

On-Device AI SoC Needs: A Developer/Customer Perspective

Speed of deployment and cost are the fundamental needs for AI everywhere. At a high level, the requirements are:

  • Low cost, reduced time to market, and product differentiation.
  • Lower power and a scalable solution.
  • Configurable and extensible software and hardware platform for ease of deployment.

Cadence On-Device AI: Cadence Perspective

Cadence helps deploy AI and enables customization of the solution in both hardware and software to reduce time to market (TTM). We believe the major requirements will be:

  • Domain-specific, extensible, and configurable AI solutions.
  • Comprehensive AI software.
  • Scalable platforms from low giga operations per second (GOPS) to high TOPS.
  • Scalability of the entire platform.

The Tensilica Strategy to Expand in AI

Cadence's Tensilica provides domain-specific, configurable, and extensible AI solutions. It offers customers a simple mechanism to deploy its capabilities using embedded CPUs or domain-specific digital signal processors (DSPs).

Along with the hardware, the complete software toolchain from Cadence, including compilers, real-time operating system (RTOS) layers, linkers, loaders, and debuggers, helps in rapid deployment. In addition to its signal processing strength, Tensilica, with built-in AI capabilities, offers even more comprehensive and configurable hardware and software solutions.

Resegmentation: A three-pronged approach

From an AI rapid-development perspective, we are resegmenting our approach into three prongs, as shown in Figure 2: AI Base, AI Boost, and AI Max.

Figure 2: Cadence's three-pronged AI approach

AI Base

This single pipeline for processing and inferencing is built on Cadence's configurable and extensible SIMD (single instruction, multiple data)/VLIW (very long instruction word) architecture. It covers the always-on, low-power wearable and hearable market segments, IoT appliances, surveillance, driver monitoring systems (DMS), and low-end products, including mobile and AR glass technologies. It delivers 30X higher performance than a CPU and 5-10X better energy efficiency.
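As a purely conceptual sketch of what SIMD-style execution buys over a scalar loop (illustrative NumPy, not the Tensilica ISA or toolchain), compare processing one 16-bit audio sample per iteration with operating on the whole block at once:

```python
# Conceptual SIMD illustration (not the Tensilica ISA): apply a fixed
# gain to a block of 16-bit audio samples, scalar vs. vectorized.
import numpy as np

rng = np.random.default_rng(42)
samples = rng.integers(-10000, 10000, size=1024, dtype=np.int16)
gain = 3  # small gain so int16 cannot overflow here

# Scalar-style processing: one multiply per loop iteration.
scalar_out = np.empty_like(samples)
for i in range(samples.size):
    scalar_out[i] = samples[i] * gain

# SIMD-style processing: the whole block in one vectorized operation,
# analogous to an N-way SIMD datapath retiring N results per instruction.
simd_out = samples * gain

assert np.array_equal(scalar_out, simd_out)
```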

AI Boost

AI Boost includes the AI Base features plus advanced AI capabilities, extended to Level 2/Level 3 automotive, mid- to high-end mobile gaming, AR, laptops, drones, and robotics. Coupled with our sparse AI engines, this can be a plug-and-play solution when deployed in tandem with our configurable signal processing DSPs.

Our neural network engines (NNEs) scale from 64 GOPS all the way up to 4 TOPS, delivering 4X higher performance than an NVIDIA Pascal GPU on networks such as ResNet-50. Even compared with the AI Base solution, this is 80% more energy efficient, with more than 4X TOPS per watt (TOPS/W).

AI Max

Using our scalar/vector processors, we add ISA extension capabilities and AI engines based on random sparse compute technology and real-time runtime compression. We build multi-core systems with security and memory capabilities to offer best-in-class AI solutions optimized for target applications.

Covering the middle of the market and stretching to Level 4 automotive, we see 20X higher performance compared to the Pascal GPU and much better TOPS/W and TOPS/mm² than competitors. Cadence's Tensilica options cover a variety of market needs and empower our customers to implement their chosen solutions.

Software is at the center of this solution for ease of use and TTM.

It is difficult to achieve real-time inferencing on edge devices without comprehensive software. Cadence's holistic approach offers environments with interpretive, real-time execution capabilities like delegates and ANNs. Our toolchains are based on pruning and quantization technologies that use sparse engines to offer float-like accuracy. The compute-to-data ratio is of paramount importance here. We use pruning and clustering to reduce model size, and tensor compression to optimize the memory bandwidth needs of the application.
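As a minimal sketch of what pruning and quantization do to a weight tensor (plain NumPy for illustration, not the Cadence toolchain, with arbitrary tensor sizes and thresholds):

```python
# Illustrative sketch (not the Cadence toolchain): magnitude pruning
# followed by symmetric int8 quantization of a float32 weight tensor.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

# Magnitude pruning: zero out the 70% of weights with the smallest |w|,
# producing the sparsity that a sparse compute engine can exploit.
threshold = np.quantile(np.abs(weights), 0.70)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Symmetric int8 quantization: one scale per tensor, values in [-127, 127].
scale = np.abs(pruned).max() / 127.0
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)

# Dequantize to check the accuracy impact of the smaller representation.
dequant = q.astype(np.float32) * scale
sparsity = (q == 0).mean()
err = np.abs(dequant - pruned).max()
print(f"sparsity: {sparsity:.0%}, max abs quantization error: {err:.5f}")
```

The zeros created by pruning are what a sparse engine can skip, and the int8 representation is a quarter the size of float32, which is where the memory-bandwidth savings come from.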

Market use cases 

AI Base-powered intelligent speakers are one of our industry-leading solutions, where customers take advantage of keyword detection, sound analytics, and natural language processing. There is a significant DSP workload in capture, echo cancellation, encoding, decoding, and voice enhancement between the mic and speaker.

Figure: Ultra-low-power concurrent AI CPU

We drive this with ultra-low-power, single concurrent processing, and lightweight AI to offer 20X higher performance than a CPU- or MCU-based solution. This technology is currently deployed in leading smart speakers with long battery life.

AI Boost

AI Boost is being used widely by our customers in the audio and vision segments. With the advent of COVID, online conferencing and collaboration are now critical for businesses across the globe. We deploy our DSP and AI capabilities with AI Boost as lower-power, always-on audio and visual use cases have grown.

Automotive Use Cases

When it comes to automotive use cases, customers have used all three: AI Base, AI Boost, and AI Max. Both signal processing and AI inferencing power in-cabin applications like driver monitoring, sound bubbles, noise suppression, and echo cancellation.

AI engines pair with ADAS applications for multi-sensor, multi-modal light detection and ranging (LiDAR) solutions, scaling up to 4 TOPS. AI Max is deployed for autonomous imaging radars and multi-sensor, high-TOPS applications, combining multiple DSPs and embedded CPUs along with AI Base, a memory system, and security for a fully autonomous solution.

Summary

We are transitioning to a new era of quick, data-driven decision-making with minimal latency by migrating AI to the edge. Rapid design and deployment, low cost, scalability, and energy efficiency are the main catalysts behind the transition.

Customers need a customizable hardware and software solution to meet these demands and the broad spectrum of applications. This calls for a broad portfolio of low-power, programmable DSPs and energy-efficient AI solutions that achieve optimum tera operations per second per watt (TOPS/W) and per square millimeter (TOPS/mm²).

To take advantage of these changes, Cadence provides rapid-deployment solutions spanning sensor fusion, face detection, sound analytics, surveillance, drone technologies, and more. Our comprehensive software spans every market segment and is built for both now and the future. We're enabling AI everywhere, for everyone.

Learn more:

  • Cadence AI-IP Platform
  • Tensilica Processor IP - Cadence
  • Tensilica HiFi DSPs for Audio, Voice, Speech, and AI - Cadence
  • Automotive Solutions - Cadence
  • AI / Machine Learning Solutions - Cadence
