Vinod Khera

How AI-Based Cadence Cerebrus Improves Performance and Reduces Area for TI

7 Nov 2023 • 5 minute read

Microcontrollers (MCUs) have become the backbone of embedded designs and power a wide range of applications. Their importance cannot be overstated, and they represent an enormous opportunity for chip manufacturers: the MCU market is projected to reach a staggering USD 60 billion by 2030.

[Figure: MCU market projection. Courtesy: Precedence Research]

In today's fast-paced technological world, with its vast array of applications, there is a wide variety of MCUs to choose from, each with its own peripheral and memory requirements. These variations in peripherals and memory make it challenging for chip designers to fine-tune the synthesis and place-and-route (PNR) recipe for each MCU. But fret not, as there is a solution to this conundrum. At CadenceLIVE India 2023, Texas Instruments (TI) revealed that incorporating Cadence Cerebrus technology helped them improve the area of a PPA-critical design by 4.4% and decrease violating paths by 26x, cutting manual effort in timing Engineering Change Order (ECO) cycles by one week. Moreover, Cerebrus demonstrated noteworthy improvements in a flat system on chip (SoC), even with constrained physical boundaries, enabling TI to push architectural limits within a tight timeframe. Despite the frequency push, they maintained a 7.37% gain in standard cell area.

SoC Timing Closure Challenges

The increasing density and reduced die sizes pose many challenges. Before getting into the details of the solution and results, let's have a quick look at the SoC timing closure challenges being faced by chip designers.

  • SoC die size can be I/O-limited or macro-limited
  • Legacy requirements make SoCs rigid (I/O or macro placement)
  • Spin-off designs do not have the luxury of exploring ideal placement for fixed components
  • Proprietary cores and reused IPs prevent acting on architectural feedback
  • The die size is finalized before the final dash to freeze the "probe coordinates"
  • Parallel ongoing activities related to the I/O ring, power grid, floorplan, and constraint development, along with incremental RTL changes during the trial

All these issues happen in parallel, making timing closure, synthesis, and PNR completion very difficult on such a tight, restricted schedule. This is where Cadence Cerebrus becomes a game changer: the AI-based, self-learning tool searches for the best results based on a cost function provided by the end user.
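To make the idea of a user-provided cost concrete, here is a minimal illustrative sketch of how a weighted PPA cost could be computed from a scenario's reported metrics. The metric names, weights, and scaling below are assumptions for this example only; they are not Cerebrus's actual interface.

```python
from dataclasses import dataclass

@dataclass
class ScenarioMetrics:
    """PPA metrics reported for one implementation scenario (illustrative fields)."""
    tns_ns: float              # total negative slack in ns (zero or negative)
    wns_ns: float              # worst negative slack in ns (zero or negative)
    std_cell_area_um2: float   # standard cell area in square microns
    total_power_mw: float      # total power in mW

def scenario_cost(m: ScenarioMetrics,
                  w_timing: float = 1.0,
                  w_area: float = 0.5,
                  w_power: float = 0.25) -> float:
    """Combine PPA metrics into a single cost; lower is better.

    The weights let a team emphasize what is most critical for the design,
    e.g., timing for a frequency push or area for a cost-sensitive MCU.
    """
    timing_penalty = abs(min(m.tns_ns, 0.0)) + 10.0 * abs(min(m.wns_ns, 0.0))
    return (w_timing * timing_penalty
            + w_area * m.std_cell_area_um2 / 1e6
            + w_power * m.total_power_mw / 100.0)

# Example: keep the cheaper of two candidate scenarios.
base = ScenarioMetrics(tns_ns=-35.0, wns_ns=-0.30, std_cell_area_um2=2.1e6, total_power_mw=480.0)
cand = ScenarioMetrics(tns_ns=-12.0, wns_ns=-0.18, std_cell_area_um2=2.0e6, total_power_mw=470.0)
best = min((base, cand), key=scenario_cost)
```

Driving down this single cost, rather than any one metric in isolation, is what guides the scenario exploration in the use cases below.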

Solution

TI reported that Cadence Cerebrus demonstrated a significant power, performance, and area (PPA) improvement in a flat, macro-dominated SoC with restrictive physical boundaries, while pushing architectural limits on a tight schedule. Deploying Cadence Cerebrus gave TI a way to achieve PPA improvements that are otherwise not possible through the regular flow. Below are some use cases TI presented to showcase the area and performance improvements achieved by leveraging Cadence Cerebrus.

Use Case 1

TI considered a device with the following characteristics and numerous placement concerns for the macros and I/Os:

  • Macro-dominated SoCs with more than 70 macros
  • 6 million instances
  • 30+ analysis views
  • Flat timing closure

The "cold start" was run on a trial RTL with the macro list complete, 95% of the RTL and constraints in place, and an acceptable base timing closure. It took 22 days to complete and delivered a 4.2% area gain. The resulting model file was used as input for a "warm start" on the next RTL release, which delivered an area gain of 4.5% and took 18 days to complete. TI then used the "replay" feature of Cerebrus with the best scenario from the "warm start" to get the same benefit as the "warm start" at a cost of just 10 hours of runtime compared to the base run!
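One way to picture the cold start, warm start, and replay progression above is as an optimization loop that can be reseeded with an earlier run's best result instead of starting from scratch. The toy search below is purely conceptual; the knob names and search strategy are invented for illustration and say nothing about Cerebrus's internals.

```python
import random

# Invented flow "knobs" a scenario might vary: name -> (min, max).
KNOBS = {"place_effort": (0, 3), "useful_skew": (0, 1), "max_density": (60, 85)}

def evaluate(settings: dict) -> float:
    """Stand-in for running the flow on one recipe and scoring its PPA cost."""
    ideal = {"place_effort": 3, "useful_skew": 1, "max_density": 72}
    return sum(abs(settings[k] - ideal[k]) for k in settings)

def explore(start: dict | None = None, iterations: int = 50) -> dict:
    """Cold start when start is None; warm start when seeded with a saved best recipe."""
    best = start or {k: lo for k, (lo, hi) in KNOBS.items()}
    best_cost = evaluate(best)
    for _ in range(iterations):
        cand = {k: random.randint(lo, hi) for k, (lo, hi) in KNOBS.items()}
        if (cost := evaluate(cand)) < best_cost:
            best, best_cost = cand, cost
    return best

cold_best = explore()                                # cold start: long, wide exploration
warm_best = explore(start=cold_best, iterations=10)  # warm start: reuse what was learned
replay_cost = evaluate(warm_best)                    # replay: rerun only the single best scenario
```

The pattern mirrors TI's numbers: the cold start pays the full exploration cost once, the warm start converges faster on the next RTL release, and replay skips exploration entirely.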

Also, by leveraging Cadence Cerebrus, TI achieved a direct improvement in utilization, with a 3.5% reduction in density and a reduction in hotspots, leading to fewer DRC violations. Further, they were able to achieve:

  • 3X reduction in TNS at the post-route stage, maintained at signoff
  • 26x reduction in setup violations, with more than a 100ps reduction in WNS on critical IPs
  • Hold numbers increased slightly but were easily fixed with the help of TSO
  • The WNS improvement resulted in an almost one-week reduction in the timing ECO cycle
  • Improvement in critical timing paths due to logic restructuring in the Cerebrus run

Use Case 2: Frequency Push

For TI, timing and performance are key measures, so they considered a macro-dominated SoC with more than 160 macros. TI deployed Cadence Cerebrus for performance improvement in this timing-critical SoC with:

  • Flat timing closure
  • 60+ views
  • 5M instances

A Cadence Cerebrus "cold start" was deployed initially and delivered an 8% area gain. TI designers observed that both the "base" and Cadence Cerebrus runs met timing comfortably, so the system clock frequency was increased by 5 MHz. The "warm start" produced a positive TNS shift in this 5 MHz frequency push experiment, on a design 2X the size of Use Case 1. TI designers sustained a 7.37% standard cell area gain despite the frequency push.

Also, they noticed a direct improvement in utilization and a reduction in hotspots, enabling faster DRC closure.

Key Features Leading to the Adoption of Cadence Cerebrus by TI

  • It takes a user-customized flow and produces scenarios based on it
  • These scenarios are judged by their cost, a function of the PPA metrics
  • Cadence Cerebrus runs multiple scenarios in parallel, and the AI engine determines whether to stop a scenario, let it continue, or spin it off into more scenarios (see the sketch after this list)
  • This approach optimizes the flow and reduces the cost of running scenarios
  • It lets designers choose the PPA metrics used in the scenario cost calculation according to the design's criticalities
  • The UI gives a clear HTML view of the PPA metrics and the percentage improvement in cost
  • Engineers retain flexibility, as they can select a scenario even if it has been discarded
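As a rough mental model of the stop/continue/spin-off behavior mentioned above, the sketch below scores running scenarios against the cost function and applies a simple per-scenario policy. The thresholds and policy are invented for illustration and are not the decision logic of the Cerebrus AI engine.

```python
from dataclasses import dataclass
import random

@dataclass
class Scenario:
    settings: dict              # flow knobs for this scenario
    cost: float = float("inf")  # scenario cost, a function of its PPA metrics

def decide(s: Scenario, best_cost: float) -> str:
    """Illustrative policy: prune clearly losing scenarios, fork promising ones."""
    if s.cost > 1.5 * best_cost:
        return "stop"        # no longer worth the compute
    if s.cost <= 1.05 * best_cost:
        return "spin_off"    # explore variants near a promising recipe
    return "continue"

def spin_off(parent: Scenario) -> Scenario:
    """Create a child scenario by perturbing one knob of the parent."""
    child = dict(parent.settings)
    knob = random.choice(list(child))
    child[knob] += random.choice([-1, 1])
    return Scenario(settings=child)

# Usage: a pool of parallel scenarios whose costs arrive from separate flow runs.
pool = [Scenario({"place_effort": e, "max_density": 75}, cost=10.0 - e) for e in range(3)]
best = min(s.cost for s in pool)
actions = {id(s): decide(s, best) for s in pool}
```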

Conclusion

Cerebrus demonstrated significant PPA improvement in a flat, macro-dominated SoC with restrictive physical boundaries, pushing architectural limits on a tight schedule.

Use Case 1

  • 4.4% area gain on a PPA-critical design
  • 26x reduction in violating paths, directly decreasing manual effort in timing ECO cycles by one week

Use Case 2

  • Positive TNS shift in the 5 MHz frequency push experiment using a "warm start" on a design 2X the size of Use Case 1
  • Able to sustain a 7.37% standard cell area gain in spite of the frequency push
  • Direct improvement in utilization and reduction in hotspots, enabling faster DRC closure in both use cases, while the "replay" feature saves runtime

Resources

  • Cadence Cerebrus Intelligent Chip Explorer
  • Machine Learning Full-Flow Chip Design Automation
  • Keep up with the revolution—Cadence Cerebrus Intelligent Chip Explorer

If you missed the chance to watch the presentation live, register at the CadenceLIVE India On-Demand site to watch it and all other presentations.

