
Author

Paul McLellan

Announcing Sigrity X

16 Mar 2021 • 6 minute read

  There are many different computational software algorithms used in EDA. One challenge of EDA is that design groups are always creating the next generation of SoCs on the current generation of processors. In the 1990s and 2000s, however, the microprocessor companies—mainly Intel, but also Sun, HP, Digital, and others—solved this problem by increasing the performance of processors by about 50% per year. Some of this was raw Moore's Law, increasing the performance of the underlying silicon without creating a power problem. And some was from processor architects coming up with smarter ways to do out-of-order execution, branch prediction, and all the other tricks. Moore's Law increased the clock cycle frequency, and the architectural tricks increased the instructions-per-cycle (IPC). These multiplied together. So if you needed higher performance, you just had to wait. Life was good!

Then life stopped being so good. Two things happened. First, it became impossible to keep increasing the clock frequency of microprocessors due to power constraints. Second, the architects pretty much ran out of tricks. Moore's Law was not over, in the sense that you could still put more and more transistors on a chip, but increased processor performance was no longer delivered as increased single-thread performance but rather as an increased number of cores. I coined the term "Core's Law" for the observation that the number of cores was increasing exponentially; it just wasn't noticeable at first because we were on the flat part of the curve. However, the name never caught on. Now that processors have 48 cores, or even 128 cores, the trend is much more obvious. What is less obvious is how to adapt the computational software algorithms to this new normal of more cores.

As I put it in my post Under the Hood of Clarity and Celsius Solver:

Under the hood there is a massively parallelized matrix solver. This is a breakthrough algorithm and is part of Cadence's secret sauce in the system analysis area. It has near-linear scalability without any accuracy loss. It also has virtually unlimited capacity, using a large number of low-capacity machines without requiring any huge machines that are either unavailable when you want them or sitting idle much of the time waiting for you to show up to use them. The whole infrastructure is dynamically deployed into the cloud (or a data center) and has fault-tolerant restart, since, with huge numbers of machines, rare things happen regularly.

A number of EDA algorithms are implemented as solving a huge number of equations encoded in the form of sparse matrices. A sparse matrix is one in which most of the entries are zero. This means that they can be stored in computer memory very efficiently, since the zeros do not need to be recorded explicitly. Often, these matrices are symmetric, leading to a further saving since only half of the matrix needs to be recorded. This is because many electrical quantities are symmetric: the capacitance from node 1 to node 2 is the same as the capacitance from node 2 to node 1. One of the breakthroughs that Cadence has made in computational software in the last few years is an understanding of how to do matrix algebra with these large sparse matrices across a very large number of cores and/or servers. This technology underlies Cadence's Voltus, Clarity, Celsius, and other tools. To dive a little deeper, see my post System Analysis: Computational Software at Scale.
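
To make the storage saving concrete, here is a minimal sketch (my own illustration, not Cadence's actual solver) of holding a symmetric, capacitance-style matrix in sparse form with SciPy. The node indices and values are made up; the point is that only the nonzero entries, and only one triangle, need to be stored.

```python
# A minimal sketch: storing a symmetric sparse "capacitance" matrix.
import numpy as np
from scipy import sparse

n = 1_000_000                    # a million circuit nodes (hypothetical)
# Hypothetical couplings: node i couples to node j with capacitance c.
rows = np.array([0, 0, 1, 5])
cols = np.array([1, 7, 2, 9])
vals = np.array([1.2e-15, 0.3e-15, 0.9e-15, 0.5e-15])   # farads

# Record only the upper triangle; symmetry (C[i, j] == C[j, i]) means the
# lower triangle never needs to be stored explicitly.
upper = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))
full = upper + upper.T           # expand only when a computation needs it

# A dense double-precision matrix of this size would take roughly 8 TB;
# the sparse form stores just a handful of nonzeros plus index arrays.
print(full.nnz, "nonzeros instead of", n * n, "entries")
```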

And now Sigrity joins the club.

Sigrity X

Sigrity X delivers up to a 10X performance improvement without any loss of accuracy. This is achieved using massively distributed simulation in the cloud (or large on-premises data centers). It is basically the same massively parallel simulation technology that is the foundation of the Clarity 3D Solver. The solver does power-aware signal integrity analysis. One of the biggest challenges with analyzing signal integrity is that everything depends on everything else: power affects temperature, which affects IR drop, which affects timing, which affects signal integrity.

Another new development in the hybrid solver is multi-threaded sweeping. Signal integrity exploration scales linearly with the number of cores: each configuration being explored is independent of the others, so no continuous communication between them is required.
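
As a rough illustration of why a sweep scales this way, here is a sketch (hypothetical, not Sigrity's actual implementation) of an embarrassingly parallel parameter sweep in Python: each configuration runs as an independent job, so adding workers shortens the wall-clock time almost linearly.

```python
# A hypothetical parameter sweep: simulate_channel() stands in for one
# independent signal-integrity simulation of a single configuration.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate_channel(config):
    trace_width_um, dielectric_er = config
    # ... run one independent simulation for this configuration here ...
    return {"config": config, "eye_height_mv": 123.0}   # placeholder result

if __name__ == "__main__":
    # Sweep two made-up layout parameters: 4 x 3 = 12 configurations.
    configs = list(product([3.0, 3.5, 4.0, 4.5], [3.2, 3.7, 4.2]))

    # No communication is needed between configurations, so doubling the
    # number of workers roughly halves the wall-clock time of the sweep.
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(simulate_channel, configs))

    best = max(results, key=lambda r: r["eye_height_mv"])
    print("best configuration:", best["config"])
```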

The Sigrity X technology is available across the range of Sigrity products: PowerSI, PowerDC, XtractIM, SystemSI, and OptimizePI.

But the solver is not the only thing that has changed in the latest Sigrity release. There is a new, easier-to-use user interface known as Layout Workbench. It now supports both light and dark themes (like on your phone), whichever you prefer; your preference might vary with your location and time of day. This is the same setup GUI as the Clarity 3D Solver product.

There is also a new database for 2021. This makes it easier to move simulation files between machines since everything is now encapsulated in a single file for all simulation types. The archive function is also improved to handle any other dependencies.

Here's an example to show how dramatic the improvement is in the new release. This sample design has:

  • 20 layers
  • 68,807 bumps
  • 1,006,136 vias
  • 483,894 traces

With the 2019 PowerSI Hybrid Solver, this analysis required 15 days to complete. With the new 2021.1 Hybrid Solver, using the same number of cores, it completes in 1.5 days, a 10X speedup.

Two hot areas for signal integrity analysis right now are PAM4 and the DDR5 memory interface:

  • PAM4 is a signaling technique that uses four voltage levels and so transfers two bits per (recovered) clock cycle (a short mapping sketch follows this list). It is used for 112G SerDes and will also be used in the upcoming PCIe 6.0 standard (which is not finalized, but the PAM4 part will not change). For more about that, see my posts Signal Integrity for 112G and The History of PCIe: Getting to Version 6.
  • DDR5 is the latest version of the DDR DRAM interface and is gradually becoming a larger segment of the memory interface market. For more about that, see my post 2020 Is the Year of DDR5 (that proved to be a bit optimistic since the DDR5 standard was only finalized and published in July 2020). DDR5 is expected to be the most-used interface by 2022 (although Cadence has been working with Micron on DDR5 interfaces for years — see my post DDR5 Is on Our Doorstep for more on that).
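
To make the PAM4 point concrete, here is a tiny sketch of the bits-to-symbols mapping. The Gray-coded level assignment shown is a common convention and is only for illustration; the exact levels and coding are defined by the relevant standard.

```python
# A minimal sketch of PAM4 symbol mapping: two bits per symbol, so two
# bits are transferred per recovered clock cycle. The Gray-coded mapping
# below is a common convention, not a quote from any particular standard.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def bits_to_pam4(bits):
    """Group a bit sequence into pairs and map each pair to one of four levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return [GRAY_PAM4[pair] for pair in pairs]

# Eight bits become four symbols: the symbol rate is half the bit rate,
# which is why PAM4 doubles throughput at the same Nyquist frequency.
print(bits_to_pam4([1, 0, 0, 1, 1, 1, 0, 0]))   # -> [3, -1, 1, -3]
```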

Experience with the New Release

But don't just take my word for it. Here's Tamio Nagano of Renesas:

Using the new Sigrity 2021 release, important processes for IC package signoff were improved dramatically; simulations that took more than a day to complete can now be completed in just a few short hours. We are excited about the adoption of this new technology, with a proven performance improvement of 10X, for our production designs.

Or if you are not in automotive, how about 5G? Here's Aaron Yang of MediaTek:

Not only can many designs be analyzed 10X faster with the same accuracy level, but the capability has also been extended to larger and more complex designs that previously could not be analyzed. This productivity builder is allowing us to cut weeks off our design cycles.

Learn More

See the press release Cadence Unveils Next-Generation Sigrity X for Up to 10X Faster System Analysis.

Or attend the upcoming CadenceTECHTALK:

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.

