
Stefan Wuensche

Spectre Tech Tips: Spectre X High-Capacity Circuit Simulation

25 Apr 2023 • 4 minute read

For EMIR analysis in particular, I am used to simulating very large designs with Spectre X. These designs commonly have up to a few hundred million nodes, a few million MOSFET devices, and a few hundred million parasitic resistors and capacitors. The related simulations take days, depending on the circuit activity and the EMIR analysis being performed.

Recently, however, I received a huge extracted standard-cell-based digital design, and it was interesting to test Spectre X on it using a regular transient (non-EMIR) analysis. The design has 400 million nodes, 30 million bsimcmg devices, 1,500 million parasitic capacitors, and 500 million parasitic resistors. Furthermore, there were 300,000 measurement statements (.meas) in place for checking timings.

Initially, I started a Spectre X LX simulation (+preset=lx) with the measurements disabled to get a ballpark performance expectation. This simulation used 8 cores, took 1d 22h, and required 800 GB of memory. Next, I ran the same simulation with the measurements enabled. This simulation took 9d 17h. The measurement-related performance degradation was caused by two effects:

  • The measurements cause an overhead at each time step.
  • By default, the measurements in Spectre enforce a time step at the exact measurement point. Therefore, the simulation with the measurements had about 15x more time steps.
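For reference, a baseline run like the first one can be launched from the command line roughly as follows; the netlist name is a placeholder, and +mt (the standard Spectre multithreading option) selects the number of cores:

```shell
# Spectre X in LX mode on 8 cores; design_top.scs is a placeholder netlist name
spectre +preset=lx +mt=8 design_top.scs
```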

I reran the simulation with the measurements but, this time, disabled the measurement-related time-step enforcement with the option mdlthresholds=interpolated. This simulation took 4d 5h.
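If the option is set in the netlist rather than on the command line, one way to do this is shown below, assuming mdlthresholds is accepted as a parameter on a standard Spectre options statement (the statement name simOpts is arbitrary):

```
// Spectre netlist fragment: evaluate measurements by interpolation
// instead of forcing a solver time step at each measurement point
simOpts options mdlthresholds=interpolated
```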

Since we were concerned about the accuracy of LX mode, we next started a Spectre X AX simulation (+preset=ax), this time moving from 8 cores to 32 cores. This simulation took 1d 20h and used 2.2x more time steps than the LX run.

In the meantime, I had also run a Spectre X AX simulation with the enforced measurement time steps. The results of this simulation could serve as the golden accuracy reference. Two questions needed to be answered:

  • What is the error caused by the measurements using interpolation when compared to enforced time steps?
  • What is the error of Spectre X LX mode when compared to Spectre X AX mode?

Interestingly, the simulation error caused by the measurement interpolation was only 0.2%, so we concluded that the mdlthresholds=interpolated option can be used for all future simulations. LX mode, on the other hand, introduced timing errors of 4-5%, which did not meet the required measurement accuracy.
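As a quick sanity check on the trade-off, the elapsed times reported above can be converted to hours with simple shell arithmetic (all numbers are taken from the runs in this post):

```shell
# Convert the elapsed times reported above to hours (integer shell arithmetic)
lx_enforced=$(( 9*24 + 17 ))   # LX, enforced measurement time steps: 9d 17h
lx_interp=$(( 4*24 + 5 ))      # LX, interpolated measurements: 4d 5h
ax32=$(( 1*24 + 20 ))          # AX, 32 cores, interpolated: 1d 20h
echo "LX enforced:     ${lx_enforced}h"
echo "LX interpolated: ${lx_interp}h"
echo "AX 32 cores:     ${ax32}h"
# Speedup from interpolation alone, in tenths (23 means about 2.3x)
echo "speedup x10: $(( lx_enforced * 10 / lx_interp ))"
```

In LX mode, switching the measurements from enforced time steps to interpolation alone recovers roughly a 2.3x speedup.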

Here is a table summarizing the performance observations (design size: 400M nodes, 30M bsimcmg, 1,500M C, 500M R):

| Preset, cores | Timing measurements | Elapsed simulation time, memory usage | Time steps |
|---|---|---|---|
| LX, 8 cores | none | 1d 22h, 800 GB | 1.2k |
| LX, 8 cores | enforced time steps | 9d 17h, 800 GB | 18k |
| LX, 8 cores | interpolated | 4d 5h, 800 GB | 1.2k |
| AX, 8 cores | enforced time steps | 32d 8h, 850 GB | 62k |
| AX, 32 cores | interpolated | 1d 20h, 900 GB | 2.6k |

Here are the learnings from this investigation:

  • For such huge designs, we need a machine with sufficient memory, i.e., 1 TB to 1.5 TB. Otherwise, the simulation may not succeed, or the simulation time may blow up due to memory swapping.
  • The Spectre X multi-core technology is key to getting good simulation turnaround times for such large designs; 32 cores or more are a must.
  • The Spectre default behavior of enforcing time steps at the exact measurement points guarantees perfect accuracy. On the other hand, it degrades performance, and there are situations, like the one discussed here, where it makes sense to disable it.

It was good to see that Spectre X provides a high-capacity simulation engine that can analyze such a huge 400-million-node design with SPICE accuracy within two days. Please keep in mind that simulation performance depends on many factors, such as the design size, the devices and elements being used, and the complexity and activity of the design.

As part of the ongoing Spectre X development, we are also exploring how we can further improve the performance for this design.

Related Resources

  • Spectre Classic Simulator, Spectre Accelerated Parallel Simulator (APS), and Spectre Extensive Partitioning Simulator (XPS) User Guide
  • Introducing Spectre X
  • Getting the most out of Spectre X 

You may also contact your Cadence support AE for guidance.

For more information on Cadence products and services, visit www.cadence.com.

About Spectre Tech Tips

Spectre Tech Tips is a blog series aimed at exploring the capabilities and potential of the Spectre® circuit simulator. In addition to providing insight into the useful features and enhancements in Spectre, this series broadcasts the voice of different bloggers and experts who share their knowledge and experience on all things related to Spectre.


