Especially for EMIR analysis, I’m used to simulating very large designs with Spectre X. These designs commonly have up to a few hundred million nodes, a few million MOSFET devices, and a few hundred million parasitic resistors and capacitors. The related simulations take days, depending on the circuit activity and the EMIR analysis being performed.
Recently, however, I received a huge extracted standard-cell-based digital design, and it was interesting to test Spectre X on it using a regular transient (non-EMIR) analysis. The design has 400 million nodes, 30 million bsimcmg devices, 1500 million parasitic capacitors, and 500 million parasitic resistors. Furthermore, there were 300,000 measurement statements (.meas) in place for checking timings.
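For readers unfamiliar with such measurement statements, here is a minimal sketch of what one timing check might look like, in HSPICE-style .meas syntax. The signal names and the 0.375V threshold are purely illustrative assumptions; the actual statements in this design are not shown in the post:

```
* Hypothetical timing measurement: delay from a rising edge on a_in
* to the resulting rising edge on z_out, both crossing 0.375V
* (signal names and threshold are illustrative, not from the design)
.meas tran tpd_a_z trig v(a_in)  val=0.375 rise=1
+                  targ v(z_out) val=0.375 rise=1
```

With 300,000 such statements in place, the simulator has a considerable amount of extra work to do on top of solving the circuit itself.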
Initially, I started a Spectre X LX simulation (+preset=lx) with the measurements disabled to get a ballpark performance estimate. This simulation used 8 cores, took 1d 22h, and needed 800GB of memory. Next, I ran the same simulation with the measurements enabled; it took 9d 17h. The measurement-related performance degradation was caused by two effects: the simulator has to place additional time steps at the measurement thresholds, and evaluating 300,000 measurement statements adds overhead of its own.
I reran the simulation with the measurements enabled, but this time disabled the measurement-related time step enforcement with the option mdlthresholds=interpolated. This simulation took 4d 5h.
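As a sketch, the option can be passed along with the preset. Note that the exact way to specify mdlthresholds (on the command line versus in a netlist options statement) may depend on your Spectre X version, so treat the placement below as an assumption and check the documentation:

```
spectre +preset=lx design.scs +mdlthresholds=interpolated
```

With this setting, measurement thresholds are evaluated by interpolating between the simulator’s own time points instead of forcing an extra time step at each threshold crossing.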
Since we were concerned about the accuracy of the LX mode, we next started a Spectre X AX simulation (+preset=ax), this time moving from 8 cores to 32 cores. This simulation took 1d 20h and used 2.2x more time steps than the LX run.
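The 32-core AX run can be sketched as follows; +preset=ax is from the post, while +mt=32 is my assumption for the usual way to request 32 threads in Spectre, so verify the flag against your installation:

```
// Sketch of the 32-core AX run; +mt=32 (thread count) is an assumption,
// not confirmed by the post
spectre +preset=ax +mt=32 design.scs
```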
In the meantime, I had also completed a Spectre X AX run with the enforced measurement time steps. The results of this simulation could be used as the golden accuracy reference. Two questions needed to be answered: how much error does interpolating the measurements introduce, and are the LX results accurate enough for the timing measurements?
Interestingly, the simulation error caused by the measurement interpolation was only 0.2%, so we concluded that the mdlthresholds=interpolated option can be used for all future simulations. The LX mode, on the other hand, introduced timing errors of 4-5%, which did not meet the required measurement accuracy.
Here is a table summarizing the performance observations for this design (400M nodes, 30M bsimcmg devices, 1500M parasitic capacitors, 500M parasitic resistors):

| Preset, cores | Measurements                               | Elapsed Simulation Time | Memory Usage |
|---------------|--------------------------------------------|-------------------------|--------------|
| LX, 8 cores   | disabled                                   | 1d 22h                  | 800GB        |
| LX, 8 cores   | enforced time steps                        | 9d 17h                  | 800GB        |
| LX, 8 cores   | interpolated (mdlthresholds=interpolated)  | 4d 5h                   | 800GB        |
| AX, 8 cores   | enforced time steps                        | 32d 8h                  | 850GB        |
| AX, 32 cores  | interpolated                               | 1d 20h                  | 900GB        |
Here are the learnings from this investigation:
It was good to see that Spectre X provides a high-capacity simulation engine that can analyze such a huge 400-million-node design with SPICE accuracy within two days. Please keep in mind that simulation performance depends on many factors, such as the design size, the devices and elements being used, and the complexity and activity of the design.
As part of the ongoing Spectre X development, we are also exploring how we can further improve the performance for this design.
You may also contact your Cadence support AE for guidance.
For more information on Cadence products and services, visit www.cadence.com.
Spectre Tech Tips is a blog series aimed at exploring the capabilities and potential of Spectre® circuit simulator. In addition to providing insight into the useful features and enhancements in Spectre, this series broadcasts the voice of different bloggers and experts who share their knowledge and experience on all things related to Spectre. Click Subscribe in the Subscriptions box to receive notifications about our latest Spectre Tech Tips posts.