Random generation is always a complex task, and differences in results are usually very hard to debug. Besides, generation misbehavior always rings many bells in R&D :-)
A customer reported a random stability issue: the generator (IntelliGen) produced different values from the same seed. One simulation was started from vManager, the other in a Unix shell, and they ran in different run modes (compiled vs. interpreted).
Looking into the (quite complex) environment, we found that the beginning of the simulation was identical, but that the results started to differ as time advanced. I assume some of you have experienced similar behavior in the past.
A first look revealed that complex list manipulations were performed across many levels of nested method calls. Each list -- a list of units -- was manipulated by several list methods (sort, add, unique, etc.). The results were printed to the screen where, after a while, the lists started to differ.
So an idea came to mind: The problem is probably not a generation issue, as the static generation was identical in all cases; rather it is a runtime issue, most likely caused by list manipulation. But how could the way the simulation was launched or the run-mode be responsible for the differences?
With no alternative, we proceeded to debug through the source code step by step, examining the lists after every manipulation. The complexity and deep nesting of the code (there were even recursive methods that touched those lists) resulted in about two days of painstaking analysis without finding a difference. Then we hit pay dirt -- we came across the construct where the lists began to differ. Below is an illustrative reconstruction of that construct (the unit, field, and method names are invented for this example):
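    <'
    -- Illustrative reconstruction -- the unit type, field, and method
    -- names below are invented for this example, not the customer's code.
    unit dut_unit {
        uid : uint;  -- a scalar field, used later for the stable fix
    };

    extend sys {
        collect_and_sort(new_units : list of dut_unit) : list of dut_unit is {
            var all_units : list of dut_unit;
            all_units.add(new_units);          -- append the incoming units
            all_units = all_units.unique(it);  -- drop consecutive duplicates
            -- The sort key is "it" -- the unit itself -- so the items are
            -- compared by their references, not by any of their fields.
            result = all_units.sort(it);
        };
    };
    '>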
So where is the problem? The code specified which list of units was sorted, but not which field to sort on -- the sort argument (it) referred to the unit itself.
Looking up the sort() method in the Incisive/Specman documentation, we found a Note in its description that seemed to suggest a possible clue: when the sort expression evaluates to a struct or unit rather than a scalar, the items are compared by their references -- in effect, by their physical memory addresses -- rather than by any field value.
A scan of the log files showed that the vManager run triggered a garbage collection at some point before the sorting action, while the plain Specman simulation logs showed no such garbage collection. The difference was the result of different memory settings in the two runs.
The bottom line: a garbage collection can change the physical memory addresses of the units in a list, so a sort keyed on the units themselves can produce a different order before and after such a memory operation.
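For completeness, here is a hedged sketch of the kind of fix that makes the order stable, reusing the illustrative dut_unit type from the reconstruction above: key the sort on a scalar field of the unit instead of on the unit reference itself.

    <'
    extend sys {
        collect_and_sort_stable(new_units : list of dut_unit) : list of dut_unit is {
            var all_units : list of dut_unit;
            all_units.add(new_units);
            all_units = all_units.unique(it.uid);  -- key on a field, not the reference
            -- Sorting on a scalar field gives the same order in every run mode,
            -- whether or not a garbage collection has moved the units in memory.
            result = all_units.sort(it.uid);
        };
    };
    '>

Any key works as long as it is deterministic across runs -- a field assigned at generation time, a name, an index -- whereas anything derived from a memory address is not.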