[Please welcome guest blogger Silas McDermott, an Application Engineer in our Field Organization]
There is one very easy step that Specman users can take to roughly double the wall-clock run-time performance of their testbench. In a word: "compile"!
That's right: by simply compiling their e testbench, we've seen customers enjoy substantial performance increases, often up to 2x faster than uncompiled runs (i.e., up to half the uncompiled wall-clock run time). The reason behind this increase is the simple physics of running code through Specman's interpreter, which needs additional time to parse the code on the fly before the simulation phase can start. With pre-compiled code, Specman does a quick read of the compiled library and launches straight into the simulation.
Of course, as the saying goes, "your mileage may vary," and you might not see a 2x improvement. But unless you have written total spaghetti code, you WILL see some performance improvement for very little effort; so, as the marketing folks say, the return on investment is very high.
So if the performance gain from compilation is so dramatic, why doesn't everyone compile their code? To answer this question, consider the following:
Thanks to clever leveraging of e's Aspect-Oriented Programming architecture, Specman can run any mix of compiled and interpreted code. This is a great thing for verification work, since in verification you are always adding new tests on top of a base layer of testbench code, and this simultaneous support of both modes gives users the best of both worlds (i.e., you get the speed of compiled code with the flexibility of tossing new logs on the fire via interpreted mode). Of course, once new tests have matured and are considered part of the standard regression suite, it makes sense to do a full recompile of everything to get the performance benefit. Naturally, this convenience vs. performance "recompile all" tipping point is different for every testbench.
Finally, since interpreted mode works very smoothly, it's not unusual for new Specman users to incorrectly assume that their code is being compiled rather than interpreted every time (especially if the testbench is relatively small). To be sure you are compiling your code, you can simply execute the following two commands:
1) sn_compile.sh -shlib -exe top.e
2) setenv SPECMAN_DLIB ./libsn_top.so
The first command creates a shared library, "libsn_top.so"; the second sets the SPECMAN_DLIB environment variable so that the simulator can find that shared library.
For more information, please see sn_compile.sh in the Specman Command Reference.
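To make the mixed-mode workflow described above concrete, here is a minimal sketch (the file and unit names are hypothetical, invented just for illustration). The base testbench is compiled as shown above, and a new test file is then loaded on top in interpreted mode, extending the compiled code through e's AOP "extend" mechanism:

    // top.e -- the mature base testbench, compiled via sn_compile.sh
    <'
    unit env_u {
        run() is also {
            out("base environment running");
        };
    };
    extend sys {
        env: env_u is instance;
    };
    '>

    // new_test.e -- loaded interpreted on top of the compiled base;
    // AOP lets this layer extend compiled code with no recompile
    <'
    extend env_u {
        run() is also {
            out("new test behavior added in interpreted mode");
        };
    };
    '>

With SPECMAN_DLIB pointing at the compiled library, loading new_test.e layers the interpreted extension directly on top of the compiled env_u.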
Silas McDermott
Application Engineer
Cadence USA
Srinivasan, it's good to hear from you again, and thanks for the many detailed comments! You have made so many great points that rather than try to squeeze in a quick response here, you inspired us to write a whole new posting to address and respond to your notes. Stay tuned ...
Thanks for this very useful tip. FYI: vlsi-verif.blogspot.com/.../specmans-compiled-vs-interprted-mode.html. I couldn't agree more with that post; it is really beneficial to explore what can be pushed to compiled code. Here is some more data from my own experience during my Intel days:
1. We got a 3 to 4x gain in compiled mode, though that data is 5+ years old.
2. There are (were?) some restrictions on the usage of computed macros across such compiled SO files/partitions. Not sure if they are still around. I don't recall all the details off the top of my head, though if someone is interested I can try to recollect.
3. A very *important* aspect is the debuggability of compiled code. The line-stepping feature gets disabled for compiled portions, so if you need to debug with line stepping, go back to loaded mode.
4. Note that "function"-level breakpoints are still possible in compiled mode.
5. If there is a null obj access, compiled mode won't reveal many details, while interpreted/loaded mode will point to the exact issue. We used this facility so often that we in fact automated the process via a script: if there is a null obj access, a separate process spawns off with the same test/seed but in loaded mode (a sketch of such a wrapper appears after this list). This was part of the automation we presented at DesignCon www.iec.org/.../3-wp1.pdf though this specific tip/trick was left undocumented there.
6. Lastly, we did see a few corner-case scenarios wherein the random generation was different in the two modes. Verisity knew about it back then (around 2003?) and said they would fix it sometime in the future. Not sure if that is still an issue. Note that it was not easy to reproduce and was truly a corner case, so don't ask me for a "testcase" now :-)
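A minimal csh sketch of the rerun-on-failure wrapper described in point 5 might look like the following. The helper scripts run_compiled_test.csh and run_loaded_test.csh are hypothetical stand-ins for your own run scripts, and the grep pattern is illustrative; match it to the actual null-object error text your tool prints.

    #!/bin/csh -f
    # Run the compiled test first; on a null object access, rerun the
    # same test/seed in loaded (interpreted) mode so the failure points
    # at the exact source line.
    set test = $1
    set seed = $2

    # Compiled run: SPECMAN_DLIB is assumed to point at the compiled library.
    run_compiled_test.csh $test $seed >& run_${test}.log

    # The error string below is an assumption; adjust to your tool's message.
    if ( `grep -c "null object" run_${test}.log` > 0 ) then
        echo "Null object access detected; rerunning $test (seed $seed) in loaded mode"
        unsetenv SPECMAN_DLIB    # fall back to interpreted/loaded mode
        run_loaded_test.csh $test $seed >& rerun_${test}.log
    endif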