The DVCon 2011 conference was held this week, and the Accellera Universal Verification Methodology (UVM) 1.0 release is breaking records in terms of interest and attendance. UVM 1.0 is a big deal(!) The core functionality is solid and ready for deployment. Accellera held a full-day tutorial on UVM 1.0 on Monday, and during a panel discussion on Tuesday afternoon, AMD and Intel announced that they are in the process of adopting it.
We (I’m wearing my Accellera hat) briefly introduced the industry-proven basic UVM concepts, but spent most of our time talking about the great enhancements and new capabilities in UVM 1.0. For many in the audience, it was hard to map the new features to the existing methodology, to understand what is deprecated, and to know what the recommended use model is. Indeed, the use models of a few of the new capabilities have not been finalized by Accellera. Instead of answering inquiries individually (not a scalable solution), I decided to write down a few high-level notes on each topic. In this first blog I will discuss the transaction-level modeling (TLM 2.0) additions and their impact. I want to emphasize that these notes represent Cadence’s technical views on these topics.
TLM 1.0 ports were heavily used in OVM and in UVM 1.0EA (Early Adopter). The UVM 1.0 release adds a partial SystemVerilog implementation of the Open SystemC Initiative (OSCI) TLM 2.0 capabilities. At DVCon, John Aynsley, author of the TLM 2.0 specification, gave a great introduction to TLM 1.0 and TLM 2.0 concepts and capabilities (one of the best I’ve seen so far for TLM). Later he moved on to the UVM TLM implementation, both in terms of TLM 1.0 and TLM 2.0, covering the benefits and contrasting it with the OSCI SystemC capabilities. His slide is shown below:
The TLM 2.0 standard was created for modeling memory-mapped buses in SystemC. Most of the DVCon discussion was devoted to the concepts of TLM 2.0, with its rich (or complex) set of capabilities. For example, sockets and interfaces, blocking and non-blocking transports, the generic payload, hierarchical connection, temporal decoupling, and more were covered. The main questions asked were: How much of this is relevant to functional verification and, specifically, UVM environments? What do I need to do differently in a UVM verification environment to leverage the TLM 2.0 potential?
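To make the socket terminology concrete, here is a minimal sketch of a blocking-transport target using UVM's TLM 2.0 classes (uvm_tlm_b_target_socket and the generic payload). The mem_target component and its decode logic are hypothetical, for illustration only; only the class and method names come from the UVM base class library:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical memory-mapped target exposing a TLM 2.0 blocking socket.
class mem_target extends uvm_component;
  `uvm_component_utils(mem_target)

  // Target socket parameterized by the implementing class and payload type.
  uvm_tlm_b_target_socket #(mem_target, uvm_tlm_generic_payload) sock;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    sock = new("sock", this);
  endfunction

  // The target must implement b_transport(); the initiator's call
  // blocks until this task returns.
  task b_transport(uvm_tlm_generic_payload gp, uvm_tlm_time delay);
    // ... decode gp.get_address(), service the read/write here ...
    gp.set_response_status(UVM_TLM_OK_RESPONSE);
  endtask
endclass
```

An initiator would declare a matching uvm_tlm_b_initiator_socket and connect the two in a parent's connect_phase().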
Let’s start by focusing on agents that reside within an interface UVC. As you can see below, monitors contain analysis ports. The monitor does interface-level coverage and checking, and distributes events and monitored information to the sequencer, scoreboard, and other components. Nothing changes here between OVM and UVM for this kind of distributed one-to-many communication. While this is trivial, it brings us to Guideline #1: In the monitor, keep using the analysis port.
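As a concrete illustration of Guideline #1, here is a minimal monitor sketch whose analysis port broadcasts to any number of subscribers. The bus_tx transaction, the port name, and the publish() helper are hypothetical, not from a specific UVC:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical bus transaction (names are illustrative).
class bus_tx extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `uvm_object_utils(bus_tx)
  function new(string name = "bus_tx");
    super.new(name);
  endfunction
endclass

class bus_monitor extends uvm_monitor;
  `uvm_component_utils(bus_monitor)

  // One-to-many broadcast: scoreboard, coverage collector, etc.
  // subscribe via their analysis exports/imps.
  uvm_analysis_port #(bus_tx) item_collected_port;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    item_collected_port = new("item_collected_port", this);
  endfunction

  // In run_phase() the monitor would reconstruct bus_tx objects from
  // pin activity and broadcast them; write() never blocks, regardless
  // of how many (or how few) subscribers are connected.
  function void publish(bus_tx tx);
    item_collected_port.write(tx);
  endfunction
endclass
```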
Another communication channel is needed between the sequencer that creates transactions and the driver that sends them to the Device Under Test (DUT). What we have in UVM (introduced in OVM) is a producer/consumer port (uvm_seq_item_pull_port) that has the needed API and hides the actual channels (TLM or others) from the implementation. I know that there was not always agreement on this among all vendors, but Cadence has consistently recommended that users use this abstraction layer, as opposed to the direct TLM ports. TLM 2.0 sockets do not solve all the communication requirements between the sequencer and the driver (for example, the try_next_item semantics are hard to resolve in either TLM 1.0 or TLM 2.0).
Also, as was mentioned in the Accellera tutorial, multi-language support is not yet solved in UVM 1.0 -- for now, this is a vendor-specific implementation. This is a great time to reiterate our existing recommendation. Guideline #2: For sequencer-driver communication, use the abstract producer/consumer ports in your code and avoid using the TLM connections directly. This will keep your code forward compatible with existing or future solutions that the implementation uses (we might need extensions to facilitate cross-language communication). Usage of the high-level functions also allows us, the library developers, to add more functionality to get_next_item() and related calls. Another layer you may need is for stimulus protocol layering. There are multiple ways to implement layering, but Guideline #2 is valid for this use case as well, where one downstream component needs to pull items from a different component. If you stick with the abstract API of the producer/consumer port, you are going to keep your environment safe as we take the liberty of improving the communication facilities for you.
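To illustrate Guideline #2, here is a driver sketch that uses the abstract producer/consumer API -- the seq_item_port built into uvm_driver -- rather than connecting TLM ports directly. The bus_tx type and the drive_to_dut task are hypothetical placeholders:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Assumes a bus_tx sequence item exists in the environment.
class bus_driver extends uvm_driver #(bus_tx);
  `uvm_component_utils(bus_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      // Abstract producer/consumer API: the underlying channel
      // (TLM 1.0 today, possibly something else tomorrow) stays hidden.
      seq_item_port.get_next_item(req);
      drive_to_dut(req);            // hypothetical pin-wiggling task
      seq_item_port.item_done();    // hand control back to the sequencer
    end
  endtask

  task drive_to_dut(bus_tx tx);
    // ... drive the DUT interface signals from tx here ...
  endtask
endclass
```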
Let’s review the other benefits of TLM 2.0 and the value that they can provide to the verification environment. Again, I include John Aynsley’s slide covering the benefits of TLM 2.0 below; see also my analysis of each potential benefit. Let’s review the “value” of these benefits in the context of UVM verification environments.
To solve the main TLM 2.0 requirement -- verifying and integrating SystemC TLM 2.0 models with a SystemVerilog testbench -- Cadence is working within the IEEE 1800 committee to propose extending the DPI to handle passing of objects between different object-oriented languages. Requirements such as passing items by reference or querying hierarchy, and others that are not part of TLM 2.0, will be standardized as language features and will hopefully be supported by all vendors. Cadence is working with multiple users who are asking for this solution. If you wish to support this effort, follow Guideline #5: Join a standardization body or encourage your vendor to support standard multi-language communication :-)
Summary of recommendations regarding TLM 2.0 and UVM:
Guideline #1: In the monitor, keep using the analysis port.
Guideline #2: Use the abstract producer/consumer ports in your code and avoid using the TLM connections directly.
Guideline #3: Check if and how usage of the GP can help your specific verification challenges.
Guideline #4: Remember that the current UVM TLM 2.0 multi-language support is not part of the standard library and may lock you to a specific vendor and implementation.
Guideline #5: Join a standardization body or encourage your vendor to support standard multi-language communication.
I hope that these notes address the multiple concerns I heard about the complexity of TLM 2.0 and the amount of changes required for your existing verification environments. I saw other tutorials that alternate between verification needs and modeling requirements that have little to do with verification.
In summary, if you find the TLM 2.0 extensions to UVM to be complex, don't worry -- you don't really need to bother with them. You will probably find the TLM 1.0 communication more than sufficient for most of your testbench development needs. You might find the Generic Payload useful for abstract modeling of transactions, and you can easily adopt the GP without worrying about the rest of the TLM 2.0 complexity. The main requirement, verifying/integrating SystemC TLM 2.0 models with a SystemVerilog testbench, is not yet part of the UVM standard, so we invite you to join the effort to standardize a solution for this problem.
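For readers wondering how to randomize GP content: assuming your library version does not declare the GP's fields as rand, one common workaround is to subclass uvm_tlm_generic_payload, add rand mirror fields with constraints, and copy them into the payload in post_randomize(). The class and constraint below are a hypothetical sketch of that pattern, not library code:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Sketch: randomizable wrapper around the generic payload.
class rand_gp extends uvm_tlm_generic_payload;
  `uvm_object_utils(rand_gp)

  // rand mirror of the address, with an illustrative constraint.
  rand bit [31:0] rand_addr;
  constraint addr_c { rand_addr inside {[32'h1000 : 32'h1FFF]}; }

  function new(string name = "rand_gp");
    super.new(name);
  endfunction

  // Built-in SystemVerilog callback: after randomize() succeeds,
  // push the randomized value into the GP's own field.
  function void post_randomize();
    set_address(rand_addr);
  endfunction
endclass
```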
Hi Sharon, it is a nice article. I am not able to see the advantages of the Generic Payload (GP). Since the fields of the GP cannot be randomized, it isn't going to be really helpful in verification environments. So what is the main use of the GP? Is there a possibility of extending the GP and adding constraints to the data and address fields, etc.?