A design team at a customer of mine started
out with Specman for the first time, having dabbled with a bit of
SystemVerilog. I can't reveal any details of their design, but suffice to say
they had a fun and not-so-simple challenge for me, the outcome of which I can
share. Unlike some customers (and EDA vendors) who think it's a good test for a
solver to do sudoku or the N-Queens puzzle (see this TeamSpecman blog post /blogs/fv/archive/2011/08/18/if-only-gauss-had-intelligen-in-1850.aspx),
this team wanted to know whether IntelliGen could solve a tough real-world
problem.

The data handled by their DUT comes in as a
2D array of data bytes, which has been processed by a front-end block. The
data in the array can contain multiple errors, some of which will have been
marked as "known errors" by the front-end. Other "unknown" errors may also be
present, but provided that the total number of errors is less than the number
of FEC bytes, all the errors can and must be repaired by the DUT. If too many
errors are present, it is not even possible to detect the errors, so the
testbench must generate the errors carefully to avoid meaningless stimulus. It
also needs to differentiate between marked and unmarked errors so that the
DUT's corrections can be tested and coverage performed based on the number of
each type of error.
This puzzle is rather more complex than the
N-Queens one: we have multiple errors permitted on any single column or row in
the array, and there are three possible states for each error: none, marked, and
unmarked. There is an arithmetic relationship between the error kinds: for the
same FEC budget, twice as many marked errors as unmarked errors can be
corrected. Furthermore, unlike
the N-Queens, a test writer may wish to add further constraints such as
clustering all the errors into one row, fixing the exact number of errors, or
having only one kind of error.
First we define an enumerated type to model
the error kind:
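A minimal sketch of such a type in e (the type and literal names here are illustrative, not necessarily the original ones):

```e
<'
// Each byte position is in one of three states: clean,
// carrying an error the front-end has marked, or carrying
// an unmarked (unknown) error.
type error_kind_t : [NO_ERROR, MARKED, UNMARKED];
'>
```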
By modelling the 2D array twice, once as
complete rows and once as complete columns, we can apply constraints to a row or
column individually, as well as to the entire array. We only look at whether to
inject an error, not what the erroneous data should be (this would be the second stage). I've only shown the row-based model here, but the column-based one is identical
bar the naming.
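A sketch of the row-based model in e; only row_s and the num_known/num_unmarked fields are named in the post, so the remaining names and the exact constraint forms are assumptions:

```e
<'
struct row_s {
    // One element per column position along this row
    col : list of error_kind_t;

    // How many errors of each kind this row carries
    num_known    : uint;
    num_unmarked : uint;
    keep num_known    == col.count(it == MARKED);
    keep num_unmarked == col.count(it == UNMARKED);

    // An unmarked error costs twice the correction capacity
    // of a known (marked) one
    effective_errors : uint;
    keep effective_errors == num_known + 2 * num_unmarked;
};
'>
```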
The row_s represents one row from the 2D
array, with each element of "col" representing one column along that row. The
constraints on num_known and num_unmarked limit how many errors will be
present. These are later connected to the column-based model in the parent
struct.

The effective_errors field and its
constraints model the relationship between the known and unmarked errors,
whereby twice as many known errors as unmarked errors can be corrected.
Next we define the parent struct which
links the row and column models to form a complete description of the problem.
Here "cols" and "rows" are the two sub-models, and the other fields provide the
top-down constraint linkage.
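In e, the parent struct might look like the following sketch; error_map_s and row_s are named in the post, while col_s and the dimension and total fields are assumed names:

```e
<'
struct error_map_s {
    // Basic dimensions, set by the base environment
    num_rows      : uint;
    num_cols      : uint;
    num_fec_bytes : uint;

    // The two views of the same 2D array
    rows : list of row_s;
    cols : list of col_s;

    // Top-down controls, intended for test writers
    total_known     : uint;
    total_unmarked  : uint;
    total_effective : uint;
};
'>
```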
The intent is that the basic dimensions are set within the base environment, and the remaining controls are
used for test writing.
Next, we look at the constraints which
connect the row and column models together. The first things to do are to set
the dimensions of the arrays based on the packet dimensions, and to cross-link
the row and column models. These are structural aspects that cannot be
changed. The rest of the constraints tie together the number of errors in each
row, column, and the entire array. By using bi-directional constraints, we are
allowing the test writer to put a constraint on any aspect.
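Under the same assumed names, the linking constraints could be sketched as an extension of the parent struct:

```e
<'
extend error_map_s {
    // Structural: fix the dimensions of both views
    keep rows.size() == num_rows;
    keep cols.size() == num_cols;
    keep for each (r) in rows { r.col.size() == num_cols };
    keep for each (c) in cols { c.row.size() == num_rows };

    // Cross-link: cell (ri, ci) must agree in both views
    keep for each (r) using index (ri) in rows {
        for each (cell) using index (ci) in r.col {
            cell == cols[ci].row[ri];
        };
    };

    // Bi-directional: tie the totals to the per-row sums,
    // so a test writer may constrain either side
    keep total_known     == rows.sum(.num_known);
    keep total_unmarked  == rows.sum(.num_unmarked);
    keep total_effective == rows.sum(.effective_errors);

    // All injected errors must stay within the FEC budget
    keep total_effective <= num_fec_bytes;
};
'>
```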
And that's it! With just that small amount
of information IntelliGen can generate meaningful distributions of errors in a
controlled way. Test writers can further refine the generated error maps with
simple constraints that are actually quite readable:
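For example, a test extension along these lines (illustrative, using the same assumed field names) could cluster all the errors into the first row while keeping the packet mostly correctable:

```e
<'
extend error_map_s {
    // Put every error in row 0
    keep for each (r) using index (ri) in rows {
        ri != 0 => r.effective_errors == 0;
    };

    // A named constraint that a later extension can disable
    // or replace by name
    keep packet_mostly_correctable is all of {
        total_effective < num_fec_bytes;
    };
};
'>
```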
Notice another little trick here: the use of a named constraint,
"packet_mostly_correctable". This allows a test writer to later extend the
error_map_s and disable or replace this constraint by name, which is far easier
than figuring out the "reset_soft()" semantics and a whole lot more readable.
Note that for best results, this problem should be run using Specman 13.10 or
later, due to various improvements in the IntelliGen constraint solver.