Even in the most optimistic of discussions, there is often an "elephant in the room" that people don't say much about. Such was the case at the DVCon SystemC Day Feb. 22, where despite strong attendance and upbeat presentations, there was only a small amount of discussion about the need for third-party transaction level modeling (TLM) IP.
The portion of SystemC Day I attended was a North American SystemC Users Group (NASCUG) meeting. It started with an Open SystemC Initiative (OSCI) update by OSCI chair Eric Lish and a keynote by industry analyst Gary Smith. In addition to several technical presentations about SystemC modeling, it included a talk by Brian Bailey on TLM design and verification, which I blogged about previously. The only discussion of commercial TLM IP came in question-and-answer sessions. Gary
Smith noted that an International Technology Roadmap for Semiconductors (ITRS) working group found that IP
blocks with over a million gates are unusable by most customers. Integrators
need "modifiable" blocks they can assemble and integrate quickly, without
losing too much of the original verification environment. "You can't develop
that kind of block at the RT [register transfer] level," Gary said.
(Photo caption: Good turnout and strong technical sessions at the SystemC Day NASCUG meeting - but what about IP models?)

"The difference between RTL modeling and transaction level modeling is that you can
make money at the transaction level," Gary
said. "We have to get IP providers to move up to the transaction level." He
noted that large customers pulled their IP development in house when large
blocks came out, and suggested that it may be because they can't get modifiable
IP at the RT level.
I was immediately reminded of a recent blog by Dan Nenni stating that only about 30 percent of the IP that could be outsourced actually is outsourced. Could the lack of commercially available TLM IP be a contributing factor?

One new company, launched by Jack Donovan, former president of XtremeEDA, aims to
provide hardware and software IP for use with high-level synthesis. "What's the
business model for selling IP at a high level? I'm not sure what it is yet," he
said in response to a question after his NASCUG presentation on managing code
complexity. He noted that there will have to be a "high degree of assurance"
that TLM models and RTL models operate in the same way.
One presentation showed that some TLM modeling is occurring. Herve Alexanian of Sonics described an Open Core Protocol (OCP)
modeling kit from the OCP-IP organization
that supports various levels of TLM abstraction, ranging from TL0 (RTL) to TL4
(loosely timed transaction level), based on the OSCI SystemC TLM-2.0
specification. In another presentation, David Black of XtremeEDA offered some practical
suggestions for developing TLM models without clocks.
As the Brian Bailey presentation showed, good progress is being made toward a TLM-driven design and verification flow that can greatly boost productivity and shorten time-to-market. Tools such as the Cadence C-to-Silicon
Compiler are enabling that flow. And as noted in a SystemC Day tutorial,
work is ongoing on a standard SystemC synthesizable subset.
Now we need
commercial IP providers to come on board with SystemC TLM models. Will they
hear the call?
I suspect that the lack of commercially available TLM IP is a factor in the reluctance to use outsourced IP, but in many cases the reluctance stems from the erroneous perception that adapting outsourced IP to internal needs takes more effort than developing it internally.

It seems that what is needed is built-in IP 'adaptability by design,' where attention is devoted to deconfigurability in addition to configurability - allowing features to be stripped and interfaces simplified down to just what is needed. That also makes matching the RT and TLM views a simpler proposition, not to mention the power savings and the like. You may be able to get the 'transaction' right, but what about the context, legacy, and boundary conditions that may not be easy to abstract?

Good points, Camille. As noted in the DVCon Wednesday panel (and reported in my March 1 blog), using external IP that's not designed for integration can take even more effort than doing a design from scratch. Adaptability and configurability are important, so long as the needed features are there. What will not work well is a "least common denominator" approach.
Glad to see the new capabilities and standards being developed through OCP and the new startups and thanks for posting the helpful links.