One of the issues that has hindered the progress of using Virtual Platforms for early software development is missing models. I recall seeing Axys Design's Maxsim tool back around 2001 and thinking how cool it was. All the user had to do was drag and drop models and wire them together to create a working Virtual Platform. At the time I was working at Axis Systems so we always called Axys "the other Axis". Axys was eventually acquired by ARM in 2004, but the block diagram editor still exists today in Fast Models from ARM. After the coolness of the demo wore off, I started thinking about the latest and greatest SoC being developed by Samsung, TI, or whoever, and realized that the library to drag-and-drop from probably was missing almost everything needed to create a complete SoC Virtual Platform. I'm sure the ARM CPU was there and probably the memories and a few relatively simple peripherals, but that was about it. Where would the rest of the models come from? Would the Virtual Platform for early software development always suffer from the Missing Model Syndrome?
Over the years it seems a few different approaches have been taken to address the case of the missing models.
The first is simply to brute-force as many models as possible to cut down on the number of missing models. This approach is a lot of work, but results in a useful library of models with far more coverage. A drawback of this approach is that SoC designs will always have custom blocks that are the differentiating features of a device (and usually the most complex ones), so there is no way to cover the models for all of the new or proprietary design blocks. Another drawback is that all of the effort is in the model library, so the models tend to be closed source or black-box models that cannot be modified by the user. The bummer for users is that they are now picking tools based on model availability, not based on what they really want to use the models for.
The second approach is to provide a language and tools that enable users to do the model creation themselves; the "teach a man to fish" approach. Since Virtual Platforms are abstract and start from the programmer's view of the device, a programming language or description of what the hardware does can be used. This has the advantage of letting users create models without vendors having to add anything to a library. A drawback of this approach is that it relies on the person writing the description to correctly describe the hardware behavior, in most cases from a paper specification. The result can be a model that doesn't behave like the actual device, leading to software that works great on the model but doesn't work on the actual silicon. Of course, a mix of both approaches is possible, but neither is ideal.
Could the Missing Model Syndrome be the reason why stand-alone Virtual Platform tools have yet to be able to Cross the Chasm to mainstream users?
One of the nice things about emulation is that when you visit a company interested in using emulation there is never a question about the input that goes into the emulator. The input to emulation is RTL code, and every company has it since it's the starting point for chip implementation. Virtual Platforms are starting to benefit in the same way due to growth in High Level Synthesis tools like C-to-Silicon.
Is the lack of a connection between Virtual Platform creation and hardware implementation a factor in the Missing Model Syndrome?
Certainly, models (and a fast simulator) are necessary, but not sufficient to provide the benefits needed to improve software quality. I really try to control my reaction when I hear people say that a free simulator is good enough, but it's not easy.
Simulation is only the base upon which to build features that enable engineers to do the tasks they need to do in a shorter time with improved quality. I have said many times: step one is to run, step two is to debug (because software never works the first time), and only at step three do you really get to what you wanted to do in the first place, to verify software quality and tune system performance. Virtual Platforms provide value because they are capable of things that are very hard to do with real hardware. Some examples include:
I'm sure there are many more you can think of. My most interesting Virtual Platform discussions are those that get beyond talking about models and simulators and cover the real issues facing engineers trying to get systems working in a shorter time and with higher quality.
Spot on Jason
The value obtained from using a virtual platform greatly surpasses the cost of modeling it. Companies need to get over this barrier and start thinking about the results they would get rather than just the enablement. Real ROI calculations and experience quickly demonstrate this fact.
Virtual Platforms have wide usage covering architecture, verification, software development, customer enablement, ... To clearly calculate this ROI, companies must think of the virtual platform as an infrastructure serving all these use cases with +/- 20% change to a foundation model. Today too many companies focus on a single subset of a use model, thus misquantifying the true return of using virtual platforms.
The additional challenge is the maintenance of all these models. In the old days, you could get away with only having RTL models. Now we need RTL models and higher-level models. These higher-level models are a bit ad hoc and not standardized.
Some may use Matlab (M), C++, C, SystemC, etc. for these higher-level models. Interoperability of these models is a problem. Plugging in different models and mixing abstractions (e.g. part SystemC, part RTL, etc.) is also a problem.
Another aspect is the groups that develop the models. Often a systems group develops high-level models to gain insight into performance, power, cost, size, etc. These models are often not detailed enough to be of much use to the implementation group, so they will re-code the models, primarily in RTL.
In summary, virtual prototyping won't be mainstream until there is a fully integrated flow from high-level models to implementable models. As you hinted, High-Level Synthesis starts to ease this burden by allowing an implementable flow from high-level models. In other words, rather than fixing the problem of interoperability of disparate models, it's better to settle on a higher level of abstraction for implementation, so we can go back to having just one model to maintain.
That is, until we have synthesis flows from virtual platform to implementation, the virtual platform (and all other high-level models) will be disconnected from the downstream implementation, and will continue to be a maintenance problem, avoided whenever possible. -- ya think?
This is a great summary of VP benefits, Jason, and I hope that it will be anonymously posted by coffee machines all around the electronics industry. I'm bookmarking it! :)
But ... the problem is that when one decides to pursue their next design, the Missing Model Syndrome comes back. Part of the answer is ESL, hand in hand with HLS as you point out, once your high-abstraction model of a new IP is done. Then there is IP reuse, TLM and/or RTL, maybe with an assist if IP-XACT XML files come along with the individual part or library.