Historical trends in languages
Many of us have traveled around the world, and while we can often communicate with local people in our own language, we realize it is best to communicate using the local language. It helps to "break the ice" if you at least try to use some of the local language, perhaps from a guide book. The moment you do it, barriers are removed, and you are more trusted. All this considered, it is still a significant handicap to use the wrong language for the task at hand, especially if you need to have a serious conversation. We see the same barrier (or need) as we look at design and verification languages.
In 1989, a short time after I started my career at Daisy Systems, I enjoyed teaching customers VHDL as a new design language replacing schematic diagrams. Several years later, I witnessed the market's transition to Verilog and eventually the migration to mixed-level design using both Verilog and VHDL. I saw the transition from OVL to PSL to SVA as assertion languages, and eventually support for the mix of all these languages (design, verification, and assertion).
In the next two blog posts, I would like to discuss the direction of the languages that will be chosen for TLM (or ESL) design and verification, addressing hardware developers. Obviously, we can't ignore the other target audience - software developers. Software developers have no "religious" preferences about hardware design languages. All they want is speed and early access to the hardware platform while they continue to work in their familiar development environment.
The new hardware design language - C/C++ vs. M vs. SystemC
As we at Cadence engage with customers using high-level synthesis (HLS), we see two camps: one that drives SystemC (with TLM) as the main design language and another that drives C/C++ as the main design language. And while other languages are being used at a higher level of abstraction (such as MATLAB's M language), I do not see them becoming part of the mainstream design flow in the near future, since their output produces quality of results that is far from ideal for implementation.
Cadence endorses the industry-standard SystemC extension of C/C++ and drives this as the key design language for new IP development. Cadence also provides tools that let customers model simple datapath functions in pure ANSI C or C++, and then automatically import or convert them into a complete SystemC environment. SystemC has many built-in capabilities for supporting hardware modeling at an abstract level, as well as standard TLM APIs for software virtual prototypes. We see SystemC as the basis for an order-of-magnitude increase in productivity - one can model an entire hierarchical design (datapath, control logic, or complex bus protocols) in an industry-standard way to understand and validate concurrency and complex code, while using high-abstraction untimed C++ for most of the main functionality. One can synthesize the hardware design and get a virtual model from a single source code - eliminating today's huge risk where the software is designed to work on a virtual platform that operates similarly to, but not the same as, the actual hardware platform. ("What you see is what you get!") This new approach can combine three use models (the algorithmic model, the virtual prototyping model, and the architectural model that can be implemented) into one.
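To make "untimed C++ for most of the main functionality" concrete, here is a minimal sketch of the kind of pure ANSI C++ datapath function the post describes. The function name, tap count, and coefficients are my own illustration, not from any Cadence flow; the point is that the code has no notion of clocks or timing, which an HLS tool would later derive from separate constraints.

```cpp
#include <array>
#include <cstdint>

// Example filter taps - arbitrary values chosen for illustration.
constexpr std::array<int16_t, 4> kCoeff = {3, -1, 4, 2};

// An untimed 4-tap FIR filter in plain C++. Fixed-width integer
// types stand in for hardware bit-widths; there are no clocks,
// resets, or handshakes - the function is pure data in, data out.
int32_t fir4(const std::array<int16_t, 4>& window) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < kCoeff.size(); ++i) {
        acc += static_cast<int32_t>(kCoeff[i]) * window[i];
    }
    return acc;
}
```

In the flow the post describes, a function like this could be written and verified as ordinary software, then imported into a SystemC module (with TLM ports for the virtual-prototype side) while the arithmetic core stays untimed.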
So where do we go from here? Will RTL disappear? Not likely.
RTL will continue to be part of the flow, much as gate level is today, and we will see more designs whose RTL is generated automatically as an output rather than hand-coded.
So, is it worth it? Why should we change? Our customers report 3-4X better productivity during the creation of their IP, and 10X when they reuse it or explore architectural trade-offs. As design costs increase and IP reuse becomes a major part of the development process, a 10X productivity improvement cannot be ignored by executive management.
Do you agree? I would like to hear your opinion too. My next blog post will be dedicated to the TLM verification language.
You raised an excellent question. I think your concern is valid, and there is a need to consider the implementation implications as you write SystemC code with the intent to use it for implementation. In the early days of logic synthesis, designers spent a lot of time analyzing their schematics. As experience and confidence built up, they found much less need to do this and started to rely more on other tools. The transition to high-level synthesis will be similar, and the deliverables haven't changed: the designer's job is still to get reliably into implementation, but that job can now be done much faster. With its links to synthesis and to high-quality checks, the Cadence integrated ESL flow addresses these issues in a much easier way. Since RTL is just an intermediate step on the way to the gate-level netlist, designers are quickly seeing that (as long as it meets spec) there is no need to dwell on the RTL.
Cadence is in the best position to address these issues for the following reasons:
1. The developers of the Cadence C-to-Silicon Compiler recognized this long ago and therefore embedded the logic synthesis engine and accurate technology libraries into the high-level synthesis tool for tight correlation and predictable timing closure.
2. The C-to-Silicon database provides cross-links between all inputs and outputs (ensuring easy visibility so that design changes are always driven top-down).
3. Cadence provides a cross-link between SystemC and RTL with a side-by-side view for both design and verification.
4. Cadence provides Engineering Change Order (ECO) capabilities for both C-to-Silicon and Conformal, so if small changes are required they can be applied incrementally.
5. The C-to-Silicon Compiler methodology separates design constraints from design functionality, allowing the same functional block to be reused in the future with different timing constraints and saving a huge amount of time and resources.
6. Collaboration with Calypto allows customers to run sequential equivalence checking between SystemC and RTL; combined with Cadence Encounter Conformal, this allows customers to compare SystemC to gates.
Feedback from the many companies already taping out designs with C-to-Silicon shows that HLS does indeed offer big productivity advantages, and as they gain experience, it only gets better.
Cadence is committed to continuing to work on this flow and improving it further in the future.
What is the feedback loop from, say, timing tools like? It can already be tricky to relate timing errors back to RTL. How much harder is it to figure out which C/C++/SystemC construct needs to be changed in order to break up a long timing path?
Higher-level programming languages in software, along with abundant CPU cycles and RAM, have virtually eliminated the need for most programmers to ever consider the machine code produced from their high-level code. In most cases it seems RTL designers still have to be very aware of what kind of gates they are inferring. Will ESL designers have to think in terms of what RTL their ESL code is inferring, and then what gates that RTL leads to? In short, does ESL really simplify a designer's life, or does it just add to the complication?