My first comment is that I agree with Jack's conclusions. Because C-to-Silicon
Compiler's high-level synthesis can transform the same design description in different ways for
different applications, SystemC IP is inherently more reusable than RTL IP. I
also agree that SystemC can deliver a significantly higher level of abstraction
than RTL. Sure, it's possible to write a SystemC design description that's
nothing more than RTL in another language, but with training designers can
learn how to write untimed, high-level code that enables a real boost in abstraction.
Given the advantages of SystemC-based design, why is it not yet universally
adopted? I believe that it's valid to draw a comparison with the rise of
RTL-based design in the early 90s. I was a pioneer in that transition, taping
out in 1989 what I believe was only the second chip anywhere using a commercial
logic-synthesis tool. RTL for simulation and modeling had been around for a
number of years previously, but the availability of logic synthesis was the key
driver for RTL replacing gate-level schematics for design input.
There were other enabling technologies, including RTL-to-gates equivalence
checking, RTL-based design rule checkers, and the availability of commercial
RTL design IP. Being able to license proven design IP for a wide array of
standard interfaces was a good reason to move to RTL if not already there. Even
"star IP" providers such as ARM began offering RTL versions of their cores. My
second main point, and the complement to Jack's title, is that the availability
of SystemC design IP will be a strong incentive for designers to move up from RTL.
I say "will be" because I don't see a lot of SystemC design IP out there
yet. I searched the ChipEstimate
site for the keyword "SystemC" and found only a half-dozen listings, several of
which appear to be RTL designs with SystemC simulation models. I have little
doubt that this will change; logic synthesis was around for several years
before the RTL-based IP industry made a significant impact. I expect a similar
"chicken and egg" effect with the adoption of C-to-Silicon
Compiler and the availability of SystemC design IP.
My final topic is what the transition from RTL to SystemC design means for
my world of functional verification. Today, many SystemC designers perform the
bulk of their verification at the RTL level, using the output of high-level
synthesis. Again, there is a clear comparison with the early days of RTL design,
when designers still ran lots of gate-level simulation. This changed over time,
and likewise I expect that verification will become more and more centered on
the SystemC design description.
Cadence has done a lot of work to ensure that this transition will be
painless for our customers, including:
I'll defer to my colleague Jack to forecast the industry's move from RTL to
SystemC design in more detail, but it's clear to me that this is happening and
that it has a lot in common with the gates-to-RTL transition. EDA vendors
worked hard to ensure an easy path for their customers back then, and we're
equally committed to evolving our tools and methodologies today for customer
success. I'd love to hear from you about SystemC-based design. Are you doing
it? Considering it? If not, why not? What can we do to help? I look forward to your comments.
I would like to know more about how the untimed SystemC model will be converted to a netlist. More specifically, where would the HLS tool put flops so that timing is met? How about advanced timing features like pipelining? How about supporting DDR?

Good questions, Sandeep. High-level synthesis tools such as C-to-Silicon Compiler raise the abstraction of your input code, so you don't need to worry about where to insert flops. C-to-Silicon then analyzes all paths to maximize the available time and squeeze as much logic as possible into each path. It performs this analysis using the Cadence RTL Compiler logic synthesis tool under the hood, so it can accurately time each path and ensure that the generated RTL will predictably close timing.

Pipelining is another big advantage of using C-to-Silicon, since you don't need to hardcode any pipeline structures in the input SystemC; just focus on the algorithm and let C-to-Silicon implement it with varying numbers of pipeline stages so you can compare overall area, latency, and throughput. Then, if you want to increase the clock frequency, simply re-run the pipeline command with a different number of stages to meet timing.

Regarding DDR memories: yes, C-to-Silicon supports any memories that memory compilers generate, and it does not care about the internal memory implementation.