High-level synthesis (HLS) was originally used by consumer products companies who wanted to get to RTL really, really fast. But now its appeal is broadening, and HLS is serving other important purposes, including architectural exploration, faster verification, and IP reuse.
First, let’s define HLS. Recently I asked Mike (“Mac”) MacNamara, general manager of the Cadence System Software group, what distinguishes HLS. His answer: HLS performs scheduling. RTL synthesis requires explicit scheduling, but a high-level synthesis tool can move operations around. As such, HLS uses “high level” descriptions, usually SystemC and/or ANSI C.
Here are some of the uses that design teams are discovering for HLS:
In a recent Deepchip review of the Cadence C-to-Silicon Compiler, Gernot Koch of Micronas wrote that “we believe SystemC tools like CtoS [C-to-Silicon Compiler] will become mainstream not because the tools are better than RTL designers, but because they allow you to explore more architectural choices in a shorter time than hand-writing RTL.”

ESL “signoff” for companies that outsource IC development

Many systems companies (Nokia is one recent example) have outsourced their IC development to someone else. The RTL design team is gone, but systems companies still need their semiconductor providers to design chips. With HLS, systems companies can do some high-level exploration to determine what will effectively run their applications. They can then convey this information to their chip design suppliers. We may be heading toward an era of ESL signoff, or even “software signoff,” as Paul McLellan described in his April 8 EDA Graffiti blog.
Faster verification and fewer bugs

The Micronas posting was a response to a survey that asked engineers to cite reasons for using high-level synthesis. The number one reason was what you would expect – “faster time to RTL.” The second reason was “faster verification time,” and the fourth reason was “fewer bugs.” So why is faster verification so high on the list?
The reason is that a C language description runs several orders of magnitude faster than RTL. You may have some legacy RTL code to include in the verification, but if most of your modeling is in C, you can run a lot more cycles. And there will probably be fewer bugs because a C program contains far fewer lines of code than RTL.
IP reuse

One choice that was notably missing from the survey mentioned above is IP reuse. The advantage here is fairly obvious. If HLS comes into widespread use, IP can be created and shared between companies at a higher level of abstraction. Protocol IP, for example, can be written in C or SystemC and quickly compiled to RTL when needed. Of course, the interoperability of this high-level IP will become crucial, hence the importance of standards such as the Open SystemC Initiative (OSCI) TLM-2.
Implementing algorithms in FPGAs

FPGAs have attracted the attention of software developers who are looking for a way to run algorithms quickly. The trouble is, these software developers are not RTL designers and never will be. C language synthesis is a natural way to implement algorithms effectively in FPGAs. For this reason, HLS is no longer aimed solely at ASICs. The C-to-Silicon Compiler, for example, has added support for Altera and Xilinx FPGAs.
Future direction: Low-power design

Much has been written about the need to bring low-power IC design to a higher level of abstraction, and power-aware HLS can provide a way to do that by offering very early power estimates. In the following video clip, Mac describes how C-to-Silicon Compiler tracks power information:
HLS is not for everybody and everything. But advocates believe that most digital designs will ultimately be done with HLS. In this scenario, HDLs will become the assembly languages of the future, and the IC design flow and the EDA industry will be very different from what we see today.
The following three things are different from the days of Behavioral Compiler:
1) Language -- previous behavioral synthesis products used Verilog (or VHDL). Algorithms start out in C or Matlab, and translating to Verilog or VHDL immediately loses one of the primary benefits of high-level synthesis. However, as lots of people have demonstrated, you can't use C directly and get decent results -- you have to add information. SystemC is an acceptably short step from plain old C to something from which a good implementation can be generated.
2) Verification -- the original behavioral synthesis products required that a testbench used for the high-level code be modified to accommodate the generated RTL, due to timing-dependent interfaces. With SystemC, you can write interface code that is cycle-accurate along with the untimed code that is typically the meat of the algorithm. If the synthesis tool is designed properly, then you can use the same testbench with no changes for both the high-level code and the low-level generated code. Without this capability (again something plain old C lacks), you have a very difficult verification environment, impairing another of the big benefits of high-level synthesis.
3) Quality of Results -- the original behavioral synthesis products weren't very good at producing RTL that met timing requirements. Since you can't easily modify the generated RTL, this was a real show-stopper: there wasn't much you could do if you had a timing problem. For at least the HLS product that I know about, Cynthesizer, this problem has been solved. You can always produce RTL that will meet timing. You may not be able to meet the latency constraints, but you will know that up front, when you can still do something about it. I will note that it took a fair amount of product maturation time to be able to make this claim.
When we started work on Cynthesizer 10 years(!) ago, we identified these three problems that needed to be solved in order for HLS to be broadly successful. In that time, these three issues have remained the key to the general usability of an HLS product. As a result of addressing these three issues, Cynthesizer has been used to produce chips composed of millions of gates in many shipping products in a wide range of applications and performance targets. That couldn't be said about the BC-era behavioral synthesis products.
Harry -- I wrote about the first generation of HLS (“behavioral”) synthesis tools back in the 1990s, and “what’s different now” is a question I’ve asked myself. Here are several quick perspectives. 1) We have a new generation of HLS tools based on C/SystemC, not VHDL or Verilog, and user reports such as the one cited in this blog suggest good quality of results. 2) Due to design complexity, there is a much more compelling need to move to a higher abstraction level in 2009 than there was 10-15 years ago. 3) As companies reduce or eliminate their own IC implementation teams, signoff will have to rise to higher levels, opening a potential new role for HLS.
I agree completely that more differentiation is going into embedded software, but there will still be many applications that require some custom chip design. Going forward, many will be started with some form of HLS. The availability of HLS might even convince some design teams who would otherwise go “all software” to try an FPGA or ASIC.
I don't understand what is new here. I was one of the first AEs at Synopsys supporting Behavioral Compiler over 10 years ago, and the capabilities and messaging sound the same:
- explore more architectures
- faster simulation time at the behavioral level
- system engineers doing design
- FPGAs for quick prototypes
So, what has changed in 10 years that makes this such a "game changer" today? In fact, I think it's more likely that designers will just compile their C code to run on an embedded processor or GPU today rather than implement some custom hardware. And if they do have custom hardware, they are more likely to use something like Tensilica's Xtensa cores to optimize for speed and still maintain software reprogrammability.
Please enlighten me.