Today's post is less technical and a bit more theoretical, but I promise that my next post will be more hands-on.
As somebody working on virtual platforms at an EDA company, I regularly spend time talking to firmware and embedded software engineers from many different backgrounds. Every so often one of them asks, "Why SystemC?" Some software engineers look at SystemC and decide that it looks like a real mess. They mention things like:

* SystemC has complex classes built with C++
* It uses strange macros like SC_MODULE, SC_METHOD, and SC_HAS_PROCESS
They ask, "What is wrong with plain C++, or even plain old C?" Most of the time they ask these questions because they don't understand how everything fits together.
Even though I know how to answer all of these questions, in the back of my mind I wonder whether SystemC will ever become so mainstream for modeling hardware to run software that people stop asking them. Today, QEMU is the most popular solution for simulating hardware and running software. It is used by many Software Development Kits (SDKs), such as those for Android, MeeGo, and Linaro. QEMU uses nothing more than C to model hardware. It even compiles the hardware models for every virtual platform it supports into a single executable, so users can choose which hardware to simulate at runtime.
Modeling SoCs with SystemC
Just as I was pondering what it will take for SystemC to become more widely used, the latest issue of Embedded Systems Design magazine arrived. Even though the magazine is thinner than it once was, it's still one of the most popular for embedded systems and software engineers. The cover headline is, "Modeling an SoC with SystemC".
Although the article is primarily about using SystemC for architectural analysis, not for executing software, it does highlight the challenge that OMAP customers develop many different types of software, and that standards are needed for TI to work efficiently with its partners. The article also notes that the virtual platform for software development was not created with SystemC; the OMAP 2430 was popular in 2005, before SystemC TLM 2.0 was available and before SystemC became popular for virtual platform development.
After reading the cover article I turned to the first column on the inside cover, titled "C to Silicon, Really?". At Cadence, I work very closely with the team that develops a product with this exact name, C-to-Silicon Compiler. Although we're not at the point where embedded software engineers magically turn software into hardware, using SystemC to model hardware and synthesize it into RTL is growing rapidly. In this domain the leading products are all based on SystemC: those that didn't start with SystemC are moving fast to add it, and those that aren't adding it may be fading away. Refer to Jack Erickson's article on SystemC and synthesis for more details.
Although the architectural analysis and high-level synthesis use cases differ from the virtual platform use case in terms of goals and the types of models required, it's clear that standards are very important and that SystemC simulation is a foundational technology; important enough to be covered in Embedded Systems Design magazine. There is also good methodology work going on to help everybody understand how to get the most code reuse when targeting multiple use cases.
Standards are Key
For all three applications, using an open, standards-based approach is key. EDA history has shown standards to be the best way to promote increased usage and thriving competition. If engineers spend time creating models, they want to use these models with different tools and in different ways. Time after time I continue to sense frustration with closed tools, black box models, and virtual platforms that cannot be easily reconfigured and changed.
Using a standard like SystemC, as opposed to plain C++ or C, for simulation enables the creation of many tools that work automatically from the SystemC source code. Last year I wrote about SystemC debugging. Additional features are easy to provide if the input model uses the SystemC and TLM standards: automatic transaction recording and logging, breakpoints based on TLM 2 generic payload values, the ability to probe signals such as the CPU interrupt, the ability to force signals at runtime, and the ability to automatically provide a programmer's view of peripherals and memory. Providing robust debugging and analysis features for hardware models written in ad hoc C or C++ is difficult, and in many cases impossible.
I predict the process of creating virtual platforms will continue to move closer to the actual system design process, and will start using simulation infrastructure that serves multiple use cases such as the three mentioned here. Companies are no longer investing in stand-alone efforts to create virtual platforms; they are insisting that a single environment serve multiple purposes, including the ability to mix and match models at any abstraction level (including RTL) and to connect those models to other engines such as emulators and FPGA prototypes.
Although SystemC standards for virtual platforms still have room for improvement, I believe this will occur naturally as system companies start to demand flexible virtual platforms that let them get models from different sources and add, subtract, and modify models to match the system they are building. Gone will be the days when a fixed reference board or a fixed virtual platform is enough to pass to a systems company at the start of a new project.
Although Cadence has been working in SystemC simulation and synthesis for more than 10 years and has invested hundreds of person years in development, I believe the best is yet to come.
I don't think SystemC is a mess; I personally believe that SystemC lacks a clear, well-defined, shared methodology, which makes it quite difficult for newcomers ... and more than this, the incredible lack of modeling engineers (SW guys with HW/system sensibility) is a really important issue that must be rapidly solved to make ESL a success ...
SystemC is a mess mostly for the hardware engineers. In my 3+ years of experience with SystemC, the main challenge was the C++ nature of the platform for RTL developers.
There have been multiple attempts by EDA vendors to help bridge the gap by creating UML-to-SystemC translators. Another huge challenge of modeling in general is showing positive ROI at the beginning of the process. Next in line is the challenge of obtaining and sharing models, but this is semi-successfully solved by TLM 2.0.
SystemC is a mess. It uses bad abstractions and semantics on top of a shared-memory paradigm that makes it difficult to parallelize, and a threading model that stops it from modeling things in detail. So it's not good for software engineers, or for low-level hardware verification (gates, power management, analog/RF).
Here's a better approach -
- compatible with SystemC, but with much broader reach. Open source so anyone can use it.
Thanks for commenting. There are many instances of inferior technology gaining widespread adoption because of standardization and interoperability. My observation is that many companies are not interested in adopting stand-alone tools or languages that are not based on open standards, are not available from multiple vendors, and don't connect to anything else they do in the system design process (even if the standards have warts).
This means it would be very hard to replace SystemC, even if you have something better.
* SystemC has complex classes built with C++
That's because SystemC is not well designed. A good design makes complex things simple; a bad design makes simple things complex.
* It uses strange macros like SC_MODULE, SC_METHOD, and SC_HAS_PROCESS
These also demonstrate the design problems with SystemC. There is absolutely no need for those ugly macros, and no surprise that GBL has none of them.