New technologies are of no value unless there's a coherent, workable methodology
that supports them. SystemC transaction-level modeling (TLM) has lacked a
methodology that goes all the way to silicon without major gaps. Independent
verification consultant Brian Bailey filled in some of those gaps at SystemC Day at the
DVCon conference Feb. 22.
Brian spoke during the
first half of SystemC Day, which served as the 12th North American SystemC User Group (NASCUG)
meeting. Other morning talks included an overview by Open SystemC Initiative (OSCI) chair Eric Lish and a keynote by
analyst Gary Smith. The second half of SystemC Day included a DVCon tutorial
on the proposed OSCI SystemC synthesis subset, taught by Michael
McNamara of Cadence, Michael
Meredith of Forte, and John Aynsley of Doulos.
As he began his talk,
Brian noted that his presentation is based on consulting work he's recently
done with Cadence. It's a "work in progress," he said, but he outlined some
major characteristics of a TLM-driven design and verification methodology,
including the points below.
Design and verification must work in tandem
traditional "V" shaped flow in which design and verification proceed
separately, and come together only at the end. Verification needs to occur
every time a transformation is made in a model. "We must verify as we make
decisions rather than leaving them until some point much later in the flow,"
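To make that concrete, here is a minimal sketch (mine, not Brian's) of what verifying after each transformation can look like: the same stimulus is run through the pre-transformation reference model and the refined model, and the results are compared before moving on. The function names and the trivial "transformation" are hypothetical.

#include <cassert>
#include <cstdlib>
#include <vector>

// Reference model before the transformation.
int ref_scale(int x) { return (x * 3) / 4; }

// Refined model after a transformation (here, a strength reduction);
// it must remain functionally equivalent to the reference.
int refined_scale(int x) { return (x * 3) >> 2; }

int main() {
    // Random stimulus, reused unchanged at every refinement step.
    std::vector<int> stimulus;
    for (int i = 0; i < 1000; ++i) stimulus.push_back(std::rand() % 4096);

    // Verify the decision now, not "much later in the flow."
    for (int x : stimulus)
        assert(ref_scale(x) == refined_scale(x));
    return 0;
}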
Separation of concerns simplifies modeling and verification
Communications and computation should be treated as independent concerns. This way, both
communications and computation blocks can be reused without being dependent on
one another. If you take computation blocks and connect them together without
communications, you create a protocol-agnostic virtual platform. "You want a
methodology that allows you to pick any two things and put them together in a
way that doesn't require re-verification," Brian said.
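As a small illustration of that separation, here's a SystemC sketch (my own, with hypothetical module and function names): the computation is a plain function that knows nothing about how data arrives, while the communications side is an ordinary channel that could later be swapped for a bus protocol without touching, or re-verifying, the computation.

#include <systemc.h>

SC_MODULE(Scaler) {
    sc_fifo_in<int>  data_in;    // communications: a replaceable channel
    sc_fifo_out<int> data_out;

    // Pure computation, independent of any protocol.
    static int compute(int x) { return 2 * x + 1; }

    void run() {
        for (;;)
            data_out.write(compute(data_in.read()));
    }

    SC_CTOR(Scaler) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    sc_fifo<int> in_ch(4), out_ch(4);
    Scaler s("s");
    s.data_in(in_ch);
    s.data_out(out_ch);
    in_ch.write(20);             // stimulus queued before simulation starts
    sc_start(1, SC_NS);
    int result = 0;
    sc_assert(out_ch.nb_read(result) && result == 41);
    return 0;
}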
Function and architecture should be treated separately as well. "We don't want to
pollute our functional description with how we're going to implement it."
Working with multiple abstraction levels
TLM-driven design and
verification will occur in a "multi-abstraction" environment. The best approach
is to start from the top with algorithm design and verification, go through the
loop to complete that process, and move down to architectural verification,
while reusing as much from the algorithmic level as possible. The next step is
to move from architectural to micro-architectural verification, again reusing
as much as possible.
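One way to realize that reuse, sketched below with hypothetical names, is to keep the algorithmic model as the single golden reference and run each lower-level model against the same checker as the flow steps down:

#include <cassert>
#include <cstdlib>

// Golden algorithmic model, written once at the top of the flow.
int alg_model(int x) { return x * x; }

// Reusable check: compare any lower-level model against the algorithm.
template <typename Model>
void verify_against_algorithm(Model dut, int trials = 1000) {
    for (int i = 0; i < trials; ++i) {
        int x = std::rand() % 256;
        assert(dut(x) == alg_model(x));
    }
}

// Stand-ins for the refined models; architectural and micro-architectural
// detail is omitted here, since only functional equivalence is checked.
int arch_model(int x)  { return x * x; }
int uarch_model(int x) { int acc = 0; for (int i = 0; i < x; ++i) acc += x; return acc; }

int main() {
    verify_against_algorithm(arch_model);   // algorithm -> architecture
    verify_against_algorithm(uarch_model);  // architecture -> micro-architecture
    return 0;
}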
Adapters and interfaces
The design process
will likely start with C/C++ algorithmic models. To move to architectural
models that can be used in virtual platforms, it will be necessary to define
hardware/software boundaries, registers, and concurrency. This is where SystemC
comes in. As model transformations proceed, there will be a need for "adapters,"
which are simple routines that handle such concerns as rate matching and buffering.
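Here's a minimal SystemC sketch of an adapter in that spirit (my illustration, not code from the talk): it buffers four narrow transfers and emits one wide one, matching a 4:1 rate difference between two models.

#include <systemc.h>

SC_MODULE(WidthAdapter) {
    sc_fifo_in<sc_uint<8> >   narrow_in;   // producer side: one byte at a time
    sc_fifo_out<sc_uint<32> > wide_out;    // consumer side: one word at a time

    void run() {
        for (;;) {
            sc_uint<32> word = 0;
            // Buffer four narrow transfers, then emit one wide transfer.
            for (int i = 0; i < 4; ++i)
                word = (word << 8) | narrow_in.read();
            wide_out.write(word);
        }
    }

    SC_CTOR(WidthAdapter) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    sc_fifo<sc_uint<8> >  narrow_ch(8);
    sc_fifo<sc_uint<32> > wide_ch(2);
    WidthAdapter adapter("adapter");
    adapter.narrow_in(narrow_ch);
    adapter.wide_out(wide_ch);
    for (int i = 1; i <= 4; ++i) narrow_ch.write(i);
    sc_start(1, SC_NS);
    sc_uint<32> word;
    sc_assert(wide_ch.nb_read(word) && word == 0x01020304);
    return 0;
}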
The flow that Brian
outlined relies heavily on high-level synthesis. Since the OSCI TLM 1.0 and 2.0
standards are not synthesizable, the flow uses an interface based on OSCI TLM
1.0 that does not strictly adhere to the standard. For example, it leaves out
simulation features that are not synthesizable, and adds missing features
needed for synthesis such as reset. It also brings in "generic payload"
capabilities from OSCI TLM 2.0. (A blog
I wrote last year talks more about TLM 1.0 versus 2.0).
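Based on that description, and only as a rough guess at the shape of such an interface (this is my sketch, not the actual Cadence or OSCI definition), a synthesis-friendly, TLM-1.0-flavored interface might look something like this:

#include <systemc.h>

// A greatly simplified "generic payload," borrowed in spirit from TLM 2.0.
struct payload {
    sc_uint<32> address;
    sc_uint<32> data;
    bool        is_write;
};

// TLM-1.0-style blocking put/get, narrowed to a synthesizable core and
// extended with the reset that TLM 1.0 lacks.
class syn_put_if : virtual public sc_interface {
public:
    virtual void put(const payload& p) = 0;  // blocking transport
    virtual void reset() = 0;                // added for synthesis
};

class syn_get_if : virtual public sc_interface {
public:
    virtual void get(payload& p) = 0;
    virtual void reset() = 0;
};

int sc_main(int, char*[]) { return 0; }  // interface sketch only; nothing to simulate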
What about third-party TLM IP?
Actually, this wasn't
part of Brian's presentation, although in response to a question he said that
defining a synthesizable SystemC subset will help enable the IP industry. It
seems to me that the availability of TLM IP is the next big question, but
that's a topic for another blog.
Presentations from the NASCUG meeting will be available at the NASCUG
web site. Meanwhile, for a Cadence perspective on SystemC, see the video
interview with Steve Svoboda in the SystemC
Day blog by Joe Hupcey III.