Modeling Large Memories in SystemC

13 Apr 2012 • 3 minute read

Sometimes Virtual Platforms model systems with large amounts of memory. Many embedded systems have a gigabyte or more of SDRAM. For example, one of the Xilinx Zynq boards, known as ZC702, has a Linux Device Tree source file defining the memory size as 0x40000000, or 1 GB. Thinking about a SystemC model with a memory size of 1 GB is a little troubling, since it immediately triggers thoughts of a simulation footprint larger than 1 GB unless something more complex than a simple array of bytes is used for the memory model.

Considering large memories triggers thoughts of the old days when things like sparse memory models were used in Verilog simulation. I remember back in 1996 or 1997 using something called "damem" that was part of Verilog-XL. It was a set of Verilog system tasks for modeling memory that did not allocate the memory until it was used. This was a convenient way to have large memories when much of the memory was not used in a particular simulation. I checked today and damem is still provided in the current Incisive release. Ordinary Verilog now has (or probably has had for a long time) a sparse pragma to do the same thing:

    reg [31:0] /*sparse*/ mem [0:3000000];

Let's get back to SystemC and TLM-2.0 Virtual Platforms to see how to deal with large memories. The Cadence Virtual System Platform provides an example memory model named simple_memory.h, which is, just like the name says, simple. It's very easy to use for various types of memory in a typical Virtual Platform. The interface looks like this:

    static sc_time LATENCY(10, SC_NS);
    static sc_time DMI_LATENCY(2.5, SC_NS);
 
    simple_memory( sc_module_name name_,
                   unsigned size_,
                   unsigned char init_val = 0x0,
                   bool do_wait_ = 0,
                   bool do_dmi_ = 1,
                   sc_time& rdelay_ = LATENCY,
                   sc_time& wdelay_ = LATENCY,
                   sc_time& dmi_rdelay_ = DMI_LATENCY,
                   sc_time& dmi_wdelay_ = DMI_LATENCY );

Usage in a platform may look like this:

    #define MYBUSWIDTH 32
    bool do_dmi = 1;
    bool do_wait = 0;

    simple_memory<MYBUSWIDTH> *program_memory = new simple_memory<MYBUSWIDTH> ("program_memory", 0x8000000L, 0x00, do_wait, do_dmi);

The memory is implemented as a simple array of bytes:

    m_mem_array = new unsigned char[m_size];

If an initial value is provided, the array is initialized to that value using memset(); if no initial value is provided, the array is initialized to a default value of 0:

    std::memset(m_mem_array, init_val_, m_size);

Using such a memory model for a 1 GB memory will result in a simulation that uses a lot of memory. Below is the screenshot of top when the simple_memory is used with a size of 1 GB in the Zynq Virtual Platform.

Of course, when Linux is running it will not use all of the memory, so ideas of a sparse memory model, or a model that breaks the full memory into chunks or pages and only allocates memory when needed, immediately come to mind. There are other ways to conserve memory as well, such as not allocating a page that is only read before any writes occur, since the default value can always be returned. All of this requires a more complex memory model.
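To make the paged idea concrete, here is a minimal sketch of a lazily allocated backing store. It is purely illustrative; the class name paged_store and its read/write interface are made up for this post and are not part of the Cadence model. Pages are created only on the first write, and reads from untouched pages simply return the initialization value:

    // Hypothetical sketch of a paged backing store, not the Cadence model.
    // A page is allocated only on the first write; reads from pages that
    // were never written return the default initialization value.
    #include <cstring>
    #include <map>

    class paged_store
    {
    public:
        explicit paged_store(unsigned char init_val = 0x0) : m_init_val(init_val) {}

        ~paged_store()
        {
            for (std::map<unsigned long long, unsigned char*>::iterator it = m_pages.begin();
                 it != m_pages.end(); ++it)
                delete [] it->second;
        }

        unsigned char read(unsigned long long addr) const
        {
            std::map<unsigned long long, unsigned char*>::const_iterator it =
                m_pages.find(addr / PAGE_SIZE);
            if (it == m_pages.end())
                return m_init_val;                  // page never written, nothing allocated
            return it->second[addr % PAGE_SIZE];
        }

        void write(unsigned long long addr, unsigned char data)
        {
            unsigned char*& page = m_pages[addr / PAGE_SIZE];
            if (page == 0)                          // first write: allocate and fill the page
            {
                page = new unsigned char[PAGE_SIZE];
                std::memset(page, m_init_val, PAGE_SIZE);
            }
            page[addr % PAGE_SIZE] = data;
        }

    private:
        enum { PAGE_SIZE = 4096 };
        unsigned char m_init_val;
        std::map<unsigned long long, unsigned char*> m_pages;
    };

A b_transport implementation could route reads and writes through a store like this instead of a flat array, trading a map lookup per access for a much smaller footprint.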

There is a quick-and-dirty solution that avoids having to write a more complex memory model. The key is to not initialize the memory at all. In C++ the new operator allocates the memory, but the memory does not become resident until it is actually used. It becomes part of the process's virtual address space, but is not backed by physical memory until it is touched.

This is the beauty of virtual memory and demand paging. By simply skipping the memset(), the amount of memory actually used by the process is much lower, because for many workloads Linux never touches most of that memory. Of course, some applications could use all of the memory, but in that case a sparse memory model won't help either. Below is the screenshot of top without calling memset() for a 1 GB memory model.

Read the descriptions of VIRT and RES in the top man page. I found one page that even stated this difference is a common interview question.
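A quick way to convince yourself of the VIRT versus RES behavior, independent of SystemC, is a throwaway test program like the one below (a hypothetical experiment, just for watching top): it allocates 1 GB with new[], touches it with memset() only when run with an init argument, and then sleeps so the two numbers can be compared.

    // Hypothetical standalone experiment: allocate 1 GB, optionally touch it,
    // then sleep so VIRT and RES can be compared in top.
    #include <cstring>
    #include <iostream>
    #include <unistd.h>

    int main(int argc, char* argv[])
    {
        const unsigned long size = 1024UL * 1024UL * 1024UL;   // 1 GB
        unsigned char* mem = new unsigned char[size];          // VIRT grows by about 1 GB

        if (argc > 1 && std::strcmp(argv[1], "init") == 0)
            std::memset(mem, 0, size);                         // touching every page makes RES grow too

        std::cout << "pid " << getpid() << ": check VIRT and RES in top" << std::endl;
        sleep(60);

        delete [] mem;
        return 0;
    }

Run without the argument, RES stays tiny even though VIRT reports the full gigabyte; run with init, RES jumps as soon as memset() touches every page.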

By adding another constructor parameter that tells the model to skip initialization and bypass memset(), we have a way to use less memory in TLM-2.0 Virtual Platform simulations. I'm sure readers have much more experience with different types of memory models and techniques used to conserve memory, or maybe memory is so cheap that nobody cares about memory size anymore.
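For completeness, here is a sketch of what that extra parameter could look like. The class name lazy_memory and the do_init_ flag are invented for illustration; the member names follow the simple_memory fragments above, but this is not the Cadence source.

    // Minimal sketch, not the Cadence source: a do_init_ flag controls
    // whether the backing array is touched at construction time. When it
    // is false, the allocated pages stay non-resident until first access.
    #include <cstring>

    class lazy_memory
    {
    public:
        lazy_memory(unsigned size_, unsigned char init_val_ = 0x0, bool do_init_ = true)
          : m_size(size_), m_mem_array(new unsigned char[size_])
        {
            if (do_init_)
                std::memset(m_mem_array, init_val_, m_size);   // skipped when do_init_ is false
        }

        ~lazy_memory() { delete [] m_mem_array; }

    private:
        unsigned       m_size;
        unsigned char* m_mem_array;
    };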

Reader thoughts are always welcome.

Jason Andrews
