Sometimes Virtual Platforms model systems with large amounts of memory; many embedded systems have a gigabyte or more of SDRAM. For example, one of the Xilinx Zynq boards, the ZC702, has a Linux Device Tree source file defining the memory size as 0x40000000 bytes, or 1 GB. Thinking about a SystemC model with a memory size of 1 GB is a little troubling, since it immediately suggests a simulation footprint larger than 1 GB, unless something more complex than a simple array of bytes is used for the memory model.
Considering large memories triggers thoughts of the old days, when things like sparse memory models were used in Verilog simulation. I remember back in 1996 or 1997 using something called "damem" that was part of Verilog-XL. It was a set of Verilog system tasks for modeling memory that did not allocate the memory until it was used. This was a convenient way to have large memories when much of the memory was not touched in a particular simulation. I checked today, and damem is still provided in the current Incisive release. Ordinary Verilog now has (and probably has had for a long time) a sparse pragma to do the same thing:
reg [31:0] /*sparse*/ mem [0:3000000];
Let's get back to SystemC and TLM-2.0 Virtual Platforms to see how to deal with large memories. The Cadence Virtual System Platform provides an example memory model named simple_memory.h, which is just what the name says: simple. It's very easy to use for various types of memory in a typical Virtual Platform.
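The interface listing from the original post did not survive here; as a rough, plain-C++ sketch of what such a model might look like (the class layout and parameter names are assumptions, inferred only from the constructor call and allocation code shown below):

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical sketch of a simple_memory-style model. The constructor
// signature mirrors the usage example: name, size, initial value, and
// two behavioral flags (do_wait, do_dmi are kept only as stored flags here).
template <unsigned BUSWIDTH>
class simple_memory {
public:
    simple_memory(const std::string& name, uint64_t size,
                  unsigned char init_val, bool do_wait, bool do_dmi)
        : m_name(name), m_size(size), m_do_wait(do_wait), m_do_dmi(do_dmi) {
        // The memory is a flat array of bytes...
        m_mem_array = new unsigned char[m_size];
        // ...initialized with memset(), which touches every page.
        std::memset(m_mem_array, init_val, m_size);
    }
    ~simple_memory() { delete[] m_mem_array; }

    unsigned char read(uint64_t addr) const { return m_mem_array[addr]; }
    void write(uint64_t addr, unsigned char data) { m_mem_array[addr] = data; }

private:
    std::string m_name;
    uint64_t m_size;
    bool m_do_wait, m_do_dmi;
    unsigned char* m_mem_array;
};
```

In the real model the read/write paths would sit behind TLM-2.0 blocking transport and DMI callbacks; the sketch keeps only the storage behavior that matters for this discussion.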
Usage in a platform may look like this:
simple_memory<MYBUSWIDTH> *program_memory = new simple_memory<MYBUSWIDTH> ("program_memory", 0x8000000L, 0x00, do_wait, do_dmi);
The memory is implemented as a simple array of bytes:
m_mem_array = new unsigned char[m_size];
If an initial value is provided, the array is initialized to that value using memset(); if no initial value is provided, the array is initialized to the default value of 0:
std::memset(m_mem_array, init_val_, m_size);
Using such a memory model for a 1 GB memory results in a simulation that uses a lot of memory. Below is a screenshot of top when simple_memory is used with a size of 1 GB in the Zynq Virtual Platform.
Of course, when Linux is running it will not use all of the memory -- so ideas of a sparse memory model, or a model that breaks the full memory into chunks or pages and only allocates them when needed, immediately come to mind. There are other ways to conserve memory, such as not allocating a page that is read before any writes occur, since the default value can always be returned. All of this requires a more complex memory model.
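As a sketch of the page-based idea, here is a minimal plain-C++ sparse memory (hypothetical, not the Cadence model) that allocates 4 KB pages lazily on first write and returns the default value for untouched addresses:

```cpp
#include <cstdint>
#include <cstring>
#include <memory>
#include <unordered_map>

// Hypothetical page-based sparse memory: only pages that have been
// written to consume host memory; reads of untouched pages return the
// default value without allocating anything.
class sparse_memory {
public:
    explicit sparse_memory(uint64_t size, unsigned char default_val = 0)
        : m_size(size), m_default(default_val) {}

    unsigned char read(uint64_t addr) const {
        if (addr >= m_size) return m_default;       // out of range: default
        auto it = m_pages.find(addr / kPageSize);
        if (it == m_pages.end()) return m_default;  // never written: no page
        return it->second[addr % kPageSize];
    }

    void write(uint64_t addr, unsigned char data) {
        if (addr >= m_size) return;
        auto& page = m_pages[addr / kPageSize];
        if (!page) {
            // First write to this page: allocate and fill with the default.
            page.reset(new unsigned char[kPageSize]);
            std::memset(page.get(), m_default, kPageSize);
        }
        page[addr % kPageSize] = data;
    }

    std::size_t allocated_pages() const { return m_pages.size(); }

private:
    static const uint64_t kPageSize = 4096;
    uint64_t m_size;
    unsigned char m_default;
    std::unordered_map<uint64_t, std::unique_ptr<unsigned char[]>> m_pages;
};
```

A 1 GB sparse_memory costs almost nothing until the simulated software actually writes to it; the host footprint grows in proportion to the pages touched, not the modeled size.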
There is a quick-and-dirty solution that avoids having to write a more complex memory model. The key is to not initialize the memory at all. In C++ the new operator allocates the memory, but the memory does not become resident until it is actually used. It becomes part of the process's virtual memory, but is not mapped into physical memory until touched.
This is the beauty of virtual memory and demand paging. By simply skipping the memset(), the amount of memory actually used by the process is much lower, because the software running on the Virtual Platform touches only a fraction of the modeled memory in many workloads. Of course, some applications could use all of the memory, but in that case a sparse memory model won't help either. Below is the screenshot of top without calling memset() for a 1 GB memory model.
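A related trick on Linux/POSIX hosts is to back the model with an anonymous mmap mapping instead of new[]: the mapping reserves only virtual address space, untouched pages read back as zero without any memset, and physical pages become resident only on first touch. A minimal sketch (the function name is mine, not from any library):

```cpp
#include <sys/mman.h>
#include <cstddef>

// Reserve `size` bytes of zero-filled, demand-paged memory. No physical
// page is allocated until a byte in it is first touched, and reading an
// untouched byte is guaranteed to return 0 -- giving memset-free default
// initialization. Returns nullptr on failure.
inline unsigned char* map_lazy_memory(std::size_t size) {
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? nullptr : static_cast<unsigned char*>(p);
}
```

Compared with an uninitialized new[] array, this keeps the deterministic default value of 0 (reads of unwritten locations are well defined), while still showing a small RES in top until the memory is actually used. Release the mapping with munmap() instead of delete[].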
Read the descriptions of VIRT and RES on the top man page. I found one page that even stated this difference is a common interview question.
By adding another constructor parameter that tells the model to skip the initial value and bypass memset(), we have seen a way to use less memory in TLM-2.0 Virtual Platform simulations. I'm sure readers have much more experience with different types of memory models and techniques for conserving memory, or maybe memory is so cheap that nobody cares about memory size anymore.
Reader thoughts are always welcome.
Hi Jason,
Thanks for the reply.
My requirement is to model really large memories of several GBs.
Since this article is titled "Modeling Large Memories in SystemC," I thought it would be useful.
But then again, the term 'large' is relative...
Also, I am on a corporate network, so I have no control over the amount of memory installed on the servers or the amount of swap space allocated.
So I am back to my old techniques ;-)
Anup, you are probably running out of virtual memory. Use the Linux free command to see how much memory and swap you have, and compare that to the size of the memory you are trying to allocate. If your swap space is small, you will have trouble. My example was for 1 GB of modeled memory, and it will work on a machine with as little as 2 GB of RAM and 2 GB of swap space. I think most machines should have 4 or 8 GB of RAM and an equal amount of swap space.
I tried to use this technique.
I get a std::bad_alloc exception as soon as I try to allocate the memory as follows:
m_mem_array = new unsigned char[m_size];
The memory size I am asking the OS to allocate is too large, so the OS throws the exception.
I wonder how it worked for the author; maybe it only works for small amounts of memory.
Good article. The memset trick was an interesting one. No one would think about the host's MMU and memory-management techniques while implementing the model; invariably, memset creeps in to ensure determinism. However, one could always return a fixed pattern for memory that has been allocated but not yet written to. Thanks for the insight.