

Ocean distributed processing memory usage

TjaartOpperman over 14 years ago
I am using IC6.1.4.500.1 and Spectre 7.1.0.031. From an OCEAN script, I use the paramRun function to do distributed processing on 4 machines in blocking mode, submitting 16 jobs at a time. When they complete, the script analyses the waveforms in a for loop and dumps the results into a text file: it reads the data with the selectResults function, extracts the waveforms with famValue, and writes only around 100 bytes or so per job. The loop then changes a few variables with desVar and paramAnalysis statements and submits the next batch of 16 jobs, again in blocking mode.

The spectre.out file shows that each job uses 14.7 MB of virtual memory. After a total of 273 jobs have been submitted, the host machine runs out of memory in 32-bit mode (273 × 14.7 MB ≈ 4013 MB). I was expecting the script to use a maximum of 16 × 14.7 MB of virtual memory at a time, since the variables used to store the waveforms are overwritten. Is there perhaps a way to free the simulation results from virtual memory before submitting a new batch of distributed jobs? I am aware that 64-bit mode would provide access to more memory, but in the long run such a solution will not work for me.
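For readers unfamiliar with the flow being described, a minimal OCEAN sketch of such a loop might look like the following. This is an illustration, not the poster's actual script: the netlist path, variable names, sweep ranges, and output expression are all invented, and the distributed-host setup (hostMode and job submission options) is omitted for brevity.

```skill
;; Hedged sketch of the batch loop described above (all names illustrative).
simulator( 'spectre )
design( "./netlist/netlist" )
resultsDir( "./results" )

out = outfile( "./summary.txt" "w" )
foreach( vddVal list( 1.0 1.1 1.2 )            ; outer loop changing a design variable
  desVar( "vdd" vddVal )
  paramAnalysis( "temp" ?start 0 ?stop 75 ?step 5 )  ; 16 parametric points
  paramRun()                                    ; blocking: returns when all jobs finish
  selectResults( 'tran )
  wave = getData( "/out" )                      ; a family of 16 waveforms
  foreach( tempVal list( 0 25 50 75 )           ; illustrative member values
    w = famValue( wave tempVal )                ; extract one member of the family
    fprintf( out "vdd=%g temp=%g ymax=%g\n" vddVal tempVal ymax( w ) )
  )
)
close( out )
```

On each iteration, `wave` and `w` are overwritten, which is why one would expect the waveform memory from earlier batches to become reclaimable.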
  • Andrew Beckett over 14 years ago

    It's unlikely that the amount of memory reported by spectre in the spectre.out is relevant here. What's likely to be the limiting factor is the memory taken by the waveforms that are being read into memory after each simulation and are used to create your "100 byte" summary of the run. 

    Most likely the garbage collection of the waveform point data is not being triggered, and hence the memory used by the waveforms is not being reused until too late. This happens because what is monitored is the waveform and vector objects, not the point data itself; whilst the memory for the points themselves is reclaimed, reclamation is only triggered once all the available waveform/vector slots have been used up. This has been significantly improved in IC615 - so if you can use IC615, that would be a good solution.

    Unfortunately there's no easy (general purpose) solution in IC614.

    Regards,

    Andrew.

  • TjaartOpperman over 13 years ago
    Andrew,

    As a workaround in IC614, my script now builds another SKILL script with all the variables set up for a distributed parametric run, and then executes that script from the Virtuoso session by spawning another OCEAN session in the shell using the sh() command. The memory gets freed this way, but designing the script in this way is a very tedious process. The script also looks quite clumsy and is difficult to read, but it works.

    Regards,
    Tjaart
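    The workaround described above could be sketched roughly as follows. This is an assumption of what such a wrapper might look like, not the poster's code: the generated file name, netlist path, variable names, and the exact ocean invocation are all illustrative, and the post-processing step is left as a placeholder comment.

    ```skill
    ;; Hedged sketch of the child-process workaround (illustrative names).
    ;; Write one batch of the sweep into a standalone OCEAN script, then run
    ;; it in a fresh ocean process; the waveform memory allocated by that
    ;; batch is released to the OS when the child process exits.
    procedure( runBatchInChild( vddVal )
      let( ( p )
        p = outfile( "./batch.ocn" "w" )
        fprintf( p "simulator( 'spectre )\n" )
        fprintf( p "design( \"./netlist/netlist\" )\n" )
        fprintf( p "desVar( \"vdd\" %g )\n" vddVal )
        fprintf( p "paramAnalysis( \"temp\" ?start 0 ?stop 75 ?step 5 )\n" )
        fprintf( p "paramRun()\n" )
        fprintf( p ";; ... selectResults/famValue post-processing, append to summary.txt ...\n" )
        close( p )
        ;; Blocks until the child OCEAN session finishes this batch.
        sh( "ocean < ./batch.ocn" )
      )
    )
    ```

    The cost of this design, as noted above, is readability: the parent script manipulates the child script as text, so every change to the sweep has to be made inside fprintf format strings.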
