

Incorrect parasitic capacitance Calibre-QRC Flow

ARoxas
ARoxas over 3 years ago

Hi, I'm currently doing some testing on running QRC extraction with the Calibre LVS input flow. I followed all the available information on the Quantus (Calibre) flow on support.cadence. I was able to extract the "av_generated" view of the test block, but the extracted parasitic capacitance seems incorrect: there are very few capacitors, and only on certain nodes (see the sample screenshot below). I'm doing a C-only extraction. I also tried RC extraction and changed settings in the form (e.g. decoupling to coupling), but I keep getting only a small number of capacitors in the extracted netlist (16~20).
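A quick way to sanity-check that count is to grep the capacitor instance lines in the extracted netlist. The sketch below assumes a SPICE/DSPF-style netlist where parasitic capacitor instances start with "c" or "C", and the file name is only a placeholder.

    # count parasitic capacitor instances in the extracted netlist (placeholder file name)
    grep -ci '^[[:space:]]*c' test_block.dspf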

The QRC log has warnings that are almost the same as those described in this thread: https://community.cadence.com/cadence_technology_forums/f/custom-ic-design/33781/not-getting-parasitic-information-inside-the-extracted-netlist-created-using-pvs-qrc



Hope to get some ideas on how to approach this issue.
Thanks.

  • drdanmc
    drdanmc over 2 years ago

    As Andrew said, you may wish to contact tech support. But the big picture of how QRC tech files are developed is this:

    1. You start with a physical stackup for your process. This defines the dielectric and conductor layers, the thicknesses, the effects of spacing and width on resistance, etc. This is fed to the techgen program in "-simulation" mode. The simulation runs a field solver on a large set of cases and stores the results. The run time can be quite long. For older processes, say 180nm, this isn't too bad; as you shrink to anything close to a modern process, the input file becomes quite complex as more and more effects become significant.
    2. Next you need to map the layers that are generated and saved during the LVS run to the physical layers in your process, along with any blocking. For example, it might be METAL2 in your LVS layers but M2 in the physical stackup. There are a lot of considerations in getting this right, mostly around what is considered interconnect and what is considered part of a device. Once this is all figured out, you run techgen again, this time in "-compilation" mode, to produce the actual techfile that an end user can use.

    The reason these two steps are broken up is that the -simulation step only cares about the actual process, not the LVS flow, and it may take a lot of compute time to complete. The second step is much more in the domain of the EDA engineer writing the LVS deck. The -compilation step runs much, much faster, so if things change in the LVS deck you just recompile. Or, if you happened to have different layer names in the Calibre LVS output versus the PVS LVS output, you could compile for each without the cost of re-running -simulation.
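    To make the two-step flow concrete, a rough sketch of the sequence is shown below. Only the -simulation and -compilation modes are named above; the bracketed inputs are placeholders rather than the exact techgen command-line syntax, so check the Quantus techgen documentation for the real options.

        # step 1: field-solver characterization of the physical stackup
        # (slow; rerun only when the process description changes)
        techgen -simulation <stackup/process description>

        # step 2: map the LVS-generated layers onto the physical layers and
        # compile the techfile the end user actually loads
        # (fast; rerun whenever the LVS deck or layer mapping changes)
        techgen -compilation <layer mapping + step-1 results>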

    So when you're missing a lot of capacitors and the deck maybe hasn't been proven yet, I suspect an incorrect layer mapping. The other, even simpler check: a foundry will often have an LVS flag/switch that enables LVS for parasitic extraction; it turns on features in the LVS deck that are not needed for LVS itself but are needed to support the later parasitic extraction. So if the flow has been proven for other users but isn't working for some users, that is a big suspect.
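    If you want a quick look at whether such a switch exists, listing the preprocessor switches in the foundry's Calibre rule deck usually surfaces it. The file name below is a placeholder and the switch naming varies by foundry, so treat this as a sketch rather than the exact check.

        # list the preprocessor switches defined or tested in the rule deck (placeholder file name)
        grep -nE '#(DEFINE|IFDEF|IFNDEF)' rules.lvs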

