
OverUnder on large polygons

lkphoto over 11 years ago
Hi Andrew,

I have problems performing the “OverUnder” operation (first sizing with dbLayerSize by a positive amount, then by twice that amount in the negative direction, then by the positive amount again), because polygons in SKILL cannot have more than 4000 points. The purpose of the OverUnder operation is to remove acute angles, sub-minimum spacings, and sub-minimum features during data preparation; these would otherwise create DRC errors.
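In SKILL terms, one pass is roughly the following sketch. It assumes dbLayerSize(cv lpp shapes amount) returns the list of shapes it creates, with cv the open cellview, lpp the layer-purpose pair, and d the sizing amount in user units:

procedure( OverUnder(cv lpp shapes d)
  let( (down grown shrunk result)
    down = -2 * d
    grown = dbLayerSize(cv lpp shapes d)     ; size up: closes sub-minimum gaps and notches
    shrunk = dbLayerSize(cv lpp grown down)  ; size down past nominal: drops slivers, blunts acute angles
    foreach(s grown dbDeleteObject(s))       ; delete the intermediate shapes
    result = dbLayerSize(cv lpp shrunk d)    ; size back up to nominal dimensions
    foreach(s shrunk dbDeleteObject(s))
    result
  )
)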

I am designing photonic components, so the typical size of my cells is 10um x 10um or larger; moreover, I have to use orthogonal polygons, and the curves are snapped to a 1nm grid.

I have a routine that writes such large polygons as a union of subpolygons, each having fewer than 4000 points.

When I apply the dbLayerSize function to all the subpolygons at once, Cadence crashes. I assume this is because all the subpolygons together have more than 4000 points.

I therefore ended up applying the dbLayerSize operation to single pairs of neighboring subpolygons, one pair at a time. The problem is that the result of the OverUnder operation then depends on how the big original shape is split into subpolygons. This is clearly not a clean solution, so I discarded it. It is also quite slow (a few minutes for a single layer).

As a second attempt, I tiled the big polygon (the union of the many subpolygons) into square tiles (using dbLayerAndNot for the clipping) and applied OverUnder to each tile separately. Once this was done, I repeated the operation on the dual square lattice obtained by translating the tiles by half the tile pitch in both x and y. The result is predictable (as long as the tile size is large enough), but the algorithm is extremely slow because of the dbLayerAndNot operation.
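A simplified sketch of this tiling pass, reusing the OverUnder routine sketched above. For brevity it clips with dbLayerAnd against explicit tile rectangles instead of my dbLayerAndNot bookkeeping, and it expects the overall bounding box as list(list(x0 y0) list(x1 y1)); the second, half-pitch-shifted pass is the same call with the bounding box offset by tile/2 in x and y:

procedure( OverUnderTiled(cv lpp shapes d tile bBox)
  let( (x0 y0 x1 y1 nx ny rect clip cleaned result)
    x0 = caar(bBox)
    y0 = cadar(bBox)
    x1 = caadr(bBox)
    y1 = cadadr(bBox)
    nx = ceiling((x1 - x0) / tile)   ; number of tile columns
    ny = ceiling((y1 - y0) / tile)   ; number of tile rows
    result = nil
    for(i 0 nx-1
      for(j 0 ny-1
        ; clip the input to one square tile, clean the clip, collect the result
        rect = dbCreateRect(cv lpp
          list(list(x0+i*tile y0+j*tile) list(x0+(i+1)*tile y0+(j+1)*tile)))
        clip = dbLayerAnd(cv lpp shapes list(rect))
        dbDeleteObject(rect)
        when(clip
          cleaned = OverUnder(cv lpp clip d)
          foreach(s clip dbDeleteObject(s))
          result = append(cleaned result)
        )
      )
    )
    result
  )
)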

I wonder if there is a known solution to this problem.

For example, is there a way to get rid of the 4000 vertices per polygon limitation?

It would be great if I didn't have to go outside the SKILL environment (e.g. by using Calibre), since these pCells go through a number of other post-processing steps in SKILL.

Thanks a lot in advance,
  • tweeks over 11 years ago

    lkphoto said:
    Hi Andrew,

    Gee, can I reply too? :P  Maybe we should rename this the "Ask Andrew Forum".

    I'm going to assume that, if your message was really intended only for Andrew, you would have contacted him by email or the Cadence community private message feature.

    lkphoto said:

    When I apply the dbLayerSize function to all the subpolygons at once, Cadence crashes. I assume this is because all the subpolygons together have more than 4000 points.

    Please report this to your AE so that the bug gets fixed.

    lkphoto said:

    For example, is there a way to get rid of the 4000 vertices per polygon limitation?

    Have you tried using point arrays?

    We have:

    dbCompressPointArray
    dbPointArrayAnd
    dbPointArrayAndNot
    dbPointArrayOr
    dbPointArraySize
    dbPointArrayXor
    dbTransformPointArray
    geCompressPointArray
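
    For instance, the over/under step could be done purely on point lists, never creating database shapes. A minimal sketch, assuming dbPointArraySize takes a list of point lists plus a sizing amount and returns the sized point lists (do check the exact signatures in the SKILL reference):

    procedure( OverUnderPts(ptLists d)
      let( (down grown shrunk)
        down = -2 * d
        grown = dbPointArraySize(ptLists d)    ; size up, entirely in memory
        shrunk = dbPointArraySize(grown down)  ; size down past nominal
        dbPointArraySize(shrunk d)             ; size back up; still just point lists
      )
    )

    You could then re-split the result under the 4000-point limit and write it back with dbCreatePolygon.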
    

    lkphoto said:

    It would be great if I didn't have to go outside the SKILL environment (e.g. by using Calibre), since these pCells go through a number of other post-processing steps in SKILL.

    Could you stream out, run Calibre, and stream back in to SKILL? Or run Calibre on the OA database directly using sh() or ipcBeginProcess()? And then there's Assura... :)
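
    Roughly like this (the command line and runset name here are placeholders, not a known-good Calibre invocation):

    ; launch Calibre as a separate process and wait for it to finish
    let( (pid)
      pid = ipcBeginProcess("calibre -drc myRules.svrf")  ; placeholder command/runset
      ipcWait(pid)  ; block until the external job completes
      ; ...stream the cleaned data back in here and carry on in SKILL...
    )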
