
  1. Community Forums
  2. Functional Verification
  3. Is it possible to get a diff between two coverage databases...

Stats

  • Locked
  • Replies 4
  • Subscribers 65
  • Views 16923

Is it possible to get a diff between two coverage databases in IMC?

aefody · over 5 years ago

I'm in the process of weeding a regression test list. I have a coverage database from the full regression list and would like to diff it against the coverage database from the new, reduced regression test list. If possible, I would then like to trace any buckets covered by the full list but not by the reduced list back to the original tests that covered them.

Is that possible using IMC? If not, is it possible to do from Specman itself?

(Note that we're not using vManager)

Thanks,

Avidan

StephenH · over 5 years ago

    Hi Avidan.

How did you generate the reduced test list? In theory the ranking in IMC should be "perfect", meaning you shouldn't lose any test that adds even one unique bin, so the reduced test list should hit exactly the same set of bins as the full regression.
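In effect, that kind of ranking is a greedy set cover over the per-test hit bins: keep picking the test that adds the most not-yet-covered bins until no test adds anything new. A minimal Python sketch of the idea, with made-up test names and bin sets standing in for real coverage data:

```python
# Hypothetical per-test hit-bin sets (in practice these would come from
# the per-test coverage databases).
tests = {
    "t1": {"a", "b", "c"},
    "t2": {"b", "c", "d"},
    "t3": {"e"},
    "t4": {"a", "b"},
}

def rank(tests):
    """Greedy ranking: repeatedly pick the test adding the most new bins."""
    covered, order = set(), []
    remaining = dict(tests)
    while remaining:
        name, bins = max(remaining.items(),
                         key=lambda kv: len(kv[1] - covered))
        if not bins - covered:
            break  # no remaining test adds a new bin; the rest are redundant
        order.append(name)
        covered |= bins
        del remaining[name]
    return order, covered

order, covered = rank(tests)
print(order)                                     # ['t1', 't2', 't3']
print(covered == set().union(*tests.values()))   # True: no hit bin is lost
```

Here t4 is dropped because every bin it hits is already covered, yet the reduced list (t1, t2, t3) still covers the full union of bins, which is exactly the property Steve describes.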

IMC does have a Tcl command, report_unique, that compares two coverage databases and reports which bins are unique to each one. Search for report_unique on support.cadence.com for details.

    You would need a bit of scripting to figure out which of the full regression's tests were adding unique bins compared to the merged reduced regression database, but it could be done by simply iterating through each test in the full regression.
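That iteration might look like the following Python sketch, assuming you can export the set of hit bins for each original test and for the merged reduced database (the test names, bin names, and data layout here are all hypothetical, not an IMC API):

```python
# Hypothetical exported data: each full-regression test mapped to the
# bins it hit, plus the combined hit set of the merged reduced database.
full_tests = {
    "smoke_1":  {"fifo.full", "fifo.empty"},
    "stress_7": {"fifo.full", "arb.grant2"},
    "corner_9": {"arb.grant2", "err.parity"},
}
reduced_hits = {"fifo.full", "fifo.empty", "arb.grant2"}

# Bins covered by the full regression but lost in the reduced list:
lost = set().union(*full_tests.values()) - reduced_hits

# Trace each lost bin back to the full-regression test(s) that hit it:
culprits = {bin_: [t for t, bins in full_tests.items() if bin_ in bins]
            for bin_ in lost}

print(lost)      # {'err.parity'}
print(culprits)  # {'err.parity': ['corner_9']}
```

This is the trace-back Avidan asked about: each lost bin names the test(s) that would have to be restored (here, corner_9) to recover it.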

    Regards,

    Steve

aefody · over 5 years ago, in reply to StephenH

Our merging process seems to be two-step: (1) coverage from the tests is merged into two intermediate files; (2) those two files are merged together. Our ranking output looks like the excerpt below. Both first_file and second_file were apparently deleted in the process. Is there a way to rank the original tests? From the docs I believe not, since ranking either takes the tests themselves or looks at the previous merge, which in this case is useless. Correct?

    Using resultant model of previous merge ...
    Coverage ranking options:
    ============================================================================
    Rank options: -out rank_rpt.txt -use_prev_merge
    Rank elements: Model
    Weight: Block = 1, Expression = 1, Toggle = 1, Fsm = 1, Assertion = 1, Covergroup = 1
    Target Cumulative Grade(%): 100
    Max Number of Runs: 2

    Ranking of coverage runs
    ============================================================================

    Cumulative covered(%): 104869/140653 (74.56%)
    Number of Optimized Runs: 2

    Run ***. Self Contrib. Run
    Id Grade(%) Grade(%) # items Name
    ============================================================================
    Optimized Runs:
    1 74.42% 74.42% 104667 first_file
    2 74.56% 54.52% 202 second_file

StephenH · over 5 years ago, in reply to aefody

    Hi Avidan.

    When you merge a set of tests, all information about which test hit which bins is discarded, leaving only a single merged "test" that cannot be used for ranking the original tests. If you want to rank the original tests you must preserve the original test coverage files, at least until the ranking has been performed.

    In your example you are just ranking the two merged files against each other, which is almost certainly not what you intended.

aefody · over 5 years ago, in reply to StephenH

    That's what I thought. Thanks.

