I'm in the process of weeding a regression test list. I have a coverage database from the full regression list and would like to diff it with the coverage database from the new, reduced regression test list. If possible, I would then like to trace any buckets covered by the full list, but not by the partial list, back to the original tests that covered them.
Is that possible using IMC? If not, is it possible to do from Specman itself?
(Note that we're not using vManager)
How did you generate the reduced test list? In theory the ranking in IMC should be "perfect", meaning you shouldn't lose any test that adds even one unique bin, so the reduced test list should still give exactly the same set of hit bins as the full regression.
IMC does have a Tcl command, "report_unique", that lets you compare two coverage databases and see which bins are unique to each one. Search for report_unique on support.cadence.com or jump to this FAQ for details.
You would need a bit of scripting to figure out which of the full regression's tests add unique bins compared to the merged reduced-regression database, but it could be done by simply iterating through each test in the full regression.
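The iteration described above is essentially set arithmetic on bin names. Here is a minimal Python sketch, assuming you have already exported a plain-text list of covered bin names for each full-regression test and for the reduced list's merged database (the bin names and the export format are made up for illustration; they are not real IMC output):

```python
# Sketch only: assumes per-test covered-bin lists have already been
# extracted from the coverage databases into sets of bin-name strings.

def covered_bins(lines):
    """Parse one bin name per non-empty line into a set."""
    return {line.strip() for line in lines if line.strip()}

def trace_missing_bins(full_tests, reduced_merged):
    """Map each bin hit by the full list but missed by the reduced
    list's merge back to the full-list tests that covered it.

    full_tests: dict of test name -> set of covered bin names
    reduced_merged: set of bin names covered by the reduced merge
    """
    full_union = set().union(*full_tests.values())
    missing = full_union - reduced_merged
    return {b: sorted(t for t, bins in full_tests.items() if b in bins)
            for b in sorted(missing)}

# Toy example with made-up bin names:
full = {
    "test_a": {"cg.addr.low", "cg.addr.high"},
    "test_b": {"cg.addr.high", "cg.burst.len4"},
}
reduced = {"cg.addr.low", "cg.addr.high"}
print(trace_missing_bins(full, reduced))
# {'cg.burst.len4': ['test_b']}
```

The per-bin test lists are exactly the trace-back you asked about: for every bucket lost by the reduced list, you get the full-list tests that hit it.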
Our merging process is two-step: (1) coverage from the tests is merged into two merged files; (2) the two files are merged together. Our ranking output looks like the listing below. Both first_file and second_file were apparently deleted in the process. Is there a way to rank the original tests? From the docs I believe not, since ranking either accepts the tests themselves, or you ask it to look at the previous merge, which in this case is useless. Correct?
Using resultant model of previous merge ...
Coverage ranking options:
============================================================================
Rank options: -out rank_rpt.txt -use_prev_merge
Rank elements: Model
Weight: Block = 1, Expression = 1, Toggle = 1, Fsm = 1, Assertion = 1, Covergroup = 1
Target Cumulative Grade(%): 100
Max Number of Runs: 2

Ranking of coverage runs
============================================================================
Cumulative covered(%): 104869/140653 (74.56%)
Number of Optimized Runs: 2

         Run ***.   Self Contrib.
RunId    Grade(%)   Grade(%)       # items   Name
============================================================================
Optimized Runs:
1        74.42%     74.42%         104667    first_file
2        74.56%     54.52%         202       second_file
When you merge a set of tests, all information about which test hit which bins is discarded, leaving only a single merged "test" that cannot be used for ranking the original tests. If you want to rank the original tests you must preserve the original test coverage files, at least until the ranking has been performed.
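To see why the per-test information matters, here is a conceptual sketch of what coverage ranking does (this is the generic greedy set-cover idea, not Cadence's actual algorithm): repeatedly pick the test that adds the most not-yet-covered bins. It only works if each test's own bin set is still available, which is exactly the information a merge throws away:

```python
# Conceptual greedy ranking sketch; all test names and bins are made up.

def rank_tests(tests):
    """tests: dict of test name -> set of covered bins.
    Returns tests in ranked order with their incremental bin count."""
    remaining = dict(tests)
    covered = set()
    ranking = []
    while remaining:
        # Pick the test contributing the most new bins.
        name, bins = max(remaining.items(),
                         key=lambda kv: len(kv[1] - covered))
        gain = bins - covered
        if not gain:
            break  # no remaining test adds anything new
        ranking.append((name, len(gain)))
        covered |= bins
        del remaining[name]
    return ranking

tests = {
    "t1": {"a", "b", "c"},
    "t2": {"b", "c"},
    "t3": {"c", "d"},
}
print(rank_tests(tests))
# [('t1', 3), ('t3', 1)]
```

With only a merged database you would have a single entry whose bin set is the union of everything, so there is nothing left to rank against.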
In your example you are just ranking the two merged files against each other, which is almost certainly not what you intended.
That's what I thought. Thanks.