
Overloading $error in SVA under NCSIM

archive · over 18 years ago

Referring to a recent thread on www.verificationguild.com, topic:

Assertion failures and pass/fail status


Adam asked:

-------
Any word from Cadence on the ability to substitute your own $error() routine?

This definitely would be nice.
-----

I know that at least one other vendor allows this easily. What about NC? Or is there any easier way to keep a count of all SVA errors and declare PASS/FAIL at the end? How does URM handle this? This is a very common problem: many RTL engineers write assertions, and it is hard to ask them all to follow a standard action-block convention. With third-party IPs it is nearly impossible to guarantee.
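To make the request concrete: the manual discipline that is hard to enforce looks roughly like the sketch below (a hypothetical sketch only; all names are made up), where every action block must remember to call a shared counter:

    // Hypothetical sketch of manual SVA error accounting.
    package err_count_pkg;
      int unsigned sva_error_count = 0;
      function automatic void sva_error(string msg);
        sva_error_count++;
        $error("%s (SVA errors so far: %0d)", msg, sva_error_count);
      endfunction
    endpackage

    module tb;
      import err_count_pkg::*;
      bit clk, error_cond;

      // every assertion writer must remember this action block
      a_check: assert property (@(posedge clk) !error_cond)
        else sva_error("a_check failed");

      // declare PASS/FAIL once, at end of simulation
      final begin
        if (sva_error_count == 0) $display("TEST PASSED");
        else $display("TEST FAILED: %0d SVA errors", sva_error_count);
      end
    endmodule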

So it would be really useful to have SVA errors feed into the simulation PASS/FAIL accounting automatically.

Any advice on this will be appreciated.

Thanks
Ajeetha, CVC
www.noveldv.com


Originally posted in cdnusers.org by ajeetha
archive · over 18 years ago

    You can use the 'assert -summary' TCL command to get a summary of assertion errors.

assert -summary ............ Print a summary report of assertion statistics.
  -byfailure ............... Sort the summary by number of failures.
  -byname .................. Sort the summary by property name.
  -final ................... Defer the summary report until the end of simulation.
  -redirect <filename> ..... Print the summary output to the file <filename>.
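For example, an invocation along these lines (the report file name is just an illustration) would defer a failure-sorted summary to the end of simulation:

    ncsim> assert -summary -final -byfailure -redirect assert_report.txt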


    Originally posted in cdnusers.org by tpylant
archive · over 18 years ago

    Posted By tpylant on 3/06/2007 7:13 PM
    You can use the 'assert -summary' TCL command to get a summary of assertion errors.

assert -summary ............ Print a summary report of assertion statistics.
  -byfailure ............... Sort the summary by number of failures.
  -byname .................. Sort the summary by property name.
  -final ................... Defer the summary report until the end of simulation.
  -redirect <filename> ..... Print the summary output to the file <filename>.

Thanks, but this would not be the easiest solution, for a few reasons:

1. I would still have to work out how to turn this into an "error count" that is easily visible from testbench code. Imagine something like the dut_error() call in e/Specman: it is fantastic because it gives a single point of contact for all errors. I remember a team member integrating it with OVL errors long ago.

2. Typically, enabling TCL access means degraded run-time performance, which is not ideal for regressions. For instance, I can't simply run "ncelab my_top" followed by "ncsim my_top"; I would need -access R (and maybe WC as well).

Any better ideas? Given that Specman is now part of NC/Cadence, I'm quite sure you could integrate something like dut_error. The Verisity folks knew what verification means, and they paid attention to every sensible verification requirement. Perhaps that's the start-up mindset!


    Thanks
    Ajeetha, CVC
    www.noveldv.com


    Originally posted in cdnusers.org by ajeetha
archive · over 18 years ago

The way we suggest in the URM is to use the included `DUT_ERROR macro. This macro displays an error message and increments an error counter; the severity level passed as an argument decides whether the simulation continues or stops.

    The DUT_ERROR macro and error handling tasks are contained in a urm_util package that is included with the IPCM install.
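In spirit, such a macro behaves roughly like the sketch below. This is a hypothetical sketch only; the names, severity encoding, and details of the real urm_util package will differ:

    // Hypothetical sketch of a DUT_ERROR-style macro.
    package urm_util_sketch;
      typedef enum { SEV_WARNING, SEV_ERROR, SEV_FATAL } severity_e;
      int unsigned dut_error_count = 0;

      // Print the message, bump the counter, and let the severity
      // argument decide whether simulation continues or stops.
      function automatic void dut_error(string msg = "DUT error",
                                        severity_e sev = SEV_ERROR);
        dut_error_count++;
        $display("*** %s: %s (count=%0d)", sev.name(), msg, dut_error_count);
        if (sev == SEV_FATAL) $finish;
      endfunction
    endpackage

    // Bare macro form, so it drops straight into an action block.
    `define DUT_ERROR urm_util_sketch::dut_error()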

    Tim


    Originally posted in cdnusers.org by tpylant
archive · over 18 years ago

    Posted By tpylant on 3/07/2007 7:11 AM
The way we suggest in the URM is to use the included `DUT_ERROR macro. This macro displays an error message and increments an error counter; the severity level passed as an argument decides whether the simulation continues or stops.

    The DUT_ERROR macro and error handling tasks are contained in a urm_util package that is included with the IPCM install.

    Tim
Tim,
     That goes back to the start of this thread (on the Verification Guild): not every RTL engineer is willing to add `DUT_ERROR in an action block! If that's the case, how do we handle it? That's why I suggested overloading $error.

    Ajeetha, CVC
    www.noveldv.com


    Originally posted in cdnusers.org by ajeetha
archive · over 18 years ago

    I don't understand the difference between calling $error and calling `DUT_ERROR:

    assert(error_cond) $error;

    vs.

    assert(error_cond) `DUT_ERROR;



    Tim


    Originally posted in cdnusers.org by tpylant
archive · over 18 years ago

    Posted By tpylant on 3/07/2007 7:43 AM
    I don't understand the difference between calling $error and calling `DUT_ERROR:

    assert(error_cond) $error;

    vs.

    assert(error_cond) `DUT_ERROR;



    Tim

The difference is that the SV LRM demands that $error be implicitly called when an assertion fails, so with:

  assert (error_cond);

the tool should implicitly call $error. Overriding $error would therefore catch failures even from assertions written with no explicit action block, which `DUT_ERROR cannot do.
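In other words, per the LRM the two forms below should report identically on failure (a minimal sketch mirroring the code above):

    // with no action block, the default failure action is a call
    // to $error -- so a1 and a2 behave the same
    always @(posedge clk) begin
      a1: assert (error_cond);
      a2: assert (error_cond) else $error;
    end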

    Regards
    Ajeetha, CVC
    www.noveldv.com


    Originally posted in cdnusers.org by ajeetha
