

Directed vs Random Testing

FormerMember over 17 years ago

Hi

I am writing a paper looking at "myths" in functional verification. By "myth" I mean the types of things which people take as accepted truth even though there may not be much evidence to back them up. So, is the death of directed testing one such myth?

My opinion is:
- random has replaced directed as the preferred test methodology at block level (and if you want a directed test, you write it via your random test bench - see the sketch after this list)
- but at chip level there is still a lot of directed testing, for several reasons: the legacy of existing chip-level test benches and the legacy of thought (i.e. "we must see the chip do this before we ship"), the desire to see specific integration scenarios (although random + coverage could do that too), and the fact that you are often doing HW/SW co-verification, where directed testing is more usual
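
To make the first point concrete, here is a rough sketch in plain Python (not a real HVL; the packet generator and its fields are made up) of the idea that a directed test is just a constrained-random test with the constraints tightened down to a single point:

    import random

    class PacketGen:
        def __init__(self, lengths=range(1, 1025), kinds=("read", "write")):
            # The "constraints" are simply the legal values the generator may pick.
            self.lengths = list(lengths)
            self.kinds = list(kinds)

        def next(self):
            return {"kind": random.choice(self.kinds),
                    "length": random.choice(self.lengths)}

    rand_gen = PacketGen()                                   # samples the full legal space
    directed_gen = PacketGen(lengths=[64], kinds=["write"])  # constrained down to one scenario

    print(rand_gen.next())      # e.g. {'kind': 'read', 'length': 731}
    print(directed_gen.next())  # always {'kind': 'write', 'length': 64}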

I'm looking for an active discussion, plus references to good articles or papers on this topic, please.

Thanks

Mike Bartley

stelix over 17 years ago

     Mike,

     Great topic. Here are a couple of comments:

     1) Has random replaced directed?

    "All test-cases are born equal, as long as they find bugs" :-) 

    Finding bugs is decoupled from the way test-cases are born. The problem with directed testing is that it forces the verifier into a cause-and-effect type of thinking. In other words, the problem is not so much that the user provides inadequate stimulus to the design; it is rather that they are looking for only certain types of errors per test-case.

    So I think it is a relative myth that random "is better" than directed. What really makes the difference is taking the time to write independent monitors, checkers, and coverage collectors that can detect bugs and extract useful metrics. Once this milestone is reached, randomization can improve productivity, but it can also burn a lot of simulation time.
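
    As a rough sketch of that milestone (plain Python, illustrative names only): the scoreboard and coverage collector below observe the DUT's output stream and know nothing about how the stimulus was generated, so they work unchanged under directed or random tests.

        class Scoreboard:
            """Independent checker: verifies every observed transaction,
            no matter which test (directed or random) produced it."""
            def __init__(self):
                self.expected = []

            def push_expected(self, txn):
                self.expected.append(txn)

            def observe(self, txn):
                assert self.expected, f"unexpected transaction: {txn}"
                exp = self.expected.pop(0)
                assert txn == exp, f"mismatch: got {txn}, expected {exp}"

        class CoverageCollector:
            """Independent coverage: counts which (kind, length-bucket) bins were hit."""
            def __init__(self):
                self.bins = {}

            def sample(self, txn):
                bucket = (txn["kind"], txn["length"] // 256)
                self.bins[bucket] = self.bins.get(bucket, 0) + 1

            def hit_ratio(self, total_bins):
                # Fraction of the planned bins actually exercised.
                return len(self.bins) / total_bins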

    2) Chip level vs. block level

    When it comes to chip-level functionality, you offer three reasons why directed testing is preferred:

        a) Legacy environments and thought
        b) Specific integration scenarios
        c) HW/SW co-verification

     These are all very true. I would also add:

        d) Organizational issues: Folks doing chip-level verification are usually different from the folks doing block-level verification. A lot of times they are software engineers or the bring-up team. They are used to their own tools and methodologies.

        e) "Bang for the buck": A GPU validation team will want to run the complete OpenGL or DirectX compliance suite before signing-off. Do they need more random testing on top? Maybe. Unless a chip-level coverage database has been implemented we simply don't know! So it is important to first invest in chip-level coverage metrics. In many cases, existing directed environments may be "good enough" if we can measure their impact. 

        f) Scope: Block-level verification is almost entirely about functionality. Chip-level verification can include other components like, say, performance. Performance targets may be defined in terms of very specific stimuli (e.g. SPECint benchmarks), which naturally leads to directed runs.
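
    To make the "measure first" point in (e) concrete, here is a rough sketch (plain Python, made-up bin names) of merging the coverage hit by an existing directed suite with a random run, to see what, if anything, randomness adds:

        # Coverage bins hit by each suite ((kind, length-bucket) pairs, invented for illustration).
        directed_hits = {("write", 0), ("write", 1), ("read", 0)}
        random_hits = {("write", 0), ("read", 0), ("read", 3)}

        merged = directed_hits | random_hits           # union of both suites
        new_from_random = random_hits - directed_hits  # what random actually added

        print(f"directed alone: {len(directed_hits)} bins")
        print(f"merged:         {len(merged)} bins")
        print(f"random added:   {sorted(new_from_random)}")  # here: [('read', 3)]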

    I am more inclined to adopt constrained-random as a "top-off" type of technique for chip-level verification. However, both random and directed approaches can benefit from expanded use of coverage metrics and associated planning.

    Cheers,

    -Stelix. 

