


MC test dependent on pass/fail of previous MC test

TempViator
TempViator over 2 years ago

Hello.

I am trying to create a two-step test simulation for a test screen for customer sample units. I want to (a) tell production how many units to expect to test in order to generate a certain number of good sample units, and (b) tell field engineers what percentage of units might fail due to temperature changes in the field. I also want to do screening only at room temperature.

The first task is to take as-built silicon and predict the yield percentage based on room temperature results.  It may be only 60% yield, so there will be a large fall-out in the set of MC runs. (That is, there will not be a small set of bad devices that could easily be dealt with individually.)

Then I want to take just the passing MC runs and re-simulate them using the MC parameters at several temperatures to get a prediction of the yield of devices that passed room temp sims, but failed at other temps.  I expect a high yield / low fallout due to temp effects in my particular case. 

I can do this with a lot of post-processing, but I am asking if there is a way to simulate only the passing MC runs of test 1 in test 2 - essentially ignoring any MC run that was already a fail at room temp.

If this is possible, it seems like it would take 2 tests.  I have tried just running the same sims at room and other temps in corners, and then looking at the results across corners for each MC run, but I still have to find the specific room temp failures and somehow account for them in the yield calculations - which is post processing I can do, but would like to avoid.

Suggestions? Is this even possible?

Thank you.

  • ShawnLogan
    ShawnLogan over 2 years ago

    Dear TempViator,

    After reading your Forum post a couple of times through, I think I understand the desired automation you are describing. A few items are not clear to me, and clarifying them might help determine if my thought is a feasible approach. For example, you note:

    "The first task is to take as-built silicon and predict the yield percentage based on room temperature results. "

    1. Is the pass/fail criterion of your first set of Monte Carlo simulations (MC1) at room temperature determined by a single output? In other words, is the percentage of passing units defined by a single defined output? If not, how many outputs are used to determine your "pass/fail" device classification?

    2. Is the second Monte Carlo simulation set (MC2), ideally to be performed only on those simulations that successfully pass your criteria from MC1, a transient simulation or some other type of simulation (e.g., DC)?

    If I assume the answers to [1] and [2] are that a single output defines the pass/fail status of MC1 and that MC2 is a transient simulation, let me pass (please excuse my pun!) the following along to you.

    1. Suppose your output variable used to define the pass/fail criterion is named "circuit_param" and its pass/fail threshold is "circuit_param_min". Hence, if circuit_param >= circuit_param_min, the device is classified as passing screen MC1 and should be re-simulated under MC2.

    2. For MC2 (assuming it, like MC1, is a transient simulation), set the simulation stop time to be a variable, say, "t_stop_sim2". Define t_stop_sim2 as a design variable with the conditional expression:

    t_stop_sim2 = if(VAR("circuit_param") >= circuit_param_min TSTOP tstop_fail)

    where TSTOP is the desired and non-zero simulation stop time and tstop_fail is a small simulation time interval (ps or ns).

    In this fashion, when you send "circuit_param" from MC1 to MC2 using calcVal() and it is less than the MC1 passing threshold circuit_param_min, the simulation time interval for that MC2 iteration will be tstop_fail. Hence, the simulation will run for a very short time. (I avoid choosing tstop_fail as 0 since I want to evaluate an MC2 output that indicates it failed MC1.)

    3. Also define a design variable for MC2 that indicates if that MC1 iteration passed your criteria and define this variable as an output using a VAR() expression. For example, define a design variable in MC2 based on calcVal() from MC1 named "pass_mc1" as:

    pass_mc1 = if(VAR("circuit_param") >= circuit_param_min 1 0) (use calcVal() to get "circuit_param" from MC1)

    pass_mc2 = if(VAR("circuit_param") >= circuit_param_min 1 0) using the result from MC2

    Define outputs "pass_fail_mc1" and "pass_fail_mc2" as:

    pass_fail_mc1 = VAR("pass_mc1")

    pass_fail_mc2 = VAR("pass_mc2")

    After your MC2 completes, you will have a listing of MC2 outputs that indicates, for each iteration, whether it passed or failed your MC1 criterion, as well as whether it passed or failed your MC2 criterion. Between these, I think you can determine the two screening yields you are interested in:

    Initial yield is defined by the output pass_fail_mc1 over all MC2 iterations. MC2 yield is defined by MC2's output pass_fail_mc2.
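    The bookkeeping above can be sketched in plain Python. Everything here is illustrative: the Gaussian stand-in data and the names mirror the hypothetical "circuit_param"/"circuit_param_min" expressions above, and nothing below is Cadence API. It only models how the pass_mc1/pass_mc2 flags combine into the two screening yields:

    ```python
    import random

    random.seed(0)
    N = 1000
    circuit_param_min = 0.5  # hypothetical pass/fail threshold

    # Stand-in for MC1 (room-temperature) results, one value per MC iteration.
    mc1_results = [random.gauss(0.55, 0.1) for _ in range(N)]

    # Mirrors: pass_mc1 = if(VAR("circuit_param") >= circuit_param_min 1 0)
    pass_mc1 = [1 if v >= circuit_param_min else 0 for v in mc1_results]

    # MC2 nominally runs every iteration, but iterations that failed MC1 are
    # effectively skipped (tiny stop time), so no MC2 result exists for them.
    mc2_results = [random.gauss(v, 0.02) if p else None
                   for v, p in zip(mc1_results, pass_mc1)]
    pass_mc2 = [1 if (r is not None and r >= circuit_param_min) else 0
                for r in mc2_results]

    mc1_yield = sum(pass_mc1) / N                      # room-temperature screen yield
    mc2_yield = sum(pass_mc2) / max(sum(pass_mc1), 1)  # yield of MC1 passers at temp
    print(f"MC1 yield: {mc1_yield:.1%}  conditional MC2 yield: {mc2_yield:.1%}")
    ```
    
    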

    Shawn

  • Andrew Beckett
    Andrew Beckett over 2 years ago in reply to ShawnLogan

    I'm not sure you particularly need to use step 3 that Shawn showed - the overall yield estimate in the Yield view combines the points across the tests (say, one test at nominal temperature and a second test performing a sweep of temperature); it will only count a point as yielding if the same point passes in all the tests/sweep points.

    You may still want to use Shawn's suggestion of a shorter stop time if you don't need to truly run the simulation - I was going to suggest something similar using $finish_current_analysis in a Verilog-A block (Shawn's suggestion is simpler).
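    As a sanity check on that counting rule, here is a tiny Python sketch (the point/test data is entirely made up) in which a Monte Carlo point counts toward the overall yield only if it passes every test and every sweep point:

    ```python
    # Hypothetical pass/fail status for three MC points across three test/sweep
    # combinations; only a point that passes everywhere counts as yielding.
    points = {
        1: {"room_temp": True,  "sweep_m40C": True,  "sweep_125C": True},
        2: {"room_temp": True,  "sweep_m40C": False, "sweep_125C": True},
        3: {"room_temp": False, "sweep_m40C": True,  "sweep_125C": True},
    }

    overall_yield = sum(all(tests.values()) for tests in points.values()) / len(points)
    print(overall_yield)  # only point 1 passes everywhere -> 1/3
    ```
    
    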

    Andrew

  • TempViator
    TempViator over 2 years ago in reply to ShawnLogan

    Hi Shawn,

    Okay, I believe I understand your steps as outlined. Running short sims is better than running long (and unnecessary ones) at least.  I need to give this a try!

    I think the key to this task, as I had pictured it, is a nested structure where each MC run (group of parameters) goes through a series of Assembler tests before the next MC run (group of parameters) is applied.

    Let me answer your questions:

    1. The pass/fail criterion of MC1 in my case can be checked by a single parameter, although I have a range for the checked value (which is really 2 checks of the same parameter).  I could see a more general usage requiring several test parameters though.  For now I did not want to complicate things with that.

    2. The MC2 and MC1 are both Transient simulations -- the same sims, but just with more corners in MC2.

    Thanks, I will follow up after I try this out.

  • TempViator
    TempViator over 2 years ago in reply to Andrew Beckett

    I want a passing rate of just the first MC1 test, but then a marginal (?) fail rate from MC2 test of just those sims that already passed MC1.  I think I need step 3, or something like it, to automatically extract that information.

    To both you and Shawn: I think all of this will end up giving me a yield number, but the "Yield" information in the Data Window (and the ability to get conditional histograms of MC2 fails or passes) will not be right unless I can actually skip over MC1 sims that already failed.
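    The conditional number described here is simple post-processing once per-iteration flags like Shawn's pass_fail_mc1/pass_fail_mc2 exist. A minimal Python sketch with made-up flag data (not Cadence output):

    ```python
    # Illustrative per-iteration flags; iterations that failed MC1 are forced
    # to 0 in pass_fail_mc2, matching the short-stop-time scheme above.
    pass_fail_mc1 = [1, 1, 0, 1, 0, 1]
    pass_fail_mc2 = [1, 0, 0, 1, 0, 1]

    passers = [i for i, p in enumerate(pass_fail_mc1) if p == 1]
    mc1_yield = len(passers) / len(pass_fail_mc1)
    # MC2 yield conditioned on having passed MC1 (MC1 failures excluded).
    conditional_mc2_yield = sum(pass_fail_mc2[i] for i in passers) / len(passers)
    print(mc1_yield, conditional_mc2_yield)  # 4 of 6 pass MC1; 3 of those 4 pass MC2
    ```
    
    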

    Thanks for responding -- this has proven more esoteric than I expected!

  • ShawnLogan
    ShawnLogan over 2 years ago in reply to TempViator

    Dear TempViator,

    TempViator said:
    Running short sims is better than running long (and unnecessary ones) at least.

    I agree. The two reasons I thought it best to include a short simulation (in lieu of setting it to 0) were to force Spectre to create a results directory for each of the MC1 simulations, preventing a missing corner, and to create a metric indicating that an MC1 corner did not pass the room-temperature screen.

    TempViator said:
    1. The pass/fail criterion of MC1 in my case can be checked by a single parameter,

    Great - thank you for letting us know. As you noted, starting with the most straightforward case to validate the process makes sense.

    TempViator said:
    2. The MC2 and MC1 are both Transient simulations -- the same sims, but just with more corners in MC2.

    Excellent. This at least suggests the methodology might be useful.

    TempViator said:
    Thanks, I will follow up after I try this out.

    Good luck and thank you for the added information TempViator - it is useful!

    Shawn

