
Monte Carlo simulation on matched devices

Senan over 2 years ago

Hello,

Suppose we verify a simple MOS current mirror using Monte Carlo (MC) simulation. The MC model used in the simulation includes statistical process variation, so in some samples one of the transistors has the maximum threshold voltage and the other the minimum. Hence a maximum deviation between the mirror outputs is expected in such a sample and can be monitored in the simulation.

However, in a practical implementation these transistors are matched, let's say matched very well with layout matching techniques, so such process variation should not be as effective as MC suggests.

In this case I wonder whether MC on the post-layout netlist already accounts for the matched transistors, and how it would know that those transistors are matched.
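
To make the concern concrete, here is a toy square-law estimate in Python (illustrative numbers only, not any particular PDK) of how a threshold-voltage difference between the two mirror devices shows up as a current-copy error:

def mirror_error(vov=0.2, dvth=0.01):
    """Relative output-current error of a simple MOS current mirror.

    For square-law devices driven by the same gate voltage,
    Iout/Iin ~ ((Vov - dVth) / Vov)^2, where dVth is the threshold
    difference between the output and input transistors.
    """
    return ((vov - dvth) / vov) ** 2 - 1.0

# A 10 mV threshold difference at 200 mV overdrive is roughly a -10% error.
print(f"relative mirror error: {mirror_error():.1%}")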

Thank you in advance for your help

Regards 

  • illaoi over 2 years ago

    - Based on my understanding, when you select a process corner, all of your components (FETs, passives) move to that specific corner, say fast-fast, so your sample example does not seem realistic.

    - Post-layout has nothing to do with Monte Carlo, unless their definition considers a (mu, sigma) for layout-dependent effects such as the poly effect, metal boundary effect, SA, ... which I don't think is the case, and obviously interconnects have no statistical variation.

    - I believe Andrew has said in a post that what you get from Monte Carlo assumes you have done an almost perfect job of layout matching, so if someone has not done so, they will most likely see worse results after fab. Also, I believe in your case there is no correlation defined between the FETs, so whether you place them far apart or close together, to the simulator they are the same.

  • Senan over 2 years ago in reply to illaoi

    I am also looking forward to the reply from Andrew :)

  • Andrew Beckett over 2 years ago in reply to Senan

    In essence, the "mismatch" variation with Monte Carlo is really modelling the residual random local variation of each device. It is not modelling systematic matching effects, but the residual random variation.

    Now of course, the sensitivity of a circuit to that local residual random variation will be a good proxy for the importance of good matching of those devices, but it's not generally correct to start setting correlation coefficients between the statistical parameters of well-matched devices, because the models are not usually modelled that way. There isn't a tool which uses the physical arrangement to figure out the impact of good or poor matching strategies; it's really not Monte Carlo that does this.

    In other words, even if you have arranged your layout to cancel out gradients and minimise mismatch effects by ensuring close proximity, alignment and orientation, there will still be random variation, and it is this residual variation that your Monte Carlo models are generally modelling.
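
    To make that distinction concrete, here is a minimal Python sketch (with made-up coefficients, not any foundry's model) of the usual structure: one shared "process" shift per Monte Carlo sample plus an independent "mismatch" term per device whose sigma shrinks with device area (the Pelgrom relation). Only the local term survives in the difference between a matched pair:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative constants only -- not from any PDK.
    A_VT = 3e-3      # assumed Pelgrom coefficient, V*um
    VTH_NOM = 0.4    # nominal threshold voltage, V

    def sample_vth_pair(w_um, l_um, n_samples, sigma_global=0.02):
        """Sample Vth for a matched pair: a shared global (die-to-die) shift
        per sample plus an independent local (mismatch) term per device."""
        sigma_local = A_VT / np.sqrt(w_um * l_um)              # Pelgrom scaling
        global_shift = rng.normal(0, sigma_global, n_samples)  # die-to-die
        local1 = rng.normal(0, sigma_local, n_samples)         # device 1
        local2 = rng.normal(0, sigma_local, n_samples)         # device 2
        return VTH_NOM + global_shift + local1, VTH_NOM + global_shift + local2

    vth1, vth2 = sample_vth_pair(w_um=10, l_um=1, n_samples=10000)
    # The global part cancels in the pair; the difference only sees the
    # local (mismatch) part, which is what drives offset or mirror error.
    print("sigma(Vth1 - Vth2) =", np.std(vth1 - vth2))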

    Andrew

  • Senan over 2 years ago in reply to Andrew Beckett

    Dear Andrew,

    Thank you very much for your nice explanation, I do appreciate your efforts.

    As you kindly stated, "There isn't a tool which uses the physical arrangement to figure out the impact of good or poor matching strategies; it's really not Monte Carlo that does this."

    If I understood that correctly, let me consider a circuit with two different layout implementations: the first with a high degree of matching, the other with no matching at all, just simple pick and place.

    I will consider DC parameters only for the post-layout simulation, like the input offset voltage of an op-amp, because the AC characteristics are sensitive to parasitics and will change from one layout to another anyway.

    Should I now expect the same offset voltage MC distribution from both post-layout simulations?

    Thank you once again

    Best Regards

  • Andrew Beckett over 2 years ago in reply to Senan
    Senan said:
    Should I now expect the same offset voltage MC distribution from both post-layout simulations?

    Yes, unless:

    1. the different layout of the instances causes some difference in the layout-dependent effects (e.g. LOD, well proximity effect, stress parameters, etc.)
    2. the parasitic resistance causes a change in the offset (there could be different voltages in the circuit because of the IR drop along the tracks, for example; see the quick estimate after this list)
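
    As a rough back-of-the-envelope check on the second point (a Python snippet with illustrative numbers only; use your own extracted resistances and branch currents):

    # Systematic offset from an asymmetric IR drop -- illustrative numbers only.
    i_branch = 10e-6        # branch current, A (assumed)
    r_asymmetry = 100.0     # extra routing resistance on one side, ohm (assumed)
    print(f"systematic offset ~ {i_branch * r_asymmetry * 1e3:.1f} mV")   # ~1.0 mV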

    Andrew

  • ShawnLogan over 2 years ago in reply to Andrew Beckett

    Dear Senan,

    Please allow me to add to Andrew's comment regarding your question about the degree to which the distributions of offset voltages of two different layouts of an op-amp are similar.

    Since the amount of random variation in the Vgs of a device depends on its absolute value, any difference in the current densities of the P and N rail input devices of the op-amp will result in an offset voltage. Hence, if there is any systematic difference in their current densities between the two layouts, a Monte Carlo simulation will also show a difference in the offset distributions.
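
    A rough square-law Python sketch of why the Vgs spread grows with current density (overdrive), assuming Pelgrom-style threshold and beta mismatch with made-up coefficients:

    import numpy as np

    # With Vgs = Vth + Vov and Id = 0.5*beta*Vov^2 (square law), relative beta
    # variation maps into Vgs with a weight of Vov/2, so a higher current
    # density (larger Vov) gives a larger Vgs spread for the same device area.
    A_VT = 3e-3      # V*um, assumed Vth mismatch coefficient
    A_BETA = 0.01    # um (fractional), assumed relative beta mismatch coefficient

    def sigma_vgs(vov, w_um, l_um):
        area = w_um * l_um
        s_vth = A_VT / np.sqrt(area)
        s_beta_rel = A_BETA / np.sqrt(area)
        return np.sqrt(s_vth**2 + (vov / 2 * s_beta_rel) ** 2)

    for vov in (0.1, 0.2, 0.4):
        print(f"Vov = {vov:.1f} V -> sigma(Vgs) ~ {sigma_vgs(vov, 10, 1) * 1e3:.2f} mV")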

    Basically, the approach I might suggest is the following:

    1. Perform schematic netlist based offset voltage simulations using your set of operating point, process, voltage, and temperature conditions to validate that the offset voltage variation is at a minimum (ideally zero).

    2. Perform post-layout netlist based offset voltage simulations using the same set of operating point, process, voltage, and temperature conditions to confirm that the offset voltage variation is sufficiently close to that of your schematic-based netlist. Note, to identify layout features that are responsible for systematic effects in the offset voltage, you may need to perform simulations using capacitance-only and full RC extracted views of your layout. A tool such as Paragon X can be extremely valuable in determining the root cause of any systematic layout effects on offset.

    3. Only after you have minimized the post-layout netlist based offset voltage, perform a set of Monte Carlo simulations to assess the resulting random variation of the offset. Consider using sets of operating point, process, voltage, and temperature conditions that will produce maximum and minimum gate-source voltage conditions to estimate the range in random variation. Use a sufficient number of simulations to verify the confidence interval of the predicted standard deviations and validate that the distributions are representative of a Gaussian distribution. As an example, Figure 1 illustrates the variation in standard deviation for four time-based parameters with the number of simulations chosen for a Monte Carlo analysis (see also the small sketch after this list).
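
    For a rough sense of how the confidence interval on an estimated standard deviation narrows with the number of MC samples, here is a Python sketch using the standard chi-square interval under the assumption of a normally distributed output (not tied to any particular tool):

    import numpy as np
    from scipy import stats

    # Two-sided chi-square confidence interval on a standard deviation that
    # was estimated from n Monte Carlo samples of a normally distributed
    # quantity, expressed as a multiple of the estimate itself.
    def sigma_ci(n, s=1.0, conf=0.95):
        alpha = 1.0 - conf
        lo = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
        hi = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
        return lo, hi

    for n in (50, 200, 1000):
        lo, hi = sigma_ci(n)
        print(f"n = {n:4d}: 95% CI on sigma = [{lo:.3f}, {hi:.3f}] x estimate")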

    Shawn

    Figure 1: standard deviation of four time-based parameters versus the number of Monte Carlo simulations.

  • Senan over 2 years ago in reply to Andrew Beckett

    Dear Andrew,

    Thank you very much for your confirmation, you made it very clear for me 

  • Senan over 2 years ago in reply to ShawnLogan

    Dear Shawn,

    Thanks a lot for your effort in providing the useful details.

    Everything is clear, but I want to use this chance to ask you two questions that are on my mind. First, what number of MC samples is required for 95% confidence? Maybe there is a mathematical expression relating the two?

    The second question: I saw two papers in which the authors nested the MC run over process corners, for example running MC around the slow or the fast process corner. The technology I use doesn't allow that, but I am curious. In my opinion MC is the process profile with maximum and minimum deviation from the typical mean process, so I am not sure how they did it that way, nor whether it is realistic to consider.

  • Andrew Beckett over 2 years ago in reply to Senan
    Senan said:
    Everything is clear, but I want to use this chance to ask you two questions that are on my mind. First, what number of MC samples is required for 95% confidence? Maybe there is a mathematical expression relating the two?

    That's complicated, because it depends on the yield that you need and the distribution of the output expressions. You could pick the option on the Monte Carlo options form to specify that you want to verify the yield, and then choose the basic auto-stop (the advanced auto-stop requires additional licenses and can be more efficient).
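
    For a rough sense of the numbers involved (standard binomial statistics in Python, not the specific algorithm behind the auto-stop options), if you observe zero failures, the sample count needed to claim a given yield with a given confidence can be bounded like this:

    import math

    # Binomial 'success-run' bound: with zero observed failures in n samples,
    # a yield of at least Y is demonstrated at confidence C once Y**n <= 1 - C.
    def samples_for_yield(target_yield, confidence=0.95):
        return math.ceil(math.log(1.0 - confidence) / math.log(target_yield))

    # e.g. a one-sided 3-sigma yield (~99.865%) at 95% confidence:
    print(samples_for_yield(0.99865))   # roughly 2200 failure-free samples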

    Senan said:
    The second question: I saw two papers in which the authors nested the MC run over process corners, for example running MC around the slow or the fast process corner.

    Some technologies provide distributions about a process corner - so that means that you can simulate the local (particularly "mismatch") variation around that corner. I've always had a conceptual challenge with this approach, but it does seem to be how some technologies organise their statistical models. It's always best to understand from the foundry themselves what their statistical modelling strategy is. Put another way, even if the process is centred in a particular place, there will still be die-to-die variation and device-to-device variation, so it is still a realistic thing to consider.
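
    Conceptually (a Python sketch with made-up numbers, not how any particular foundry organises its models), nesting MC over corners just means the global part of the variation is replaced by a fixed corner shift while the local mismatch part is still sampled per device:

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed corner shifts of the mean threshold voltage, in volts.
    CORNER_SHIFT = {"tt": 0.0, "ss": +0.03, "ff": -0.03}

    def sample_pair_at_corner(corner, sigma_local=1e-3, n=1000, vth_nom=0.4):
        """Fix the global (die-to-die) part at a corner; sample local mismatch."""
        base = vth_nom + CORNER_SHIFT[corner]
        vth1 = base + rng.normal(0, sigma_local, n)
        vth2 = base + rng.normal(0, sigma_local, n)
        return vth1, vth2

    for corner in ("ss", "tt", "ff"):
        v1, v2 = sample_pair_at_corner(corner)
        print(corner, "mean Vth:", round(float(v1.mean()), 3), "V,",
              "sigma(dVth):", round(float(np.std(v1 - v2)), 5), "V")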

    Andrew

  • Senan over 2 years ago in reply to Andrew Beckett

    Dear Andrew,

    Thank you once again for your help, the second answer is clear to me now after your explanation.

    For the first part, regarding the confidence level, I usually use a fixed number of MC samples, not the auto-stop.

    I mostly target 3 sigma and I run MC with a large number of samples, like 500 or sometimes 1000. I always run it with the confidence level not set to a value. After the MC finishes and I get a yield of, say, 100%, I set the confidence level to 90 or 95% and I notice a drop in the estimated yield, but this drop becomes smaller when the MC has more samples. That is why I think I need to know the optimum number of fixed samples to reach a certain sigma with a certain confidence level.
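
    The behaviour described matches a standard binomial lower confidence bound on the estimated yield. A Python sketch (using the one-sided Clopper-Pearson bound, which may or may not be exactly what the tool computes) shows the gap shrinking as the sample count grows:

    from scipy import stats

    # One-sided Clopper-Pearson lower bound on yield after n_pass passing
    # samples out of n_total. With all samples passing, the point estimate is
    # 100%, but the bound only approaches it as the sample count grows --
    # which is the 'drop' that gets smaller with more samples.
    def yield_lower_bound(n_pass, n_total, conf=0.95):
        return stats.beta.ppf(1.0 - conf, n_pass, n_total - n_pass + 1)

    for n in (200, 500, 1000, 3000):
        print(f"n = {n:4d}, all pass: 95% lower bound on yield = "
              f"{yield_lower_bound(n, n):.2%}")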

    Thank you

    Regards
