
Understanding underlying statistical assumptions in Monte Carlo simulations

DomiHammerfall 9 months ago

Dear Community

Suppose I want to design a 5:1 current mirror. There are four ways to set up the circuit/simulation:

a) Make the width of one transistor 5 times bigger.
b) Make 5 instances of a unit-sized transistor to create one big device.
c) Use the device multiplier (m-factor) with mfactorcorrelation=no.
d) Use the device multiplier (m-factor) with mfactorcorrelation=yes.

I would expect that:
- (b) and (d) are identical because in both situations we have 5 separate, uncorrelated devices.
- (a) and (c) are identical because 5 fully correlated, unit-sized devices should behave the same as one big device.
- (b) and (d) yield a higher variance than (a) and (c) because we assume the five unit-sized devices to be five statistically independent mismatch sources.

Here is what 1000 MC samples (mismatch only, fully random sampling) look like in Spectre:

The plot shows the standard deviation of the output current (in absolute numbers) versus the device area.
Quick side note: Doing version (a) in the simulator gives a small systematic error since the widths are unequal.

Question 1:
In contrast to my expectation, (a) is identical to (b) and (d). This makes sense physically; after all, it's a question of the absolute device area, so whether I use one big device or many smaller devices should not matter. But shouldn't the variance be higher when I tell the simulator to explicitly assume that the five instances are uncorrelated? What is happening under the hood here?

Edit: This is correct. If you do the calculations carefully, (a), (b), and (d) are also the same theoretically (ignoring systematic errors in (a), of course).
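
For anyone who wants to check this on paper: below is a minimal sketch of that calculation, assuming a simple square-law MOS with only Vth mismatch and a Pelgrom-style sigma(dVth) = A_VT/sqrt(W*L). It is not Spectre's implementation, and all the numbers (A_VT, W, L, Vov, K) are made up for illustration; it only shows why one 5x-wide device and five uncorrelated unit devices give the same output-current spread.

```python
# Minimal sketch: square-law MOS with only Vth mismatch and a Pelgrom-style
# sigma(dVth) = A_VT / sqrt(W*L). NOT Spectre's implementation; all numbers
# below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                        # Monte Carlo samples
A_VT = 3e-3                        # Pelgrom coefficient [V*um], illustrative
W, L = 1.0, 1.0                    # unit-device size [um]
Vov = 0.2                          # overdrive voltage [V]
K = 200e-6                         # current factor of one unit device [A/V^2]

sigma_unit = A_VT / np.sqrt(W * L)          # sigma(dVth) of a unit device

def i_d(dvth, k):
    """Square-law drain current for a device with current factor k."""
    return k * (Vov - dvth) ** 2

# (b)/(d): five independent unit devices in parallel
dvth_split = rng.normal(0.0, sigma_unit, size=(N, 5))
i_split = i_d(dvth_split, K).sum(axis=1)

# (a): one device with 5x the width -> 5x the area and current factor,
#      so sigma(dVth) shrinks by sqrt(5)
dvth_big = rng.normal(0.0, sigma_unit / np.sqrt(5), size=N)
i_big = i_d(dvth_big, 5 * K)

print("std(I), five uncorrelated units:", i_split.std())
print("std(I), one 5x-wide device:     ", i_big.std())
# Both come out the same (to within sampling noise): the linearised hand
# calculation gives 2*K*Vov*A_VT*sqrt(5/(W*L)) in either case.
```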

Question 2:
Version (c) looks a bit off. If you run the same simulation with another mirroring ratio, e.g. 1:1 or 20:1 (same output current, i.e. the bias current is scaled accordingly), the standard deviation of version (c) is always the same; in fact, every design point is exactly the same. That does not seem right to me. An explanation of the simulator's behaviour would be appreciated here.

  • Frank Wiedmann 9 months ago

    Uncorrelated devices have a lower variance: they vary independently, possibly in different directions, so the variations partly cancel each other. For 100% correlated devices, the variations always add up. This is probably also the reason why the mirror ratio makes no difference for your version (c): the relative variation is always the same for all values of m.
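
    As a toy numerical illustration of this point (plain Gaussian sums only, not a transistor or Spectre simulation; m and sigma are arbitrary):

```python
# Toy illustration: summing m fully correlated vs. m uncorrelated
# Gaussian variations. Not a circuit simulation; numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
N, m, sigma = 100_000, 5, 1.0

# m fully correlated sources: every copy sees the same deviation.
common = rng.normal(0.0, sigma, size=N)
sum_correlated = m * common                 # std scales with m

# m uncorrelated sources: independent deviations partly cancel.
independent = rng.normal(0.0, sigma, size=(N, m))
sum_uncorrelated = independent.sum(axis=1)  # std scales with sqrt(m)

print("correlated:   std(sum)/m =", sum_correlated.std() / m)    # ~ sigma
print("uncorrelated: std(sum)/m =", sum_uncorrelated.std() / m)  # ~ sigma/sqrt(m)
# Normalised to the m-times larger mean, the fully correlated case is
# independent of m - consistent with version (c) looking identical for
# every mirror ratio.
```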

  • DomiHammerfall 8 months ago in reply to Frank Wiedmann

    Hi

    I would like to bring up this topic again.
    First, thanks to Frank for his response.


    If you blindly apply the mismatch theory, meaning that you use the same Pelgrom coefficients in all settings, a big transistor has less variance than a set of smaller, uncorrelated devices connected in parallel with the same area.

    Edit: This is wrong. The outcome is the same in both scenarios.

    Clearly, that is not what is happening in the simulator. So what exactly does Spectre do in each setting? How does it sample parameter variations in each scenario? If somebody could elaborate on how it's done, that would be great.

  • Andrew Beckett 8 months ago in reply to DomiHammerfall

    I didn't see either of the earlier posts in this thread (something is afoot with the email notifications not always being sent; hopefully that will get fixed when we upgrade to a newer version of the forum software; with a bit of luck that will also fix the sporadic image uploading issues that some are facing too).

    Please take a look at Recommended Spectre Monte Carlo modeling methodology (I think the same paper is on the Cadence support site somewhere too, but I knew where it was on the Designer's Guide site!). Note that whilst this talks about modelling the effects of reducing mismatch by close proximity and the use of correlation coefficients, in practice the "mismatch" part of the statistical models from most foundries just models the residual local variation of a device and does not take into account whether or not you use good layout practice to minimise the differences. Put another way, the term "mismatch" in the model is slightly misleading - it's more about modelling global (die-to-die) or local (device-to-device) variation - and trying to figure out appropriate correlation coefficients (which are often on fairly meaningless parameters that carry the variation) is pretty hard and will not have been characterised by the foundries. Note too that mfactorcorrelation doesn't always make sense (it depends on the foundry), as sometimes they will have subckt or other models which have already taken the m-factor into account.

    The general principle of what Spectre does is that it simply generates random values for each of the statistical parameters according to the specified distributions. A value is picked randomly once per simulation point (using the "process" distribution), and that value is then locally varied once per (subckt) instance using the "mismatch" distribution (see the paper). Those parameters are then used within the models to adjust various model parameters (exactly how is down to the foundry who produced the models).
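
    A rough sketch of this two-level sampling scheme is shown below. It is not Spectre's actual implementation; the instance names and sigmas are invented, and how a sampled value is mapped onto model parameters is entirely up to the foundry's model code.

```python
# Rough sketch of two-level statistical sampling: one "process" draw per
# Monte Carlo point, plus one "mismatch" draw per instance. Instance names
# and sigmas are invented; the mapping onto model parameters is foundry-defined.
import numpy as np

rng = np.random.default_rng(2)
n_points = 4                                  # Monte Carlo simulation points
instances = ["M1", "M2a", "M2b", "M2c", "M2d", "M2e"]   # hypothetical devices
sigma_process, sigma_mismatch = 1.0, 0.1      # illustrative sigmas

for point in range(n_points):
    # Global (die-to-die) variation: one value shared by all instances.
    process_value = rng.normal(0.0, sigma_process)
    # Local (device-to-device) variation: a fresh value per instance.
    sampled = {name: process_value + rng.normal(0.0, sigma_mismatch)
               for name in instances}
    print(f"point {point}: process = {process_value:+.3f} |",
          ", ".join(f"{k} = {v:+.3f}" for k, v in sampled.items()))
```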

    Andrew

  • DomiHammerfall 7 months ago in reply to Andrew Beckett

    Dear Andrew

    Thank you very much for your response. I carefully examined the linked whitepaper and found it indeed helpful. I have two follow-up questions:

    1. The guide does not talk about device segmentation, but if I understood the paper correctly, that has no impact on how the simulator behaves: Spectre will always sample from the same distribution, regardless of whether I split my device into smaller instances. Is that correct?

    2. The paper further states that "However, process experts are completely free to decide upon (measure) a ratio for their own process, and communicate the corresponding correlation coefficient for such matched devices to their design community." This is actually new to me. My understanding - so far - has been that local, random fluctuations are always assumed to be fully uncorrelated, no matter the situation. I have two PDKs installed and checked some matching reports of different devices (MOSFETs, CAPs, ...); I couldn't find a word about (recommended) correlation values. So, as you stated in your answer, this is not characterized by the foundries. Does that mean the fully uncorrelated assumption holds?

  • Frank Wiedmann 7 months ago in reply to DomiHammerfall

    My understanding is that foundries usually characterize the mismatch parameters with layouts that use best practices for good matching (small distance, same orientation, same environment). For layouts that don't take matching into account, the mismatch variation of the simulation models might be optimistic. For details, you will probably have to take a look at the documentation from the foundry or ask them.

  • Andrew Beckett 7 months ago in reply to Frank Wiedmann

    Frank has said what I would have said for point 2. For point 1, Spectre is indeed sampling from the same distribution (I'm not sure how it could do otherwise - there will be more samples if there are more instances, but the distribution remains the same) - but of course the impact on the device will depend on how the varied parameter is used in the models.

    Andrew

  • DomiHammerfall 7 months ago in reply to DomiHammerfall

    Dear Andrew, dear Frank

    Thanks for clarifying the simulator's behaviour and clearing up some misunderstandings.
