

Stats

  • Locked
  • Replies 5
  • Subscribers 65
  • Views 8855
This discussion has been locked.

Loadpull Compression point impedance using xdb analysis in Harmonic Balance

SkkyLee over 1 year ago

Hello everybody. I am using harmonic balance analysis to determine the optimum load impedance of a power amplifier. In the Harmonic Balance analysis panel, I right-click to enable the Compression option.

Then, after netlisting and running, I can see the xdb option in Direct Plot -> Main Form; choosing it lets me plot the compression-point contours, with the result shown below.

From this plot it can be seen that when the load impedance is Zd = 7.44 + j4.78, the 1 dB output compression power is 28.2618 dBm.

But when I directly set the load port impedance to 7.44 + j4.78 and sweep Pin from -20 to 30 dBm to find the 1 dB output compression power, the result is hugely different, as the graph below shows: this time the simulated result is 30.416 dBm.

I don't know why. Can anybody help me? Thank you very much!
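For reference, the basic xdb-style extraction from a swept Pin/Pout curve can be sketched in a few lines of Python. This is only an illustrative sketch, not Cadence's actual implementation; the `op1db` helper and the synthetic soft-compression curve are hypothetical:

```python
import numpy as np

def op1db(pin_dbm, pout_dbm, ss_points=4, x_db=1.0):
    """Estimate the X dB output compression point from a power sweep.

    The small-signal line Pout = slope*Pin + intercept is fitted over the
    lowest `ss_points` samples, then the first input power at which the
    actual output falls `x_db` below that line is located.
    """
    slope, icpt = np.polyfit(pin_dbm[:ss_points], pout_dbm[:ss_points], 1)
    comp = (slope * pin_dbm + icpt) - pout_dbm   # deviation from linear fit
    i = np.argmax(comp >= x_db)                  # first sample past x_db
    pin_x = np.interp(x_db, comp[i - 1:i + 1], pin_dbm[i - 1:i + 1])
    return np.interp(pin_x, pin_dbm, pout_dbm)   # output power at that Pin

# Hypothetical soft-compression curve: 12 dB gain, ~30 dBm saturated power.
pin = np.linspace(-20, 20, 81)
pout = 30 + 10 * np.log10(1 - np.exp(-10 ** ((pin + 12 - 30) / 10)))
print(f"OP1dB = {op1db(pin, pout):.2f} dBm")
```

The key point is that the result depends on both the swept curve itself and the input-power range used to fit the small-signal line, which is relevant to the discrepancy discussed below.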

  • David Webster over 1 year ago

    Hi SkkyLee,

    I'm just curious, but what does the Pout vs. Pin plot look like if you were to pick a purely real value of 10 for the load impedance?

    David Webster

  • SkkyLee over 1 year ago in reply to David Webster

    This is the interesting phenomenon: when the real part of the load impedance is larger than the real part of the optimum load, the xdb load-pull contour fits the compression-point sweep result very well. When the load impedance is 10 Ω (larger than the real part of the optimum load, which is about 7 Ω), the OP1dB is 30.39 dBm, which matches the xdb contour result.

  • ShawnLogan over 1 year ago in reply to SkkyLee

    Dear SkkyLee,

    SkkyLee said:
    But I don't perform transient analysis; the 30.416 dBm OP1dB result was obtained from the hb analysis's sweep function (as the screenshot below shows). So I want to confirm: when the pss/hb simulation's sweep function is run (in my case, Pin swept from -20 to 30 dBm), is a set of transient analyses executed?

    I now understand that the power transfer function plot is the result of an HB sweep of the input power and not a set of transient analyses. Thank you for this clarification, SkkyLee.

    SkkyLee said:
    But I don't know how to verify this assumption, because I don't understand what the small-signal transfer function means exactly. Does it mean the power amplifier's gain when the input power is very small? (My teacher also says that for the small-signal concept used in sp/ac analysis, the power level is always below -50 dBm; but in my PA simulation the minimum input power is -20 dBm, and I don't know whether this power level can be called "small-signal".) Also, I don't know why the slope value is set to 1.05 the second time (as the screenshot below shows). Is it because of the "least squares algorithm"?

    To determine the X dB compression point, one must know the small-signal gain of the amplifier. The small-signal gain is the gain when the output shows essentially no harmonic distortion. The input power range that can be used to measure the small-signal gain depends entirely on the gain characteristics of the amplifier under study; there is no single range of input power levels that reliably provides a good estimate. In your plot of input versus output power, I found that if one fits over input power levels between -20 dBm and -13 dBm, the line used to compute the 1 dB compression point has a slope of 1.00 and an intercept of 12.166 dBm. This produces your 1 dB compression point at an output power of 30.39 dBm.

    However, if I use a smaller range of input powers to compute the linear fit (-20 dBm to -19 dBm), the slope and intercept are 1.05 and 12.984 dBm. When I use this line to estimate the 1 dB compression point output power, I obtain 28.497 dBm. Hence, my hypothesis is that the small-signal gain of your amplifier in the sweep analysis is different from the small-signal gain used in the Xdb analysis.

    Unless you know the distortion characteristics of your amplifier's output signal as a function of the input power, I think you want to use the smallest range of input power possible. I do not know how Cadence's tool chooses the range of input powers to use in its curve fit to your sweep. It may use the gain of "1.0" you provided in the form:

    But, perhaps, you might try reducing the start of the input power sweep to something much less than -20 dBm to see whether the 1 dB compression point output power changes from 30.39 dBm.
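    To illustrate how sensitive the extrapolated 1 dB compression point is to the fit range, here is a small Python sketch. The Pin/Pout curve below is synthetic (a hypothetical amplifier with mild gain peaking at low power); the numbers are illustrative only, not your amplifier's data or Cadence's algorithm:

```python
import numpy as np

# Hypothetical Pin/Pout curve with mild gain expansion ("peaking") at low
# power followed by compression -- illustrative, not SkkyLee's actual PA.
pin = np.linspace(-20, 10, 121)                        # dBm, 0.25 dB steps
expansion = 0.8 * (1 - np.exp(-(pin + 20) / 10))       # gain rises ~0.8 dB
compression = 10 * np.log10(1 + 10 ** ((pin - 8) / 5)) # then rolls off
pout = pin + 12 + expansion - compression              # dBm

def p1db_input(pin, pout, lo, hi, x_db=1.0):
    """Input-referred X dB compression point, fitting the 'small-signal'
    line only over lo <= Pin <= hi (dBm)."""
    m = (pin >= lo) & (pin <= hi)
    slope, icpt = np.polyfit(pin[m], pout[m], 1)
    comp = (slope * pin + icpt) - pout                 # deviation from line
    i = np.argmax(comp >= x_db)                        # first crossing
    return np.interp(x_db, comp[i - 1:i + 1], pin[i - 1:i + 1]), slope

narrow, s_narrow = p1db_input(pin, pout, -20, -19)     # fit only the bottom
wide, s_wide = p1db_input(pin, pout, -20, -13)         # wider fit window
print(f"narrow fit: slope {s_narrow:.3f}, IP1dB {narrow:.2f} dBm")
print(f"wide   fit: slope {s_wide:.3f}, IP1dB {wide:.2f} dBm")
```

    With the narrower fit window, the steeper fitted slope pulls the estimated compression point a couple of dB lower, which is the same mechanism that could explain the kind of discrepancy you are seeing between the Xdb and sweep results.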

    SkkyLee said:
    This is the interesting phenomenon, when the real part impedance is bigger than the optimum load's real part, the xdb loadpull contour is fit to compression point sweep result very well, when the load impedance is 10Ω(it's bigger than the real part of the optimum load which is about 7Ω), the OP1dB is 30.39dBm, which fit to the xdb contour

    One of the reasons there is a discrepancy between your two values of 1 dB compression point output power is the peaking in the power gain curve. This peaking makes the 1 dB compression point output power very sensitive to the small-signal gain used in its estimate. When you change the load impedance from its matched value to one with a real part of 10 ohms, it is likely that the peaking in the power gain curve is no longer present. Intuitively, with the larger real part of the load impedance, my guess is that the output current of your amplifier is smaller, and hence it does not show the non-linear peaking in gain with input power. Therefore, if the load impedance changes the relationship between input and output power so as to eliminate the peak, I would expect the 1 dB compression power to be far less sensitive to the small-signal gain value and, as a result, show consistent results in your Xdb and HB sweep analyses.

    Shawn


© 2025 Cadence Design Systems, Inc. All Rights Reserved.
