I am simulating a Nyquist ADC by applying a slowly varying input ramp, collecting the sampled output code (reconstructed into an analog value by a Verilog-A model), and then plotting it versus the input signal (waveVsWave calculator function), hence obtaining the transfer characteristic of the ADC.
Now I would like to remove the offset and gain error, in order to take the difference between this corrected output and the input and observe the DNL.
Is there any built-in function to do that in the calculator or simple ocean script?
The offset subtraction would be quite trivial: just subtract from the output its value at Vin = 0.
But to find the gain error correction factor "gec", such that Vout*gec is the output characteristic without gain error, the proper Vin points must be chosen: Vin(end) is the Vin corresponding to the last output code step + LSB/2, and Vin(start) is the Vin corresponding to the first output code step - LSB/2.
To explain better, see the example plot of an ADC output characteristic: with reference to the markers M4, M5, M6 and M7, the gec would ideally be calculated as [Vin(M4)-Vin(M5)]/[Vout(M4)-Vout(M5)], where Vin(M4)-Vin(M6)=LSB/2 and Vin(M4)-Vin(M7)=-LSB/2.
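To show what this endpoint calculation amounts to outside the calculator, here is a minimal Python sketch; the function name and the marker values (vin_m4, vin_m5, vout_m4, vout_m5) are hypothetical placeholders for the M4/M5 values located on the characteristic:

```python
import numpy as np

# Endpoint gain-error correction sketch (illustrative only; the M4/M5
# marker values are assumed to have been located beforehand on the
# transfer characteristic).
def endpoint_gain_correction(vout, vin_m4, vin_m5, vout_m4, vout_m5):
    # gec scales the output so its endpoint slope matches the input ramp
    gec = (vin_m4 - vin_m5) / (vout_m4 - vout_m5)
    return vout * gec
```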
M5 and M4 are on the minimum and maximum ADC output codes respectively, so the quantization error diverges on their outer sides.
Therefore, what I am looking for is a way to automatically extract the locations of these M4 and M5 points, without having to export the data to MATLAB or another program and do it manually for each corner/design parameter.
Note that depending on the corner the ADC gain error can differ, so when I sweep a fixed input voltage range it is in principle not possible to just use the INTERCEPT/SLOPE functions of Excel, for example: the regions where the quantization error diverges would start from different input voltages, but those functions would try to minimize the error through the whole input range. Such a method would therefore also require extracting beforehand the region of the characteristic where the quantization error is within 0.5*LSB.
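One way to pre-select that region, sketched in Python under the assumption that the offset has already been removed and the ideal gain is unity (the helper is my own illustration, not a calculator function):

```python
import numpy as np

# Keep only the samples whose residual quantization error stays within
# +/- 0.5 LSB, so that a subsequent line fit is not pulled by the
# clipped/diverging end regions (illustrative helper; assumes offset
# already removed and unity ideal gain).
def linear_region(vin, vout, lsb):
    err = vout - vin                  # residual after offset removal
    mask = np.abs(err) <= 0.5 * lsb   # True inside the linear range
    return vin[mask], vout[mask]
```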
Any suggestion, or idea for different approaches, in case similar automatic functions are not available, would be highly appreciated.
virtuoso version ICADVM20.1
NewScreenName said: But to find the gain error correction factor "gec" such that Vout*gec is the output characteristic without gain error, the proper Vin points must be chosen such that Vin(end) is the Vin corresponding to the last output code step + LSB/2 and Vin(start) corresponding to the first output code step - LSB/2.
I have an alternate approach that you might want to consider in order to compute the gain error between the output values of your A/D and the input samples.
First of all, from my understanding of your computation of the gain error, you are not using the entire output and input characteristics to determine the gain error - you are only using the endpoints. Is that really what you want? For example, if your ADC has some non-linearity, then the average gain error you are computing will not correspond to the gain error at each output code. Would it not be better to use the entire set of output digital words to compute the average gain error? Alternately, might you not be more interested in computing the integral non-linearity (INL)?
In any case, to answer your question: instead of plotting the output and input values against one another and attempting to locate the respective values for your "endpoint" gain calculation, consider two data sets - one consisting of the input samples of the ramp at the times when you apply the sample pulse (i.e., your ADC clock), and the second consisting of the corresponding ADC output digital words converted back to an analog voltage by your Verilog-A model. Hence, you have your input samples versus time (N samples) and the output samples versus time (N samples).
The algorithm follows:
1. Set the x-axis of your input samples and output samples to either time or the step number (1, ..., N).
2. Compute the best line fit to each data characteristic separately to form the sets of slopes and intercepts of (slope_in,intercept_in) and (slope_out,intercept_out).
3. Compute the average gain error as (slope_out)/(slope_in).
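The three steps above can be sketched as follows (a Python illustration, with np.polyfit standing in for the best-fit step; the array names are assumptions):

```python
import numpy as np

# Average gain error from two step-indexed sample sets, following the
# three steps above (np.polyfit plays the role of the line-fit function).
def average_gain_error(in_samples, out_samples):
    n = np.arange(len(in_samples))  # step 1: x-axis = step number
    slope_in, intercept_in = np.polyfit(n, in_samples, 1)    # step 2: line fits
    slope_out, intercept_out = np.polyfit(n, out_samples, 1)
    return slope_out / slope_in     # step 3: ratio of the slopes
```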
Figure 1 details an example computation for a 4-bit ADC and shows how the gain error correction the algorithm produces normalizes the output (ADC output codes) to the range of the input samples. I included the Microsoft Excel workbook with the example should you want to experiment with a set of values from your simulation.
To automate this in Cadence, you may use the function abBestFit() written by Mr. Andrew Beckett, or simply his function abBestFitCoeffs(). The latter can be used to find the slopes of your two data sets. The function is available by searching the Forums, or a version is in the Forum post discussing a question similar to yours at:
I hope I understood your need and this alternative simplifies your problem.
I had to split my reply in two as for some reason it was blocked as spam; here is the second part:
ShawnLogan said:The algorithm follows:
This procedure does indeed do what I wanted to do. My goal, though, was to actually automate the process: suppose I have several corners and/or design points - it would get time consuming and tedious to repeat this for each of them, maybe multiple times, while keeping everything in Cadence would allow me to get a ready-made result at the end of each simulation.
ShawnLogan said:To automate this in Cadence, you may use the function written by Mr. Andrew Beckett abBestFit() or simply his function abBestFitCoeffs()
This is exactly what I was looking for; unfortunately only abBestFit() seems to work fine, while abBestFitCoeffs() on the same waveform (output data set) returns an error:
ERROR (VIVA-3002): expression evaluation failed: val is not legal.
ERROR (VIVA-3002): expression evaluation failed: abBestFitCoeffs(abBestFit(sample(Output_Characteristics -1.39728 0.3 "linear" half_Vlsb )))("plus" 0 t nil ("*Error* plus: can't handle (0.0 + \"model files names")"))
However, I suppose this question should be addressed in the original post where that function is discussed; in the meantime I can extract those coefficients by looking at the intercept and slope of the waveform returned by abBestFit().
Now, one of the main points that prevents me from automating this - and now that abBestFit() is available, maybe the only open one - is the fact that the gain error saturates the output for an unknown number of inputs at the lowest and highest ends of the range (see the example picture of a portion of Vin and Vout versus time: the output is clamped at the minimum possible ADC output because of the gain error, until the input magnitude becomes a bit smaller). This changes the points at which I must clip the output waveform before I can apply abBestFit() to it, making them corner-dependent.
Is there a way in which I could automatically detect the x-axis value x1 at which a waveform stops being constant (about 600 ns in my example picture), and the value x2 at which it becomes constant again? This way I could use those values in abBestFit(clip(output x1 x2)) to get the correct best-fit line.
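In case no built-in exists, a possible approach (sketched here in Python on assumed sample arrays; a SKILL loop over the waveform points could follow the same logic) is to look at where consecutive samples first and last differ:

```python
import numpy as np

# Find where a clamped waveform stops being constant (x1) and becomes
# constant again (x2); tol absorbs numerical noise in the flat regions.
def flat_region_edges(x, y, tol=1e-9):
    changing = np.abs(np.diff(y)) > tol   # True between samples where y moves
    idx = np.flatnonzero(changing)
    if idx.size == 0:
        return None                       # the waveform never leaves the clamp
    x1 = x[idx[0]]                        # last point before y starts moving
    x2 = x[idx[-1] + 1]                   # first point after y stops moving
    return x1, x2
```

The returned x1 and x2 could then be fed to clip() before calling abBestFit().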