I'm trying to simulate a large mixed-signal post-layout extracted (i.e. flat) netlist in UltraSim for functional/timing verification. Previously, when running the simulation from a schematic netlist, these UltraSim accuracy settings were sufficient:
.usim_opt sim_mode=ms speed=5
But now these settings give very strange behavior due to a partitioning problem: the bias node of my CML logic circuits wanders all over the place, and so does the CML amplitude. You can see this in the attached plot.
I believe this is due to different partitions coupling to this node. Increasing the simulator accuracy gets rid of the artifact, but can have a huge impact on simulation speed. These are the options I have identified so far that get rid of the bias drift (netlist sketches follow the list):
1) Set global analog=3 or analog=4 to force more conservative partitioning. Increases simulation time by ~10X, which is too much.
2) Parse the netlist, identify all components that touch the CML bias node, and set sim_mode=a for those components only. This forces everything that touches the bias node into the analog partition. Transient simulation time is only slightly longer than with the default settings above, but initialization time increases by ~8 hours while the ~6000 individual usim_opt statements are processed. The log file says "SFE Parser" during this time.
3) Declare the bias node as a voltage regulator output with a usim_vr statement. Simulation time is as fast as with option 2, but without the long initialization.
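In case it helps anyone reproduce this, here is roughly what the three options look like as netlist statements. All node and instance names below are placeholders, and the inst= scoping on .usim_opt as well as the bare-node form of .usim_vr are from memory, so double-check them against the UltraSim User Guide before relying on them:

* Option 1: more conservative global partitioning
.usim_opt sim_mode=ms speed=5 analog=3

* Option 2: pin every element touching the bias node into the analog
* partition (inst= scoping assumed; instance names are placeholders)
.usim_opt sim_mode=a inst=xtop.xcml_buf1
.usim_opt sim_mode=a inst=xtop.xcml_buf2
* ... ~6000 such statements, one per component on the bias node

* Option 3: declare the bias node as a voltage regulator output
* (vbias_cml is a placeholder node name)
.usim_vr vbias_cml

The last line is exactly what my questions below are about, so treat it as an experiment rather than a recommendation.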
Has anyone used a usim_vr statement like this before? Do you know what it does? Is it a Bad Idea to use it like this?
I have similar problems.
The usim_vr command/statement is a piece of ***.
Partitioning is completely illogical.
* dump the element-cut and node-cut files for partition debugging
.usim_opt elemcut_file=1 nodecut_file=1
* report the resulting partitions by size
*ultrasim: .usim_report partition type=size