While working on a core physical design, do the I/O delays in the SDC file need to be modeled with respect to a virtual clock in order to represent the top-level clock?
If I have a hierarchical clock in the design which, in the top-level chip, would connect to the top-level clock, can the I/O delays instead be modeled with respect to that hierarchical clock, which has a physical existence in the core-level design?
Conceptually, a virtual clock is any clock that does not have sinks within the block you're working on, so when you need to model I/O delays relative to a top-level clock that is not present in the block, a virtual clock is a great way to do it. If instead the clock exists both at the top level *and* has sinks within the block you're working on, you can define your I/O delays relative to that clock directly, and it would *not* be virtual.

However, in this second scenario it is sometimes still advantageous to model I/O delays relative to a virtual representation of the clock, because it gives you the flexibility of defining the virtual clock's latency with a single statement in your SDCs. If you instead model I/O delays with real (i.e., non-virtual) clocks, the I/O clock latency is determined by the insertion delay of the clock tree as observed within the block. Optionally, you can fold the source latency into the I/O delay values themselves, but then your I/O timings are locked to a pre-determined latency value that is hard to adjust later, since changing it requires updating each and every I/O delay value.

Hope this helps,
Bob
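To make the virtual-clock approach concrete, here is a minimal SDC sketch. The port names (DIN/DOUT), the 2.0 ns period, and all delay values are illustrative assumptions, not values from any real design:

```tcl
# Virtual clock representing the top-level clock. Note there is no
# source port/pin argument, so this clock has no sinks in the block.
create_clock -name vclk_top -period 2.0

# Model the top-level clock-tree latency once, in a single statement;
# adjusting this one value shifts all I/O timing referenced to vclk_top.
set_clock_latency -source 0.6 [get_clocks vclk_top]

# I/O delays referenced to the virtual clock (values are assumptions).
set_input_delay  -clock vclk_top 0.4 [get_ports DIN]
set_output_delay -clock vclk_top 0.5 [get_ports DOUT]
```

If you later learn the actual top-level insertion delay, only the `set_clock_latency` line changes; every `set_input_delay`/`set_output_delay` value can stay as-is.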