Today, it is essential to put in place a strong methodology for identifying sources of yield loss during manufacturing. One widely accepted method is to diagnose a representative sample of device failures during manufacturing test. The failing results are aggregated and analyzed, and a Pareto is created showing the highest-frequency failures by one or more of: cell, instance, net, test pattern, metal layer, and layout topology (e.g., nets on M4 with more than five vias).
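The aggregation step above can be sketched in a few lines. This is a minimal illustration, not a real diagnosis flow: the records, attribute names, and values below are invented for the example.

```python
from collections import Counter

# Hypothetical diagnosis records: each failing device yields one or more
# suspect attributes (cell type, metal layer, net). All values are illustrative.
diagnoses = [
    {"cell": "NAND2X1", "layer": "M4", "net": "n_1032"},
    {"cell": "NAND2X1", "layer": "M4", "net": "n_2210"},
    {"cell": "DFFX2",   "layer": "M2", "net": "n_0007"},
    {"cell": "NAND2X1", "layer": "M4", "net": "n_1032"},
]

def pareto(records, key):
    """Count failures per value of `key`, sorted most-frequent first."""
    counts = Counter(r[key] for r in records)
    return counts.most_common()

print(pareto(diagnoses, "layer"))  # -> [('M4', 3), ('M2', 1)]
```

In practice the same aggregation is run per cell, instance, net, pattern, and layer, and the top entries of each Pareto point the failure-analysis lab at the most likely systematic defect mechanisms.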
There are many considerations when deploying diagnostics, including:
Because sequential logic is used in on-chip compression, the diagnostic tool must reason over temporal data. When a failure is observed in the compactor's signature, the tool needs to 'unroll' the failing bit(s), analyzing the data backwards in time to find the offending input pattern. This task can involve examining hundreds or thousands of clock cycles to recover the actual scan bits that detected the failure. The problem is further complicated when multiple failures exist in the scan chains and/or in the logic clouds between scan chains.
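The first step of this back-tracing can be illustrated with a deliberately simplified model: a pure spatial XOR compactor, where each output is the XOR of a fixed set of scan chains. A fail on an output at a given shift cycle then implicates that cycle's cell in every chain feeding the output; a real tool must additionally unroll sequential (temporal) compaction and prune candidates by simulating expected responses. The XOR-tree mapping below is an invented example.

```python
# Assumed XOR tree: compactor output -> list of scan chains feeding it.
CHAINS_PER_OUT = {0: [0, 1, 2], 1: [3, 4, 5]}

def candidate_cells(failing_bits):
    """Map failing (output, shift_cycle) observations back to candidate
    scan cells. With a purely spatial XOR compactor, a fail on output `o`
    at cycle `t` could have come from cell `t` of any chain feeding `o`.
    Sequential compactors require unrolling many cycles on top of this."""
    candidates = set()
    for out, cycle in failing_bits:
        for chain in CHAINS_PER_OUT[out]:
            candidates.add((chain, cycle))
    return candidates

# Two outputs fail at shift cycle 17: six candidate (chain, cycle) cells.
print(sorted(candidate_cells([(0, 17), (1, 17)])))
```

Even in this toy model the candidate set grows multiplicatively with the number of failing bits, which is why multiple simultaneous failures make the diagnosis so much harder.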
What can one do to ensure the best diagnostic results?
Good points, but the data size of scan patterns is huge.
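A back-of-envelope calculation shows why. All numbers below are illustrative assumptions, not figures from the discussion:

```python
# Rough uncompressed scan-data volume for one production test set.
patterns = 10_000        # scan patterns (assumed)
chains = 400             # internal scan chains (assumed)
chain_length = 1_000     # scan cells per chain (assumed)

# Each pattern needs stimulus bits in and expected-response bits out.
bits = patterns * chains * chain_length * 2
print(f"{bits / 8 / 1e9:.1f} GB uncompressed")  # -> 1.0 GB uncompressed
```

And that is per device configuration, before logging per-cycle failure data for diagnosis, which is exactly the volume problem on-chip compression exists to mitigate.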