The majority of ASIC designers today use equivalence checking for netlist signoff prior to tapeout or vendor netlist handoff. As FPGA devices grow to capacities of tens of millions of gates, will equivalence checking become the norm for FPGA flows rather than the exception?
From your perspective, what is preventing you today from running equivalence checking on your FPGA design? Is it the cost of tool ownership? Flow issues or tool availability? Trust that your logic synthesis and place-and-route tools will not generate bad logic? The belief that a bad FPGA caused by a logic bug is not a big deal because you can always reprogram it? Something else?
Please share your views with me on this subject. I have worked extensively in formal verification for FPGAs over the past eight years.
What I am hearing is that this issue is largely a matter of practical efficiency. Debugging the device after programming is becoming too long a turnaround loop to be effective. Equivalence checking at least eliminates the possibility of tool- or script-related failures before you invest energy in lab debug time. It will be very interesting to see what the FPGA users on this forum have to say...
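For readers less familiar with the idea, here is a toy sketch of what combinational equivalence checking does conceptually. This is a hypothetical illustration, not any real tool flow: real checkers use BDDs or SAT solvers rather than exhaustive simulation, but the principle is the same, comparing a "golden" behavioral description against a gate-level reimplementation and reporting any input where they disagree.

```python
# Toy combinational equivalence check (illustrative only):
# exhaustively compare a behavioral "golden" model against a
# gate-level reimplementation over all inputs.
from itertools import product

def rtl_adder(a, b):
    """Golden reference: 2-bit adder described behaviorally."""
    return (a + b) & 0b111  # 3-bit sum

def netlist_adder(a, b):
    """Same adder rebuilt from individual gates, as a synthesized
    netlist might express it."""
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    s0 = a0 ^ b0                          # half adder, bit 0
    c0 = a0 & b0
    s1 = a1 ^ b1 ^ c0                     # full adder, bit 1
    c1 = (a1 & b1) | (c0 & (a1 ^ b1))
    return s0 | (s1 << 1) | (c1 << 2)

def equivalent(f, g, width=2):
    """Brute-force 'miter': collect every input pair where f and g
    disagree. An empty list means the two designs are equivalent."""
    return [(a, b) for a, b in product(range(1 << width), repeat=2)
            if f(a, b) != g(a, b)]

print(equivalent(rtl_adder, netlist_adder))  # → []
```

If synthesis or a script had corrupted the netlist, the mismatch list would pinpoint a failing input vector immediately, before any time is spent in the lab with a programmed part.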