These days, DRC rule deck availability for mainstream EDA tools is not a major issue for customers designing on advanced nodes. All EDA vendors work closely with the foundries to enable the process nodes. The bigger problem is that customers cannot get overnight runtime for full-chip SoCs. At best, to achieve overnight runtime, some advanced-node customers run a DRC job with split decks, usually in about four pieces at 16/14nm. This results in four times the cost of a single run, because each split-deck run requires its own software licenses, hardware, memory, space, and power. Nothing comes free!
That is why we at Cadence started working on a third-generation physical verification tool four years ago, one that would give customers overnight runtime. A few months ago, we introduced Pegasus, the biggest breakthrough in SoC physical verification in more than 20 years.
Pegasus brings to market several key innovations; I will briefly cover the three most significant technologies.
Stream processing is not new to computer science, but this is the first time it has been implemented in a DRC tool. We take the design and convert it into an input stream, and Pegasus starts processing as soon as it reads the data. There is no wait for reading the data onto disk, as there is with the second-generation tools on the market today; Pegasus has done away with the concept of a database. Any customer who transitioned from 28nm to 16/14nm would agree that reading the data to disk took longer than at the previous node. In addition, customers would also note that second-generation tools require high-memory master machines to launch the jobs, whereas Pegasus can use a small machine and launch many jobs for full-chip runs.
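The contrast between load-then-process and stream processing can be sketched in a few lines. This is purely a conceptual illustration of the streaming idea, assuming a toy shape format; the names and rule are hypothetical and have nothing to do with Pegasus internals.

```python
# Conceptual sketch of stream processing: shapes are checked as they
# arrive, instead of loading the whole design to disk first.
# All names and the toy "min width" rule are illustrative only.

def read_shapes(layout_lines):
    """Yield shapes one at a time; nothing is stored in a database."""
    for line in layout_lines:
        layer, width, height = line.split(",")
        yield {"layer": layer, "width": int(width), "height": int(height)}

def check_min_width(shapes, min_width=10):
    """Flag violations as soon as the first shape is read."""
    for shape in shapes:
        if shape["width"] < min_width:
            yield shape

# Simulated input stream; a real SoC design would be vastly larger.
stream = iter(["M1,8,20", "M1,12,30", "M2,9,15"])
violations = list(check_min_width(read_shapes(stream)))
print(len(violations))  # prints 2: the two shapes narrower than 10
```

Because both stages are generators, checking begins on the first shape and memory stays flat no matter how large the design is.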
The second key technology introduced in Pegasus is the data flow architecture. Again, data flow architecture is not new to computer science, but this is the first time it has been used in a DRC tool. We take the rule deck and map the rules onto the underlying computer hardware's data flow architecture. With the data flow architecture implemented, Pegasus processes the rules, and the operations within each rule, concurrently. Second-generation tools can run a few rules in parallel, but the operations within each rule are processed sequentially. That is why we see that even though 80-85% of the rule checks complete quickly, the remaining 15-20% of the rule checks take forever and dominate the overall runtime. This problem is getting worse at each advanced node with the second-generation DRC tools.
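The data-flow idea can be illustrated with a small sketch: each operation fires as soon as its own inputs are ready, so operations belonging to different rules overlap instead of queuing behind one another. The two toy "rules" and their operations below are hypothetical, not Pegasus code.

```python
# Conceptual sketch of data-flow execution: an operation runs as soon
# as its inputs are ready, so independent operations from different
# rules proceed concurrently. Illustrative only; not Pegasus internals.
from concurrent.futures import ThreadPoolExecutor

def size_layer(widths, grow):
    """A toy geometric operation: grow every shape width."""
    return [w + grow for w in widths]

def count_violations(widths, limit):
    """A toy check: count shapes exceeding a limit."""
    return sum(1 for w in widths if w > limit)

metal1 = [8, 12, 9]
with ThreadPoolExecutor() as pool:
    # The first operations of two rules are independent: both start at once.
    grown_a = pool.submit(size_layer, metal1, 2)
    grown_b = pool.submit(size_layer, metal1, 4)
    # Each rule's next operation starts when its own input is ready,
    # without waiting for the other rule to finish.
    rule_a = pool.submit(count_violations, grown_a.result(), 10)
    rule_b = pool.submit(count_violations, grown_b.result(), 10)
    print(rule_a.result(), rule_b.result())  # prints: 2 3
```

In a sequential engine, rule B's operations could not begin until all of rule A's had finished; in the data-flow model only true data dependencies impose ordering.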
Now, with Pegasus we have taken the design and converted it into an input stream, mapped the rule deck to the underlying hardware, and built all of this on a massively parallel pipelined architecture, so we can run a DRC job on hundreds of CPUs and deliver near-linear scalability – something that is impossible for second-generation DRC tools. We have run a very large SoC with Pegasus on 1,000 CPUs and achieved near-linear scalability. This is the first time a DRC tool has scaled to 1,000 CPUs with near-linear scalability. Several customers have told us they can now run top-level full chips early in the design flow.
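Why a sequential tail of rule checks prevents this kind of scaling can be seen from Amdahl's law. The percentages below are the rough ones quoted above for second-generation tools, used only to illustrate the arithmetic; they are not measured Pegasus data.

```python
# Amdahl's-law sketch: a sequential tail of rule checks caps speedup,
# no matter how many CPUs are thrown at the job.

def speedup(parallel_fraction, cpus):
    """Ideal speedup when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cpus)

# If only ~85% of the checks parallelize, 1,000 CPUs barely help...
print(round(speedup(0.85, 1000), 1))   # prints 6.6

# ...while a fully pipelined design keeps scaling nearly linearly.
print(round(speedup(0.999, 1000), 1))  # prints 500.3
```

The arithmetic makes the point: removing the sequential tail, not merely adding rules-in-parallel, is what makes 1,000-CPU runs worthwhile.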
Pegasus is the first DRC tool that is cloud-ready and delivers elastic hardware utilization. Pegasus has a safe mechanism for running on the cloud and built-in intelligence to use CPU resources efficiently. Pegasus can ramp jobs up or down based on CPU core efficiency, and it does not need to acquire all the CPUs before beginning a job. A common problem with second-generation tools today is that they wait to acquire all the CPUs before starting a job, and they will not release those CPUs if another tool needs to run. There are many more capabilities, which I will outline in a future blog on machine learning.
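The elastic behavior described above amounts to a simple scheduling policy: start on whatever cores are free, grow while workers stay busy, and shrink when they go idle. The sketch below is a hypothetical policy function of my own construction, not the actual Pegasus scheduler.

```python
# Conceptual sketch of elastic CPU acquisition for a batch job.
# The thresholds and function are illustrative assumptions only.

def plan_workers(current, available, efficiency, max_workers=1000):
    """Decide the next worker count without waiting for all CPUs up front."""
    if current == 0:                      # start immediately on free cores
        return min(available, max_workers)
    if efficiency > 0.9:                  # workers fully busy: grow if we can
        return min(current + available, max_workers)
    if efficiency < 0.5:                  # workers mostly idle: release CPUs
        return max(current // 2, 1)
    return current                        # efficiency is fine: hold steady

print(plan_workers(0, 64, 0.0))     # prints 64: start on the free cores
print(plan_workers(64, 100, 0.95))  # prints 164: ramp up as cores appear
print(plan_workers(164, 0, 0.3))    # prints 82: ramp down, freeing CPUs
```

The contrast with an all-or-nothing scheduler is the first branch: the job starts on 64 cores rather than blocking until 1,000 are available.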
More details are available on the Pegasus product page.