Metric-driven verification provides a great deal of valuable coverage data, but where are you going to store it all? Merging coverage data from hundreds or thousands of parallel simulation runs can pose a huge bottleneck. One way to avoid that bottleneck is to write to a memory-resident relational database, according to James Roberts, a verification engineer at Oracle who works with SPARC processors.
Roberts spoke at a DVClub Silicon Valley meeting on August 17, 2011, along with his colleague Greg Smith, who described a software-inspired technique that can predict bug arrival rates and verification closure. I wrote a blog post about Smith's talk last week. Sponsored by Cadence, ARM, SpringSoft, and Silicon Elite, DVClub holds lunch presentations for verification engineers in 10 cities worldwide.
Since Oracle is famous for its database technology, it is not too surprising that Oracle hardware verification engineers would use a relational database to handle coverage collection. But you don't have to run out and buy a commercial Oracle database to use the technique that Roberts outlined. It's based on MySQL, an open-source database owned by Oracle that is freely available to anyone. And the methodology, according to Roberts, can substantially reduce your total simulation time, thus reducing simulation CPU farm utilization and cost.
Waiting in Line
In a typical simulation environment - at least at Oracle - thousands of simulations run in parallel on a compute farm and collect coverage data. Because engineers want to know what was covered or not covered by all the simulations together, this coverage data must be merged into some sort of repository. In a typical file-based environment, each simulation writes out a log of its "hit" coverage, and each log is "diff-and-merged" with the repository.
What's wrong with that? In a file-based flow, Roberts said, all the coverage data is stored in one huge file. Yet a "middle of the road design" can exceed 200,000 coverage objects and 300 Mbytes. Diff-and-merge can take over 10 minutes for a single simulation. Only one simulation can hold the file lock at any given time, and the file is completely rewritten after every merge, resulting in "non-stop disk activity." The result: it might take one hour for six simulations to merge coverage data, at a time when thousands of simulations are running.
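To make the serialization concrete, here's a minimal Python sketch of such a file-based merge. The JSON format and names are mine, for illustration - Roberts didn't describe Oracle's actual file format:

    # A hypothetical file-based diff-and-merge: every simulation must take an
    # exclusive lock, read the whole repository, fold in its hits, and rewrite
    # the entire file - which is why the merges serialize.
    import fcntl
    import json

    REPO = "coverage_repo.json"  # the single shared repository file

    def merge_coverage(sim_hits):
        """Diff-and-merge one simulation's hit counts into the repository."""
        with open(REPO, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)       # only one simulation merges at a time
            repo = json.load(f)                 # read every coverage object (~300 MB)
            for obj, hits in sim_hits.items():  # fold in this run's hits
                repo[obj] = repo.get(obj, 0) + hits
            f.seek(0)
            json.dump(repo, f)                  # rewrite the whole file on every merge
            f.truncate()                        # lock is released when the file closes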
The solution, Roberts said, is to use the MySQL database as the repository rather than a flat file. Instead of writing coverage logs, simulations open a TCP socket and make SQL (Structured Query Language) queries to the database. Under this scheme, merges can run in parallel, and individual simulations no longer require a file lock on the entire repository. Only a subset of the coverage data is diff-and-merged, not the entire file.
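As a rough sketch of what a client-side merge might look like under this scheme - the table schema, connection details, and function name below are my own assumptions for illustration, not details from Roberts' talk - each simulation can push its hit counts straight to the server and let SQL do the merging:

    # A hedged sketch of a simulation client, assuming a table such as:
    #   CREATE TABLE coverage (object_id VARCHAR(255) PRIMARY KEY, hits BIGINT);
    import mysql.connector  # pip install mysql-connector-python

    def report_coverage(sim_hits):
        """Merge one simulation's hit counts into the shared database."""
        cnx = mysql.connector.connect(host="coverage-server", user="sim",
                                      password="...", database="coverage_db")
        cur = cnx.cursor()
        for obj, hits in sim_hits.items():
            # The upsert performs the merge on the server side; only the
            # touched rows change, and no client ever locks the repository.
            cur.execute(
                "INSERT INTO coverage (object_id, hits) VALUES (%s, %s) "
                "ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits)",
                (obj, hits),
            )
        cnx.commit()
        cur.close()
        cnx.close()

Locking is left to the database engine, which locks individual rows rather than the whole repository - that's what lets the merges overlap.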
Still Some Bottlenecks
The MySQL approach results in a 3X speedup over a file-based flow, according to Roberts. But there were still some problems when this was first implemented. "You still have thousands of simulations still trying to funnel onto a single disk," he said. Further, the database server was bumping into a wall at around 1,600 statements/second, and handling 20 simulations in parallel was not enough, given that 100 might be waiting. Throttling down simulations so the server can keep up would "constitute failure," Roberts said.
The solution was to store the main coverage database entirely in RAM. This is now feasible - a coverage database might be around 20 Gbytes, and servers these days typically have 64 Gbytes of RAM. The disk is "demoted" to backup storage only. Each simulation is a client, and simulations connect to the server via TCP sockets. Coverage data is transmitted directly to the server with no disk involved.
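Roberts didn't spell out the exact mechanism, but one way to realize this arrangement - again a sketch built on my own assumptions, here using MySQL's MEMORY storage engine with a periodic disk snapshot - might look like this:

    # A hypothetical in-RAM setup: the live table uses MySQL's MEMORY engine
    # so all merges happen in server RAM, and a disk-backed snapshot table
    # serves as the "demoted" backup. (MEMORY tables are capped by
    # max_heap_table_size, which would need raising for a ~20 GB table.)
    import mysql.connector

    cnx = mysql.connector.connect(host="coverage-server", user="admin",
                                  password="...", database="coverage_db")
    cur = cnx.cursor()

    # Live repository, held entirely in server RAM.
    cur.execute("CREATE TABLE IF NOT EXISTS coverage ("
                " object_id VARCHAR(255) PRIMARY KEY,"
                " hits BIGINT NOT NULL"
                ") ENGINE=MEMORY")

    # Disk-backed snapshot; refresh it periodically (e.g. from cron) so a
    # server restart loses at most one interval's worth of merges.
    cur.execute("CREATE TABLE IF NOT EXISTS coverage_backup LIKE coverage")
    cur.execute("ALTER TABLE coverage_backup ENGINE=InnoDB")
    cur.execute("REPLACE INTO coverage_backup SELECT * FROM coverage")
    cnx.commit()
    cur.close()
    cnx.close()

Another option would be to put the server's data directory on a RAM disk; either way, the point is that routine merges never wait on a disk.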
Under this scheme, Roberts said, client access time is reduced from 30 minutes to two seconds - a 900X speedup. With the file-based approach, he noted, it was not unusual for a 2-hour simulation to require 30 minutes of merge time. With virtually zero merge time, a 25% speedup in the simulation farm is possible. "There's no longer a bottleneck in coverage," he said. "The bottleneck is how fast the simulation runs and how many machines you have."
For more information, you can view Roberts' presentation online.