So who is Thad McCracken, and why should you be interested in reading this blog entry? Thad has been a Cadence Core Comp Senior Technical Leader focused on the Encounter Platform for 6 years (essentially a specialized Applications Engineer who bridges the gap between the field and R&D). I sometimes refer to him as “The President and CEO of McCracken Labs” because he has been responsible for some very popular innovations in SoC Encounter: Ostrich and Global Timing Debug. Here is a quick (less than 3 minute) demo of Ostrich and Global Timing Debug:
If the video doesn’t automatically embed, please try here.
Perhaps as interesting as the technology itself is the unique manner in which it was created. Here is an interview with Thad where we discuss the technology itself, and how it went from an idea in his head to functionality available to Encounter users everywhere:
Q: When did you write Ostrich?
Thad McCracken: I wrote Ostrich in my first year or so at Cadence.
Q: Why did you write it?
Thad: At the time, we were doing a lot of timing-closure evals, and using both First Encounter and PKS to do physical synthesis, clock-tree insertion, etc. One of the challenges we had was tuning the various parasitic estimation engines in these tools to correlate (on average) with the signoff extraction engine of choice for the eval. We had a Perl script to do this at the time, which would read two SPEF files, analyze them, and spit out a recommended scale factor. But there was really no way to visualize the effect the scale factor would have when applied, or to look at things like standard deviation...an important measure of correlation. This left a lot of room for error, and a missed opportunity to improve the parasitic correlation of our tools. My intent in writing Ostrich was to make it easy for users to see standard statistical analyses visually. So much can be deduced by a human being from a visual plot...many of those things are lost when the plot is reduced to just one or two numbers.
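To make the idea concrete, here is a minimal sketch of the kind of analysis Thad describes: comparing per-net parasitics from an estimation engine against signoff extraction, then computing a recommended scale factor and the standard deviation of the per-net ratios. The net names and capacitance values are invented for illustration; this is not Ostrich's actual implementation.

```python
import statistics

# Hypothetical per-net capacitance values (pF) from two SPEF files:
# one from the tool's estimation engine, one from signoff extraction.
estimated = {"net_a": 0.90, "net_b": 1.10, "net_c": 2.00}
signoff = {"net_a": 1.00, "net_b": 1.25, "net_c": 2.30}

# Per-net ratio of signoff to estimated parasitics.
ratios = [signoff[n] / estimated[n] for n in estimated]

# The mean ratio is a candidate scale factor to apply to the estimator;
# the standard deviation of the ratios indicates how well the nets
# correlate (a number a single scale factor alone can't convey).
scale_factor = statistics.mean(ratios)
spread = statistics.stdev(ratios)

print(f"recommended scale factor: {scale_factor:.3f}")
print(f"std deviation of ratios:  {spread:.3f}")
```

As Thad notes, a scatter plot of estimated versus signoff values would reveal outliers and trends that these two summary numbers hide.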
Q: How did you go about writing it?
Thad: I initially wrote Ostrich purely in Tcl/Tk, with the intent of it being a utility for internal use only. After presenting it internally, it got adopted by the field very quickly, and customers started wanting to use it as well. It was apparent very quickly that a spef parser written in Tcl was going to be too slow to give to customers, so I rewrote the bulk of the application in C, while maintaining the Tcl shell and Tk GUI. The use-model, commands, etc. didn't change at all in the rewrite...the application just got a lot faster. In this rewrite I also started distributing the utility as a standalone executable, free of requirements to have Tcl or Tk installed.
Q: Had you written C before?
Thad: I had written some C while working as a design engineer, for system-level testing, and to drive hardware I had designed and built. I had never written an application before, or learned how to extend a standard Tcl/Tk shell into a custom application. It was a lot of fun, and a great learning experience for me. I don't fancy myself a software guy though....if a real software developer looked at my code, they'd probably say 'yeah...looks like a chip designer wrote that.'
Q: At what point did you decide to stop working on it and transition it over to R&D?
Thad: As the tool became more widely used by customers and AEs, and more and more enhancement requests came in, it became apparent that I would not be able to continue maintaining Ostrich while also doing my "real job" of being an AE supporting Encounter. I therefore took some time to transition the code officially into R&D. Since that transition, Ostrich is distributed with every shipment of Encounter (as a standalone utility), and is available for use on any platform that Encounter is available on.
Q: Let's talk about Global Timing Debug (GTD). In my view it was a complete overhaul of Encounter's timing debug capabilities. It was revolutionary, not evolutionary. At what point does it become clear that building from scratch is a better approach than refinement?
Thad: Refinement is great when the prevailing method is working, and doesn't need to fundamentally change. Building something new is often necessary when a fundamental change in approach is required. When formulating the idea for GTD, my observation was that EDA was generally doing a good job of keeping up with the physics, runtime, and capacity related challenges of ever-smaller process geometries. We all talk about having our tools ready for 65nm, 45nm, 32nm, etc. However...something that was largely ignored was the problem of helping users deal with the ever-growing design complexity these nodes enabled, without becoming overwhelmed. In the case of timing analysis, timing reports pretty much look the same today as they did 15 years ago....there are just a lot more paths to look at now than there used to be! This left users with a mountain of data...say a huge number of failing paths, and no way to reduce that large number of paths into a quantified, defined list of problems that need to be solved in order to get to timing closure. Without the ability to do that, it's next to impossible to predict how close one really is to closing timing...something that's unacceptable. The goal of GTD was to enable users to wade through the mountain of timing data they have at various points in the timing closure process, and to be able to reduce the data to a concise set of problems (categories) that need to be solved. Some fundamentally new capabilities were needed to enable this.
Q: What things are most unique about GTD?
Thad: The path categorization capabilities in GTD are very unique, and unlike anything provided by any other tool. GTD also has some very cool visualization capabilities in its path histogram, in addition to a variety of techniques for visualizing problems on single paths.
Q: One of the things I like with the path categorization is that you can create categories that can't be expressed with conventional SDCs. Can you give some examples of this?
Thad: There are tons of examples...but a simple one would be a category of paths that all start and/or end with a RAM of a particular *cell type*. Another example is to categorize all the paths that have more than 10 buffers, or for which the delay of all the buffers on the path is more than 3ns. There are tons of ways to define path categories in GTD that don't map to any conventional "path group" definition.
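To illustrate why such categories can't be expressed as SDC path groups, here is a toy sketch (not GTD's actual API) that defines categories as arbitrary predicates over path properties, using the examples Thad gives. The path data and cell names are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class TimingPath:
    start: str
    end: str
    cells: list  # cell types along the path, e.g. "BUFX4", "RAM64X32"
    buffer_delay_ns: float  # total delay of buffers on the path


# Two invented failing paths.
paths = [
    TimingPath("ram1/q", "ff2/d", ["RAM64X32", "BUFX4", "DFFX1"], 0.4),
    TimingPath("ff3/q", "ff4/d", ["DFFX1"] + ["BUFX2"] * 12, 3.5),
]

# Categories as predicates: a path starting at a RAM cell type, a path
# with more than 10 buffers, a path whose buffer delay exceeds 3ns.
# None of these map to an SDC path group, which can only name
# start/through/end points.
categories = {
    "starts_at_ram": lambda p: p.cells[0].startswith("RAM"),
    "more_than_10_buffers": lambda p: sum(c.startswith("BUF") for c in p.cells) > 10,
    "buffer_delay_over_3ns": lambda p: p.buffer_delay_ns > 3.0,
}

for name, predicate in categories.items():
    members = [p for p in paths if predicate(p)]
    print(f"{name}: {len(members)} path(s)")
```

Counting the members of each category over all failing paths is also the raw material for the path histogram Thad describes later in the interview.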
Q: So GTD offers a new paradigm with its category-creation capabilities. What else is innovative about GTD?
Thad: Our single-path-visualization capabilities are unique, and a good example of how we combined incremental improvements with brand-new technology in GTD. Virtually all tools had ways to visualize a single path...on the layout, or perhaps to see relative delays of cells and nets along a path. Our path visualization capabilities expanded on this by showing users a path's traversal of the design hierarchy on a time axis, easy visualization of clock skew and overconstraint problems, and simple click-of-a-button indication of the SDC constraints that apply to a single path.
Q: What else?
Thad: The path histogram is unique in that it allows users to visualize the relative contribution of each path category to the global timing picture. This allows users to make data-driven decisions about which problems (categories) they need to tackle first, or that will yield the most improvement in their overall timing status.
Q: What are you most proud of with GTD?
Thad: I'm very proud of the fact that GTD was developed very quickly, and that it was immediately useful to customers on its first release. We developed with a production-quality release in mind from the get-go, and did not do a beta release, followed by lots of bug-fixing, and an eventual ramp to production. I guess you could say we were working towards the equivalent of "first silicon success." To me, there is no other way to work. Feedback from customers after our initial production release (2005) was overwhelmingly positive...they found it useful right away...and we had very few bugs. This allowed us to focus our R&D resources on continued enhancement of the capability. I'm very proud to have driven that effort. I'm also very proud to have demonstrated that EDA tools don't always have to solve some new physics challenge, or be based on a super-complicated new algorithm, to be useful to customers.
Q: So, what's next for you at Cadence?
Thad: I've recently moved into a Product Engineering role at Cadence, working on our Chip Planning Solutions products. These products are targeted at the very early (pre-RTL, initial-conception phase) of an SoC project...a place where EDA typically does not play. We're in a unique position to reach a new user segment, and to help them be successful in their early technical and economic estimations of a chip, without having to be physical design experts at all. I love the challenge of working with R&D and customers to combine an intuitive use model with integrated design expertise, technology, and IP models into our tool in a way that truly enables better, more accurate estimations in this early phase...when all the most important decisions are being made!

Question of the Day: Can you guess why Thad chose "Ostrich" as the name of the parasitic correlation tool?