At this year's Embedded Systems Conference in San Jose there was a panel titled "Who's Taking over Whom? Is EDA Moving into Embedded or Embedded into EDA?"
One of the analogies Mike McNamara from Cadence used was hardware and software engineers on opposite sides of a river wondering how to construct a bridge to the other side while sharks swim in the water separating them. It reminded me of a song from a group called Point of Grace titled "The Great Divide" (track 8 on the link). Part of the chorus says:
There's a bridge to cross the great divide
There's a cross to bridge the great divide
One practical place I see the missing bridge on a daily basis is the difference in computing platforms used in each domain. The majority of embedded software engineers use Windows to edit, compile, and debug software. Most hardware and verification engineers use Linux machines connected by a network. Even if they have a Windows desktop, they use things like VMware, VNC, X servers, and various other software to access Linux machines from a Windows PC on their desk.
If I step back and try to figure out why the world is this way, I can make some educated guesses; maybe readers can help identify additional reasons.
When the embedded software engineer arrives at work for his first day, he sits down and finds a Windows PC on the desk. His assignment is to write embedded software for some box, board, or device, so his first instinct is to get a CD-ROM, put it in the drive, and install the cross-compiler for the target CPU. He also installs his favorite editor and gets right to work. Later he may add a JTAG box to connect to the target system and debug, or maybe he does this in a lab with a different Windows PC that is provided there. I can certainly understand this behavior; the obvious thing to do if you are given a Windows PC is to just use it rather than ask more questions. Almost every cross-compiler used for embedded software today works on both Windows and Linux, but the majority of usage is probably on Windows. It would be great to have somebody in the embedded software tool business confirm this.
On the other hand, the first thing a hardware engineer does when he shows up for work is to start asking which Linux machines on the network he should log in to and how to set up the environment to run EDA tools. Most companies have scripts to configure tools, pick among multiple versions of each tool, and run whatever combination of tools engineers need. If there is a Windows PC on his desk, it's assumed to be for writing specs with a word processor, doing e-mail, and web browsing. Most EDA tools used for ASIC and SoC design run only on UNIX-based platforms (I mostly say Linux, but I acknowledge there are others in use like Solaris and AIX). There are some EDA tools that run on Windows, especially in the FPGA and PC board areas, but nothing that would constitute a complete chip design flow. In the past there have been Windows versions of popular EDA tools for ASIC design, but they never seemed to get enough momentum to make them sustainable.

The history of chip design is based on UNIX: first Apollo workstations, then Sun, then Linux on x86. I still recall the hype of Windows NT and how all of EDA would move to Windows NT since it was a viable computing platform providing SMP support for applications that required higher performance and reliability (compared to Windows 95), but Windows never really gained much traction in the synthesis, simulation, and verification areas.
The result is a situation I see frequently where software engineers edit and compile on Windows while hardware design and verification engineers work on Linux. As the SoC design process has evolved, the need to build the bridge between hardware and software raises immediate questions about how to structure the computing environment to connect the two groups. This was first apparent as verification engineers started to run software in a logic simulation environment. This required software compiled on Windows to be moved to Linux, and executable files to be processed by some utilities into files that could be loaded into HDL memory models using things like $readmemh in Verilog. Most companies did this using network drives, so software engineers could mount a drive from their PC and put the $readmemh files on a server that the Linux machines could also see. This worked fine for a while, since the only thing passed back and forth was the memory contents for the software program to run.
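To make the conversion step concrete, here is a minimal Python sketch of the kind of utility involved: it turns a raw binary image into the one-hex-word-per-line text format that Verilog's $readmemh reads. The function name, the 4-byte word size, and the little-endian assumption are all illustrative, not from any particular flow.

```python
def bin_to_readmemh(data: bytes, word_bytes: int = 4) -> str:
    """Format a raw binary image as one hex word per line, the text
    format Verilog's $readmemh expects. Assumes a little-endian image;
    name and word size are illustrative."""
    lines = []
    for i in range(0, len(data), word_bytes):
        # Pad the final partial word with zero bytes.
        word = data[i:i + word_bytes].ljust(word_bytes, b"\x00")
        lines.append(f"{int.from_bytes(word, 'little'):0{word_bytes * 2}x}")
    return "\n".join(lines)
```

A five-byte image b"\x01\x02\x03\x04\x05" would produce two lines, "04030201" and "00000005", ready to be placed on the shared network drive for the Linux side to load.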
As more functionality was needed to actually debug the software using something other than a waveform viewer, things became more difficult. A software executable normally has paths embedded in its debug information about where to find the source code. To actually debug a software program, the source must also be copied over to the area visible from Linux, but the debugger will still struggle with the Windows paths. Most debuggers have ways to map the paths and find the new location of the source, but there is always some headache involved in setting it all up.
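GDB, for example, has a `set substitute-path` command for exactly this. The Python sketch below shows the prefix-remapping idea such a setting implements; the function name and the mapping table are illustrative, not taken from any particular debugger.

```python
def map_source_path(path: str, mapping: dict) -> str:
    """Remap a Windows-style source path from an executable's debug
    information to its copied location on the Linux side, the same idea
    as a debugger's path-substitution setting (e.g. GDB's
    'set substitute-path'). The mapping table is illustrative."""
    norm = path.replace("\\", "/")
    for win_prefix, linux_prefix in mapping.items():
        prefix = win_prefix.replace("\\", "/")
        # Windows paths are case-insensitive, so compare accordingly.
        if norm.lower().startswith(prefix.lower()):
            return linux_prefix + norm[len(prefix):]
    return norm
```

With a mapping like {r"D:\proj": "/work/proj"}, a DWARF path such as D:\proj\fw\src\setup.c resolves to /work/proj/fw/src/setup.c on the Linux side.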
As new verification tools like ISX evolve, they put additional stress on the environment. Like a software debugger, ISX utilizes information from software executables to automatically create a verification environment for constrained random testing of the software and the combination of hardware and software. To get a feel for this, consider dwarfdump, a utility that can extract lots of interesting information from the DWARF debugging information in an executable (similar to what a software debugger does).
% dwarfdump -l test.elf
will give output about the source code line numbers and how they map to instruction addresses like this:
.debug_line: line number info for a single cu
Source lines (from CU-DIE at .debug_info offset 286):
&lt;source&gt; [row,column] &lt;pc&gt; //&lt;new statement or basic block
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 36,-1] 0x1cc // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 40,-1] 0x1d4 // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 42,-1] 0x1e4 // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 45,-1] 0x1f0 // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 46,-1] 0x204 // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 47,-1] 0x24c // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 50,-1] 0x258 // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 56,-1] 0x26c // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c: [ 57,-1] 0x284 // new statement
In the case of an executable compiled on Windows, the output may be something like:
.debug_line: line number info for a single cu
Source lines (from CU-DIE at .debug_info offset 11):
&lt;source&gt; [row,column] &lt;pc&gt; //&lt;new statement or basic block
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c: [ 54,-1] 0xbfc003c0
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c: [ 55,-1] 0xbfc0042c // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c: [ 56,-1] 0xbfc00430 // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c: [ 60,-1] 0xbfc00438 // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c: [ 61,-1] 0xbfc0043c // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c: [ 62,-1] 0xbfc00448 // new statement
The source files have DOS paths with things like D:\ as well as backslashes and spaces in the file names. As with a debugger, this path mapping can be done, but it adds extra complexity. Most Cadence tools use Tcl as the command language. Tcl is great, but extra work must be done to allow for the possibility that the software may have been compiled on Windows or on Linux. For example, the Tcl file command provides some help, but it is not really platform independent in dealing with path separators. Anyway, this is just one example of inefficiency caused by the lack of a bridge between the hardware and software engineers.
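For comparison, Python's pathlib can parse a Windows-flavored path on any host OS, which is roughly the logic a Tcl layer would have to implement by hand. A minimal sketch, where the drive_map convention (e.g. mapping "D:" to a Linux mount point) is hypothetical:

```python
from pathlib import PurePosixPath, PureWindowsPath

def windows_to_posix(path: str, drive_map: dict) -> str:
    """Translate an absolute Windows path (as found in debug info from a
    Windows-hosted compiler) into a POSIX path. PureWindowsPath handles
    drive letters, backslashes, and embedded spaces on any host OS;
    drive_map is a hypothetical table of where each drive letter is
    mounted on the Linux side."""
    win = PureWindowsPath(path)
    # Fall back to e.g. "/d" if the drive letter has no explicit mount.
    mount = drive_map.get(win.drive, "/" + win.drive.rstrip(":").lower())
    return str(PurePosixPath(mount, *win.parts[1:]))
```

So a path like D:\Documents and Settings\jasona\Desktop\proj\setup.c, with {"D:": "/mnt/d"}, comes out as /mnt/d/Documents and Settings/jasona/Desktop/proj/setup.c.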
Now, as we enter the era of the virtual platform, things are going to get even more difficult. Consider a situation where a project team has developed a SystemC TLM2 model of the hardware system to be used for embedded software development. The software engineer is going to ask to run it on Windows. This may be possible using the OSCI reference simulator, but in most cases there are other dependencies that complicate things (beyond the fact that the OSCI simulator doesn't offer much in terms of debugging, tracing, and visualizing what is happening in the platform). Maybe there is an IP model for a processor that is not available for Windows. Maybe the platform reuses some verification code or an RTL design block from the HDL simulation environment that cannot run on Windows. Maybe the software engineer asks to run the software debugger on Windows and connect over the network to the simulator on a remote Linux machine shared with the hardware team, so he can stay at his PC, even though he has never logged into Linux before. Maybe the solution is to give the software engineer a VMware appliance with all of the simulation tools and design data installed and ready to run, so there is nothing to set up and it all runs right on the Windows desktop.
Many of these combinations are possible depending on the tools, models, and project details. There are many IT solutions available to bridge these gaps, but my point is that all of them take time and add complexity. The EDA Consortium does have platform road maps for EDA, and certainly Cadence has a supported platform matrix and road maps, but this most basic difference between software engineers and hardware engineers seems to come up over and over again. As soon as anybody mentions a product targeting embedded software engineers, sirens immediately start whooping about how everything must be on Windows (and how embedded software engineers don't pay anything for tools). It's clear this era of separate hardware and software teams is coming to an end. It's also clear that software engineers will not be able to manually iterate through the code, compile, run, debug loop in isolation on the desktop forever. I outlined much of this in a previous post about how virtual platforms are used for embedded software development.
My preference would be for embedded software to adopt a model more like hardware design and verification, but unfortunately I cannot make this decision. Does your company have this gap between Windows and Linux? How do you address it? What kind of bridges do you build? What other changes or solutions do you see on the horizon?