A few weeks ago we had a crisis at our house. My son managed to delete the data from my daughter's USB memory stick. Not only did he delete it, but he did it in such a strange way I have no idea what he could have done. She was not too happy since all of her recent school work was on the memory stick. My best guess is that he deleted it by mistake, recognized what he had done, went into the recycle bin, but instead of restoring the files, tried to manually copy the files back to the stick and ended up with strange links instead of the real directories and data.
Fortunately, I was able to recover some of the data. The best way I found was to use dd to make an image of the memory stick on my Linux machine and mount the image using the -o loop option. Then I used a program called foremost to get back some of the data. I also learned that OpenOffice (or LibreOffice) files, such as .odt, are nothing more than zipped XML files. So when I was able to recover a bunch of .zip files I found out they were actually the .odt files I was looking for.
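The steps looked roughly like this; the device name /dev/sdb and the recovered file name are assumptions for illustration, so check dmesg for the real device on your machine:

```shell
# Make a raw image of the memory stick (device name is an assumption)
$ sudo dd if=/dev/sdb of=stick.img bs=4k

# Mount the image read-only via the loop device to look around
$ sudo mount -o loop,ro stick.img /mnt

# Let foremost carve recoverable files out of the raw image
$ foremost -i stick.img -o recovered

# An .odt file is just a zip of XML, so a recovered .zip
# (file name hypothetical) can be renamed back to .odt
$ unzip -l recovered/zip/00012345.zip
```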
After I did this I realized that using mount -o loop was the same thing I do all of the time to create image files for Virtual Platform simulations of USB memory sticks and SD cards. I have found mount -o loop to be a great way to mount images and create the data I want before I even start the simulator. This makes it very fast to copy files to and from the USB device image or SD card image at the speed of my host machine, and it beats using the Ethernet model to copy data in and out of a Virtual Platform simulation.
One of my previous articles about how to use Linaro file systems actually gave all of the details about how to use dd and mount -o loop to create the SD card image for running a Linaro file system.
After I wrote about how to use an NFS root file system with a Virtual Platform to be able to modify the file system on the host machine and have the changes show up immediately in the simulation, I wondered if it would be possible to use mount -o loop to achieve the same feature without the slowdown associated with the NFS mount.
Unfortunately, I didn't have much luck with this attempt. When I used mount -o loop to mount the file system image on my host machine, then ran the Virtual Platform simulation and mounted the same image file inside it, changes made on one side were not immediately visible on the other. If I made a change from inside the Virtual Platform, I could not see it on my host machine until I unmounted the image and remounted it. Too bad this didn't work out. I don't want to have to unmount and remount to see the changes, so I abandoned my attempt to keep both sides in sync. I will give bonus points to anybody who knows a secret switch or option to make it work.
When I first started building Linux Virtual Platforms I started by learning how to cross compile software such as BusyBox and other utilities and how to make a root file system myself. This is great for a very small system, but is painful as the number of software programs to install grows, especially since some software is not set up for easy cross compiling. I tried some experiments with things like buildroot to automatically configure and cross compile software. It was OK, but somewhat difficult, since learning how to compile software was not my primary goal. I just wanted interesting software to run on the Virtual Platforms I built.
Finding pre-built software like the Linaro file system is great since everything was already compiled and tested together, but it has one problem. What if I want to add more packages? For an Ubuntu-based system like Linaro I can always use apt-get once I start up the simulator and log in to Ubuntu, but it is always faster to do things on the host machine compared to the simulated machine.
My most recent discovery was a pair of tools, qemu-debootstrap and chroot, that can build a root file system for Ubuntu with just the packages I want.
Let's look at the basics of how to create the file system.
First, make a directory for the new file system and populate it. I was preparing for ARM Techcon so I used the latest version of Ubuntu, 12.10.
My host machine is Ubuntu 12.04. If you don't have it already, install debootstrap:
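The commands below are a sketch of how I remember the invocation; qemu-debootstrap comes with the qemu-user-static package, and ports.ubuntu.com is the archive for ARM builds:

```shell
# Install debootstrap plus the statically linked QEMU user-mode emulator
$ sudo apt-get install debootstrap qemu-user-static

# First stage: fetch and unpack the quantal (12.10) base system for armhf
# into a directory named quantal, pulling from the ARM ports archive
$ sudo qemu-debootstrap --arch=armhf quantal quantal \
    http://ports.ubuntu.com/ubuntu-ports
```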
Now take a break as the file system is populated.
Eventually, you should see a message like this:
I: Base system installed successfully.
Next, edit the file quantal/etc/apt/sources.list to contain the following 2 lines and use chroot to update and install more packages:
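The listing itself is missing from this copy of the article; assuming the standard ARM ports archive for quantal, the two lines would look something like this, followed by entering the chroot:

```shell
# quantal/etc/apt/sources.list -- assumed contents for the ARM ports archive
deb http://ports.ubuntu.com/ubuntu-ports quantal main universe
deb http://ports.ubuntu.com/ubuntu-ports quantal-updates main universe

# Enter the new file system; qemu-user-static is what makes the
# ARM binaries inside the chroot runnable on the x86 host
$ sudo chroot quantal
```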
# apt-get update
Now wait some more time while the updates are processed.
Install more packages as you like:
# apt-get install ssh fluxbox gcc
I saw some warnings about locale such as:
/usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
I recommend this command to fix it:
# locale-gen en_US.UTF-8
Add new user accounts:
# adduser jasona
Enable the user to have sudo privileges:
# adduser jasona sudo
Another idea is to edit /etc/hostname to give the machine a new name; otherwise it will be the same as the host machine you created it on.
Now that the file system is ready to try, there are a couple of platform-specific things to fix before trying to boot it.
First, set up the network for the Virtual Platform to have a static IP address by editing the file /etc/network/interfaces to look like this:
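The exact addresses depend on how your Virtual Platform network model is configured; as an assumed example, a static configuration on a 192.168.1.x network looks like:

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
```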
Next, edit the file /etc/resolv.conf to set the DNS server. The DNS info from your host machine may already be in there, which is no good for the Virtual Platform. The file should have just 1 line:
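The address below is only a placeholder; substitute whatever DNS server your Virtual Platform network actually uses:

```
nameserver 192.168.1.1
```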
Now, we need support for the UART tty files in /etc/init. The device names for the UARTs are usually platform specific according to the UART device driver. For the Zynq Virtual Platform, the two UARTs are known as PS0 and PS1.
# cd etc/init
# cp tty1.conf ttyPS0.conf
Now edit ttyPS0.conf to set the device node to ttyPS0 and crank up the speed to 115200.
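On Ubuntu 12.10 the getty jobs are Upstart files, so a ttyPS0.conf derived from tty1.conf would look roughly like this (the exact start/stop stanzas may differ in your copy of tty1.conf):

```
# ttyPS0 - getty on the first Zynq PS UART
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -L 115200 ttyPS0
```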
Now copy the same file to ttyPS1.conf and edit it to specify PS1 instead of PS0.
# cp ttyPS0.conf ttyPS1.conf
We are almost to the end. Exit the chroot, create a new image file for the SD card model, and copy the file system to the mounted image. I found it very helpful to make the username (jasona in my case) the same as one on your host machine to avoid problems with user and group permissions.
A 1 GB file will hold all of the data for this file system.
$ dd if=/dev/zero of=quantal.img bs=1024k seek=1024 count=0
$ /sbin/mkfs.ext2 quantal.img
$ sudo mount -o loop quantal.img /mnt
$ cd /mnt
$ sudo cp -r -p /home/jasona/quantal/* .
$ cd ..
$ sudo umount /mnt
Give it a try using the Virtual Platform for the Xilinx Zynq FPGA.
Make a new configuration file and set the SD card image to quantal.img.
Make sure to edit the Linux device tree file to set the root file system to the SD card.
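In the device tree source, that means pointing bootargs at the SD card block device; a sketch, assuming the first MMC device and the PS UART console:

```
chosen {
    bootargs = "console=ttyPS0,115200 root=/dev/mmcblk0 rw rootwait";
};
```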
I can even compile a program in the simulated machine using gcc.
From the start I realized that I was fetching ARM binaries to install into a directory tree, but it took a few minutes to figure out that the programs I was running in the chroot, such as the shell commands and the apt-get commands, were actually ARM binaries that were working because of the QEMU user-mode emulation for ARM. I have run qemu system-mode emulation many times, but didn't have much experience with user-mode until now. Amazingly, it worked for most packages I installed. On some occasions it crashed during the package installation, but I was able to run my Virtual Platform and use apt-get commands to fix the errors and finish the package installations.
In summary, mount -o loop is very cool, and the ability to create an Ubuntu file system for ARM from scratch using chroot, customize it with just the right packages, and do it all on the host machine is great. It saves a lot of time compared to cross-compiling software yourself.