Many systems on chip (SoCs) have a "camera block" or image signal processor (ISP) that takes raw data from an image sensor and manipulates that data. But ISPs are moving away from their traditional role and turning into "vision subsystems," according to Peter McGuinness, director of multimedia technology marketing at semiconductor IP provider Imagination Technologies.
McGuinness was the keynote speaker at IP Talks!, a three-day program of presentations at the ChipEstimate.com booth at the recent Design Automation Conference (DAC 2014). His half-hour talk was titled "Visuals to Vision: The Changing Role of the Image Sensor." Video of this and the other IP Talks! presentations is available (log-in required; registration is quick if you don't have an account).
Peter McGuinness of Imagination Technologies presents at IP Talks! at DAC 2014
McGuinness first looked at the traditional role of the ISP. It has a set of familiar functions—it takes raw image data from the sensor, and manipulates it to fix defects in the sensor or in the CMOS process. It then uses both hardware and software to produce a good image. "That's the classical role of the ISP, but it's changing," McGuinness said. "It's moving away from producing images and becoming a vision subsystem." As such, he explained, the image sensor is now a source of data for processing later in the pipeline.
Distributing a workload
As McGuinness noted, applications for imaging are expanding rapidly—imaging is no longer just a question of producing nice videos or photos. Automotive electronics provides a good example. Here, imaging is (or will be) used to help drivers back up, avoid collisions, stay in their lane, and recognize street signs. Imaging also has new applications in retail sales, where a store may have an unattended kiosk and cameras may be used for facial recognition.
Applications such as these use workloads with a lot of data parallelism. That means you can take a workload and distribute it in the system, making use of GPUs and CPUs as well as ISPs. The result is better performance for a given power envelope. As McGuinness noted, vision software is heterogeneous, and intelligently combining heterogeneous compute resources enables the most differentiation at the lowest cost.
So what to run where? A CPU is good for non-parallel, serial code that has a lot of branching, McGuinness said. Typically this code will be single-threaded or have a low number of threads. The code needs only small amounts of data but can make critical decisions. A GPU, in contrast, is good for very large data sets with parallel operations. Parallelism occurs not only because of wide-word data sets, but because many operations are related to adjacent pixels.
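McGuinness's "what to run where" heuristics can be sketched as a small dispatcher. This is a hypothetical illustration, not anything from Imagination's platform: the task fields and thresholds (`parallel_fraction`, the one-megapixel cutoff) are made up for the example, but the routing logic follows the rules above — branchy, serial, decision-making code to the CPU; large, per-pixel parallel data sets to the GPU.

```python
# Hypothetical sketch of CPU/GPU work routing; names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class ImagingTask:
    name: str
    pixels: int               # size of the data set the task touches
    parallel_fraction: float  # share of the work that is data-parallel (0..1)
    branch_heavy: bool        # lots of data-dependent control flow?

def choose_unit(task: ImagingTask) -> str:
    """Pick CPU for small, branchy, serial code; GPU for large parallel sets."""
    if task.branch_heavy or task.parallel_fraction < 0.5:
        return "CPU"          # serial, decision-making code
    if task.pixels >= 1_000_000:
        return "GPU"          # wide, per-pixel parallel work
    return "CPU"              # small data: dispatch overhead outweighs the gain

print(choose_unit(ImagingTask("sign-recognition decision", 10_000, 0.2, True)))   # CPU
print(choose_unit(ImagingTask("per-pixel tone mapping", 8_000_000, 0.95, False))) # GPU
```

A real scheduler would also weigh memory-transfer cost and what else is running, but the two axes here — branchiness and data-set size — are the ones McGuinness called out.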
Envisioning a "vision system"
A typical camera subsystem today will have a separate CMOS image sensor chip. An imaging pipeline will take the raw data and output YUV data that can be sent to an applications processor on the SoC. However, McGuinness observed, about all you can do with that YUV data is operations like autofocus and white balance. "You've really removed a lot of information," he said. "There is an opportunity to produce information from the raw data that is usable in a vision system."
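To make concrete what the traditional pipeline hands over, here is the standard BT.601 RGB-to-YUV conversion — the kind of "finished" data McGuinness says the applications processor receives. The point is that by this stage the sensor's raw measurements are gone; only the display-oriented luma/chroma representation remains. (This is a textbook formula for illustration, not code from any camera subsystem.)

```python
def rgb_to_yuv(r: float, g: float, b: float) -> tuple:
    """Full-range BT.601 RGB -> YUV: luma plus two chroma components."""
    y = 0.299 * r + 0.587 * g + 0.114 * b        # luma
    u = -0.14713 * r - 0.28886 * g + 0.436 * b   # blue-difference chroma
    v = 0.615 * r - 0.51499 * g - 0.10001 * b    # red-difference chroma
    return y, u, v
```

White (255, 255, 255) maps to full luma and near-zero chroma, which is exactly why YUV is convenient for display and encoding but poor as an input for vision algorithms that want the underlying sensor data.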
An "integrated" vision system, in contrast, does not wait for YUV data—it takes in raw data from the CMOS sensor. The ISP pipeline performs traditional operations like bad-pixel fixing, tone mapping, and correction for lens distortion. However, statistics for focus, white balance, and exposure are not consumed in the ISP itself but are written to main memory, where they can be picked up by the CPU or the GPU depending on the amount of data involved.
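As a minimal sketch of one such exported statistic, the gray-world white-balance estimate below computes per-channel gains from raw RGB samples. This is a standard textbook algorithm chosen for illustration — the source does not say which white-balance method the pipeline uses — but it shows the shape of the data an ISP could hand to the CPU or GPU instead of processing internally.

```python
# Hypothetical example: a white-balance statistic computed outside the ISP.
def gray_world_gains(pixels):
    """Per-channel gains under the gray-world assumption: the average of a
    scene is neutral gray, so each channel is scaled toward the overall mean."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]  # mean R, G, B
    mean = sum(avg) / 3
    return [mean / a for a in avg]  # multiply each channel by its gain

# A green-tinted sample set yields a gain below 1.0 for green, above for R and B.
print(gray_world_gains([(100, 200, 100), (100, 200, 100)]))
```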
The ISP is moving onto the SoC, but it is also changing in other ways. "The ISP is collaborating with the other compute resources that are on the SoC, and that changes the nature of the game," McGuinness said. "It means you can do customized things in software that you could not do earlier when the imager was on a separate chip and you were just presented with a finished image."
Enabling new functionality
McGuinness identified two "areas of function" that were not available in the traditional camera pipeline. One is computational photography, which makes it possible to take a number of different images and interpolate to produce a single, improved image. Another is the use of camera arrays to provide different viewpoints and to manipulate depth of field. "Essentially, you can create the picture after you've taken the raw image that needs to be processed."
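The simplest form of the multi-image interpolation McGuinness describes is averaging several aligned exposures to reduce noise. The toy function below does exactly that, pixel by pixel; real computational-photography pipelines also align, weight, and reject outlier frames, none of which is shown here.

```python
def merge_frames(frames):
    """Average aligned single-channel frames pixel-by-pixel: a toy form of
    multi-frame merging. Each frame is a list of rows of pixel values."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Two noisy 1x2 frames merge into one smoother frame.
print(merge_frames([[[0, 10]], [[10, 20]]]))  # [[5.0, 15.0]]
```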
McGuinness briefly described the Imagination Technologies PowerVR Heterogeneous Vision Platform and the Raptor camera ISP that it uses. The platform can include CPUs, GPUs, and video encoders. To reduce power, McGuinness observed, Imagination worked hard to reduce the number of memory transactions by allowing the Raptor ISP to send images directly to the video encoder. This kind of optimization, he said, shows that Imagination is "really a systems company, not just an IP block company."
To view the video replay of the McGuinness keynote, click here.
To see a listing of all the available videos from IP Talks! 2014, click here. You will be asked to log in or register. Once logged in, you can also view IP Talks! 2014 video presentations from speakers from ADICSYS, Argon Design, ARM, Cadence, eSilicon, Ferric Semiconductor, GLOBALFOUNDRIES, Methodics, Mixel, Open-Silicon, Sidense, Silab Technologies, Synopsys, and True Circuits.
In a short video interview following the keynote, Sean O'Kane of ChipEstimate.com TV and McGuinness talked about GPU computing, the cost of wearables, and support for always-on applications. To view that video, click here.
Note: A July 2014 "Tech Talk" article at ChipEstimate.com describes Imagination Technologies' PowerVR Rogue GPUs.
Related blog posts
DAC 2014 Keynote: Imagination CEO Charts New Opportunities for Semiconductors
DAC 2014: Semiconductor IP Trends Revealed at "IP Talks!"
CDNLive: Envisioning the Future of IP-Driven System Design