Steve Brown

Community Member


Tags: featured, LLM, cadence.ai, Generative AI, GenAI

Cadence Creates Industry’s First LLM Technology for Chip Design

11 Sep 2023 • 4 minute read

In the nine short months since OpenAI brought ChatGPT (Chat Generative Pre-trained Transformer) and the phenomenal concept of large language models (LLMs) to the global collective consciousness, pioneers from every corner of the economy have raced to understand the benefits, and the pitfalls, of deploying this nascent technology in their own industries. And as it turns out, semiconductor chip design is a perfect candidate.

Cadence is no stranger to generative AI, of which LLMs are one aspect. Our broad generative AI portfolio spans system design from chip to package to printed circuit board (PCB), with the Cadence.AI Platform at its heart. Our current applications focus on chip design optimization, automation, and acceleration in the later stages of implementation. Yet it is in the initial, human-led design process that bugs and bottlenecks are most likely to occur, and it is there that we can put LLM strengths to good use.

Today, we are making public the first robust proof of concept of an LLM in chip design. Another chatbot in the corner of the screen? Yes, but this LLM is much more than a modern-day Clippy. To focus on this LLM's conversation skills would be to misunderstand just how powerful this technology stands to be in solving some of chip design's most pressing challenges: automating the workflow to reduce the errors humans introduce when creating the design specification, the design itself, and all the project documents needed to create a complex semiconductor device.

An Ambiguous Problem

The starting point for the design of any silicon chip is a high-level hardware and software specification, described in a natural language such as English, to capture as much detail from as diverse a set of engineers as possible.

It is then the engineer's job to take this natural-language specification, with all its potential ambiguity and variation in style, level of detail, and so on, and translate it into code written in a hardware description language (HDL) such as Verilog or VHDL. It is also their job (or perhaps that of another engineer) to generate the connection lists that verification tools use to systematically check that everything is as expected. This process is repeated for all the functionality in the chip hardware. While it uses automation, it remains a highly labor-intensive human process: people creating and checking, using tools where they can.
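To make the ambiguity concrete, here is a minimal sketch, in Python with Verilog fragments as strings. The spec sentence and the counter module are invented for illustration; the point is that one English sentence can legitimately yield two different pieces of hardware.

```python
# Illustrative only: the same English spec line admits more than one HDL reading.
spec = "The counter resets to zero when rst is asserted."

# Engineer A reads it as an asynchronous reset...
async_reset = """always @(posedge clk or posedge rst)
  if (rst) count <= 4'd0;
  else     count <= count + 4'd1;"""

# ...while Engineer B reads it as a synchronous reset.
sync_reset = """always @(posedge clk)
  if (rst) count <= 4'd0;
  else     count <= count + 4'd1;"""

# Both are legal Verilog, but they synthesize different hardware.
# This is exactly the divergence that spec and code reviews must catch.
print("or posedge rst" in async_reset)  # True: reset is in the sensitivity list
print("or posedge rst" in sync_reset)   # False
```

Neither translation is wrong; the specification simply never said which was intended, and that decision surfaces only when a reviewer compares the code back against the prose.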

Writing good HDL code from a specification targeting today’s advanced process nodes with stringent power, performance, and area (PPA) requirements would take a single engineer years to complete. A team might achieve it in months. Yet teams are becoming stretched thin as the semiconductor skills gap continues to grow—with a projected shortfall of 67,000 technicians, computer scientists, and engineers in the United States by 2030, according to the Semiconductor Industry Association.

As a result, design, creation, and verification processes have seemingly plateaued—with more bugs surviving to final silicon. Design teams must mitigate these bug escapes in software, disabling buggy features in an often futile attempt to avoid a hugely expensive design respin.

Industry’s First Chip Design GPT

Figure 1: LLM Combining Documents with a Chip Artist

Today, our Cadence ChipGPT LLM proof of concept focuses on the design-cleaning process. Once the first draft of the HDL description of the chip is created, engineers can use the LLM chatbot to interrogate the design, validate the design against the specification, explore and rectify issues, prompt analysis tasks, and receive explanations in their natural language. They can conduct spec reviews, code reviews, test reviews, and change-management reviews. This can save hundreds of hours of individual engineering time and hundreds of group meetings for specification and code reviews, and it can remove many bugs that previously went uncovered until the ramp-up to regression verification.
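The shape of such a review session can be sketched as follows. This is a hypothetical illustration only: `ChipDesignLLM`, its methods, and the file names are invented for this sketch and are not a real Cadence API.

```python
# Hypothetical sketch of a design-review chat session. ChipDesignLLM and its
# methods are invented names for illustration, not a real Cadence interface.
class ChipDesignLLM:
    def __init__(self):
        self.documents = {}

    def load(self, name, text):
        """Ingest a spec or design file so prompts can be answered against it."""
        self.documents[name] = text

    def ask(self, prompt):
        """A real system would run retrieval plus LLM inference here; this stub
        only shows the workflow shape by reporting what context is loaded."""
        return f"{prompt} [answered against {len(self.documents)} document(s)]"

session = ChipDesignLLM()
session.load("design_spec.md", "FIFO depth shall be 16 entries...")
session.load("fifo.v", "module fifo #(parameter DEPTH = 8) ...")
print(session.ask("Does the FIFO depth in fifo.v match the design spec?"))
```

The value of the workflow is in the loading step: because the specification and the HDL sit in the same context, a single prompt can cross-check one against the other instead of a reviewer doing it by hand.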

Figure 2: LLM extension to Cadence.AI Platform

The essential usage of the Cadence.AI LLM is to load in the architecture specifications, design specifications, integration connection specifications, and the design itself. From there, users can issue prompts such as "list the names of irregular nets" or "list all possible irregular pins," and can use the LLM for testbench hookup automation, tool script auto-completion, and RTL code auto-completion.

Cadence also recognizes the need to maintain the highest levels of data security when giving AI algorithms access to confidential IP. Our LLM implementation runs entirely on-premises, with all data stored and processed in the Cadence.AI platform behind the enterprise firewall. The LLM processes run on the customer's server infrastructure, whether CPU- or GPU-based.

We intend to grow the Cadence.AI platform’s LLM capabilities as the project evolves, including potentially expanding the LLM to enable the generation of verified HDL code from a natural language specification in an IP-protected way. As LLMs are trained on vast amounts of data in natural languages such as English, they are spectacularly good at reading, evaluating, and summarizing information intended for humans. When applied to this first area of cleaning designs and specifications of errors, the Cadence.AI platform has demonstrated the value of new workflow automation that can reduce the engineering time required by an order of magnitude while finding issues introduced through human ambiguity much earlier in the project.

Figure 3: Renesas LLM proof-of-concept perspective

The Cadence ChipGPT proof of concept is the first step in what is likely to be a long process of deploying LLMs in chip design. Yet customers using the JedAI platform have already demonstrated hugely promising results with this proof of concept, significantly reducing the time from specification to final design and gaining greater design control than ever before.

Read what Forbes has to say about Cadence.AI LLM

Learn more about how Cadence uses generative AI in its products

