Reela Samuel

Unraveling Generative AI Risks in Chip/System Design

19 Jul 2023 • 7 minute read

When masterminds discuss ideas, possibilities, and perspectives, their collective intelligence unleashes a creative synergy that can lead to groundbreaking innovations and solutions. The CadenceLIVE Silicon Valley event's Generative AI Panel offered attendees an opportunity to tap into this creative synergy. For those who missed the panel discussion, this is our second post in a four-part series covering the panel.

In the first post in this series, we covered the panel’s opinion about how integrating generative AI capabilities in EDA tools is transforming chip design.

Moving on to the next question, moderator Bob said:

“If ChatGPT hallucinates a fact somewhere, you can double-check it, and you might be okay. But if a ChatGPT equivalent hallucinates different elements of a circuit, that could be a much bigger issue.”

He said he was intrigued by the philosophical implications of how such factors influence the application of AI in chip development, and he tossed his second question to the panel.

Handling possible errors and their impact on chip development

Paul volunteered to answer this. He acknowledged Chris’s earlier point that the problem is probabilistic in nature. Paul expanded on this idea, pointing out that there are numerous such problems where AI can significantly enhance productivity. For instance, it is feasible to use AI to fine-tune settings in the place-and-route (P&R) tool to achieve better power, performance, and area (PPA), but it is not advisable to use generative AI for gate synthesis, where an accuracy rate of only 90% is simply not good enough. AI can also assist in identifying the root causes of bugs. So there is ample scope for this first level of application.
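To make the P&R tuning idea concrete, here is a minimal sketch of a settings search over tool knobs scored by a PPA cost function. The knob names, the run_pnr() interface, and the weights are hypothetical placeholders for illustration, not an actual Cadence API; a production flow would use a far more sophisticated optimizer.

```python
# Hypothetical sketch: searching P&R tool settings for better PPA.
# run_pnr() and the knob names are illustrative placeholders, not a real tool API.
import random

KNOBS = {
    "placement_effort": ["low", "medium", "high"],
    "clock_gating": [True, False],
    "target_density": [0.6, 0.7, 0.8],
}

def run_pnr(settings):
    """Stand-in for a P&R run; returns (power_mW, delay_ns, area_um2)."""
    rng = random.Random(str(sorted(settings.items())))  # deterministic fake results
    return rng.uniform(5, 10), rng.uniform(0.8, 1.2), rng.uniform(1e5, 2e5)

def ppa_cost(power, delay, area):
    # Weighted scalar cost; lower is better. Weights are arbitrary for illustration.
    return 0.4 * power + 4.0 * delay + 0.2 * (area / 1e5)

best_cost, best_settings = float("inf"), None
for _ in range(20):  # plain random search; a real flow might use a learned or Bayesian optimizer
    settings = {knob: random.choice(options) for knob, options in KNOBS.items()}
    cost = ppa_cost(*run_pnr(settings))
    if cost < best_cost:
        best_cost, best_settings = cost, settings

print(f"best cost {best_cost:.3f} with settings {best_settings}")
```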

Paul continued, responding to an earlier comment regarding intellectual property (IP). He finds the question intriguing: there is a significant distinction between training models on publicly available data and training them on proprietary knowledge or insights considered part of one’s IP. He believes there are two perspectives to consider. First, partially or minimally trained technologies can be provided, allowing customers to train them further with their specific needs and proprietary information. This approach keeps the training knowledge with the customer and ensures their particular use cases and operations are effectively addressed.

Second, as Cadence CEO Anirudh Devgan mentioned during his keynote, running the tools daily can accumulate massive amounts of data over time. This data is consolidated within the Cadence JedAI Platform, an AI-enabled big data analytics platform, providing valuable training data for customers. Cadence has also developed numerous in-house test cases and designs that it has permission to use for product development and refinement, and it leverages these internal suites to test and fine-tune its algorithms. Using what it already has in-house, along with the privileges provided by the ecosystem, Cadence can pre-train a default AI system as a version 1.0 to deliver to customers, who can then build on that foundation according to their specific requirements. This approach is not fundamentally different from what Cadence is already doing; it is simply applied for a different purpose.
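As a rough illustration of the “pre-trained version 1.0 plus customer fine-tuning” idea, the sketch below uses PyTorch with a toy model and synthetic data standing in for a vendor-shipped base model and a customer’s proprietary dataset. None of this reflects Cadence’s actual implementation; it only shows the general pattern of freezing most of a pre-trained model and adapting the rest on data that stays with the customer.

```python
# Hypothetical sketch of vendor pre-training followed by customer fine-tuning.
# The model, data, and training recipe are toy placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the "version 1.0" model shipped pre-trained on vendor-side data.
base_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

# Proprietary customer data; in this scheme it never leaves the customer's environment.
features = torch.randn(256, 16)
targets = torch.randn(256, 1)
loader = DataLoader(TensorDataset(features, targets), batch_size=32, shuffle=True)

# Freeze the early layers so most of the pre-trained knowledge is retained,
# and fine-tune only the final layer on the customer's data.
for param in base_model[0].parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in base_model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.MSELoss()

for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(base_model(xb), yb)
        loss.backward()
        optimizer.step()
```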

Chris added that large language models (LLMs) offer a unique opportunity to analyze data in different ways. While they excel at language-related tasks, they can also learn structures, abstractions, and an internal grammar from massive datasets, enabling valuable insights and inferences. Chris continued that while GPT-4 may not apply directly to chip databases or other non-language contexts, training these models on massive datasets unlocks their power to comprehend and capture inherent structures, from grammar to storytelling. This understanding can be applied to constructing large sequence, 2D, or higher-dimensional models, making them highly predictive and facilitating reasoning based on the characteristics of specific data. Thus, the value of large models extends beyond language itself.

Prabal sparked enthusiasm among the audience by posing the question, “What makes a Picasso a Picasso? Am I referring to the artist, or am I referring to the work on the canvas?” He noted that machine learning tools enable us to understand the mapping process within a given dataset, uncovering its structures and representations. Today, we struggle to truly comprehend the methodologies and processes involved in creating a Picasso painting, but with machine learning tools, it will be possible to unravel that intricate mapping process by analyzing the data. This entails inferring the structures and representations of a design, as Chris mentioned. What’s particularly fascinating is that super-linear learning can be achieved with access to a sufficiently extensive corpus, whether internal or, ideally, encompassing the entire world’s corpus. In other words, it will be possible to generate chip designs that rival those of the best designers, incorporating their finest ideas. While intellectual property concerns exist, the potential is immense.

Igor offered his view on the matter of applications. He noted that, until now, the most remarkable success stories have centered on five primary types of data: text, images, video, sound, and code. You are in a favorable position if you are contemplating applications based on these data types. And as long as you can validate the outcome, perhaps even without detailed verification, you are still on the right track. In the case of modifying a circuit, for example, an equivalence checker that can swiftly assess whether the modified design is equivalent to the original is invaluable: even if the generative tool produces a correct result only one out of ten times, that can still be acceptable, because the checker filters out the incorrect candidates.
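Here is a tiny sketch of the generate-then-verify pattern Igor describes: keep sampling candidate circuit rewrites and accept only those that pass an equivalence check. The generate_rewrite() and is_equivalent() functions are placeholders standing in for a generative model and a formal equivalence checker, not real EDA tool calls.

```python
# Hypothetical sketch: accept a generated circuit rewrite only if an
# equivalence check passes. Both functions below are placeholders.
def generate_rewrite(netlist: str, attempt: int) -> str:
    """Stand-in for a generative model proposing a modified netlist."""
    return f"{netlist}_candidate_{attempt}"

def is_equivalent(original: str, candidate: str) -> bool:
    """Stand-in for a formal equivalence check; here roughly 1 in 10 passes."""
    return hash(candidate) % 10 == 0

def optimize_with_check(netlist: str, max_attempts: int = 50) -> str:
    for attempt in range(max_attempts):
        candidate = generate_rewrite(netlist, attempt)
        if is_equivalent(netlist, candidate):
            return candidate      # only a verified candidate is ever accepted
    return netlist                # otherwise fall back to the original design

print(optimize_with_check("adder_netlist"))
```

Even a low hit rate from the generator is workable under this pattern, because the cost of checking is small and only verified candidates ever reach the design.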

However, Igor continued, once you step outside these two boundaries, such as when dealing with other kinds of data or incorporating physics, it becomes essential to consider transfer learning. While some physics effects may be described in text, ChatGPT might offer an incorrect response to a physics-related question because it lacks a model of physics. Consequently, on the verification front, Igor anticipates an increase over the next two years in verification techniques that assess the outcomes of generative AI, both within and outside EDA. This trend will extend beyond EDA and become prevalent across the various layers of the technology stack.

Paul shared his thoughts on Prabal’s point about AI potentially leveling the playing field between less experienced engineers and more experienced, potentially more talented ones. Paul believes it is not a fair comparison to pit highly experienced and talented individuals against younger, less experienced ones who are using AI. What happens, he thinks, is that experienced individuals leverage AI to free up their cognitive resources. This allows them to advance to a higher level, focusing more on intricate, difficult-to-detect bugs or exploring new architectural possibilities. While there may be a temporary leveling effect, the more experienced individuals will eventually emerge on a new plane of advancement and abstraction, effectively utilizing AI. It is a process of reinventing ourselves as humans at this new level of enlightenment and abstraction, where AI is a tool we leverage, rather than the scenario we see today.

Paul added that he strongly believes AI will not eliminate the need for humans in the loop. Instead, it will enhance human capabilities and enable us to make meaningful changes. AI can enable us to create chips that were previously impossible to manufacture and achieve unprecedented levels of sustainability and performance. It acts as a catalyst, empowering humans to accomplish tasks and advancements that were previously unattainable.

Rob expanded on Prabal's point regarding the current state of engineering and how the models build on it. He thinks these tools will focus on optimizing cycle time, enabling efficient analytics and measuring success against specific criteria. Rob added that although common practice is to implement a design through a P&R flow for PPA, there are additional aspects that designers are now aware of. With micro-architectural knowledge, we can explore techniques such as memory placement and butterfly connections, which the algorithm alone may not capture. Consequently, we monitor wire length and other metrics derived from the database to ensure optimal results. While machine learning (ML) emphasizes a few key aspects of PPA, it does not guarantee that the best solution is achieved; determining the ideal memory latency, connectivity, and other factors requires additional analysis. He continued that these tools are expected to provide more flexibility, enabling us to develop new analytics and measure success against specific criteria. From his perspective, designing an interconnect and designing a CPU present distinct challenges, and training AI to comprehend these challenges goes beyond mere performance metrics.

Can generative AI-based tools serve as a catalyst for individuals interested in pursuing circuit or chip design, potentially addressing the talent shortage in this industry? Furthermore, can these tools enable the creation of diverse chips, expanding the possibilities in chip design? Read the next post Transformative Potential of Generative AI: Alleviating Talent Shortages and Diversifying Chip Design (part 3) to learn more about it, as well as Are There Pitfalls to Embracing Generative AI in Chip Design? (part 4).

If you missed the chance to attend the AI Panel discussion at CadenceLIVE Americas 2023, don’t worry; you can register on the CadenceLIVE On-Demand site to watch it and all the other presentations.

Learn more about the Cadence Joint Enterprise Data and AI (JedAI) Platform, an AI-enabled big data analytics platform facilitating smarter design optimization and enhanced productivity.

