Reela

Community Member

Are There Pitfalls to Embracing Generative AI in Chip Design?

13 Aug 2023 • 6 minute read

AI Panel Discussion

Every new technology comes with a learning curve, some steeper than others, and AI is no different. The benefits and possibilities seem endless, but as we all move down this path, a healthy dose of double-checking both our own work and the AI algorithm's output is good practice and may mitigate the technology's pitfalls. In this final post of the series, the panel discusses the potential pitfalls of generative AI in chip design and shares final takeaways for chip design engineers.

Moderator Bob O'Donnell posed this important question to the panel:

What are the potential pitfalls of applying generative AI to chip/system design?

Paul responded first, stating that generative AI operates probabilistically, as Chris had pointed out earlier, which means it may not be suitable for situations requiring deductive or analytical reasoning. Additionally, he added, there is a risk of placing too much trust in AI's outputs because of that probabilistic nature: relying on AI for guidance is convenient, but it may not always provide the best course of action. This raises societal concerns, such as the need to verify the accuracy and ethical implications of the information generated. These complex questions go beyond technical aspects and require collective exploration and resolution as a civilization.

According to Paul, generative AI will replace specific jobs, but it will also prompt human adaptation. Throughout history, there have been jobs that have become obsolete. Work is continually changing, and AI will inevitably bring about significant changes. Nonetheless, rather than completely replacing humans, AI is more likely to catalyze human involvement and collaboration, emphasizing the importance of humans in the loop rather than their outright replacement.

Chris stated that we should prioritize higher-level tasks that can advance society and individuals. While generative AI such as ChatGPT is intriguing for its versatility and its ability to provide answers on various topics, its real-world applications are likely to become more specialized. The success of generative AI will depend on measuring its results and task-oriented efficiency with traditional methods. The challenge is applying it to the most interesting problems; failure to do so could lead to unfulfilled expectations and a limited impact on people's lives.

Igor expressed some concerns about teaching models how to perform specific, pointed skills: in a sense, it is simultaneously possible and impossible. One way to address this is through augmentation, such as integrating Wolfram Alpha with ChatGPT to handle arithmetic, formula manipulation, and computing distances between cities. Augmentations are crucial for certain applications, much as skills can be loaded on demand in The Matrix. However, knowledge in language models is represented globally rather than locally: to improve performance on a particular task, or on several tasks, the entire model's weights must change. This lack of localization is a significant limitation when trying to load modules that teach models specific skills effectively.
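The augmentation idea Igor describes can be sketched as a simple router that hands arithmetic to a deterministic tool instead of asking the model to compute it. This is a minimal illustration, not any specific product's API; the function names and the stub standing in for the language model are hypothetical.

```python
import re

def calculator(expression: str) -> str:
    """A deterministic arithmetic tool (a stand-in for something like Wolfram Alpha)."""
    # Only evaluate strings that are pure arithmetic, so eval() is safe here.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("not a pure arithmetic expression")
    return str(eval(expression))

def answer(query: str, llm=lambda q: "paraphrased answer") -> str:
    """Route arithmetic questions to the exact tool; send everything else to the model."""
    match = re.search(r"what is ([0-9+\-*/(). ]+)", query.lower())
    if match:
        return calculator(match.group(1).strip())
    return llm(query)
```

The point of the sketch is the division of labor: the probabilistic model handles open-ended language, while a skill it cannot reliably internalize is delegated to an external, exact component.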

Rob commented that verification is crucial in chip design, even with experienced engineers. As reliance on AI increases, the verification process becomes more complicated, to the point where it may be hard to carry out without AI systems checking other AI systems. This raises the concern of whether the results will be mediocre or nonsensical, or whether we are genuinely pushing the boundaries forward.

Prabhal responded that while there has been ample discussion on the topic, the principle of "garbage in, garbage out" still holds. In situations where accuracy is crucial, expertise is essential, and a combination of generative models and verification becomes necessary, a theme that has been emphasized repeatedly. Fortunately, in the realm of chip design, it is comparatively simple to verify the correctness of a design: we have tools to construct designs correctly and to validate their accuracy. Therefore, if the representations are correct, there might be a greater chance of success than when dealing with the vast amounts of human-written language on the web and similar sources.
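The generate-and-verify pattern Prabhal alludes to can be sketched with a toy example: a probabilistic generator proposes candidates, and a deterministic checker (the analogue of a design-rule or connectivity check) accepts or rejects them. Everything here is hypothetical and simplified, including the "netlist" structure and the rule being checked.

```python
import random

def verifier(netlist: dict) -> bool:
    """Deterministic check: every declared net must be driven by exactly one source."""
    drivers = netlist.get("drivers", {})
    return bool(drivers) and all(len(srcs) == 1 for srcs in drivers.values())

def generate_candidate(rng: random.Random) -> dict:
    """Stand-in for a probabilistic generator: sometimes emits an invalid design."""
    if rng.random() < 0.5:
        # Invalid: net "rst" is driven by two sources.
        return {"drivers": {"clk": ["pll"], "rst": ["por", "dbg"]}}
    return {"drivers": {"clk": ["pll"], "rst": ["por"]}}

def generate_verified(seed: int = 0, max_tries: int = 10) -> dict:
    """Keep sampling until the deterministic checker accepts, or give up."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = generate_candidate(rng)
        if verifier(candidate):
            return candidate
    raise RuntimeError("no valid candidate within budget")
```

The loop captures the panel's recurring theme: the generator may be unreliable, but an exact verifier downstream filters its output, which is easier in chip design than in open-ended language tasks.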

AI Panel

Bob drilled down further on this topic with follow-up questions.

Is having specialized AI models for verification, circuit creation, system design, and other tasks beneficial? Does customization matter at a local level? Does the distinction between analog and digital impact deployment considerations?

Prabhal led off the responses: "Many advancements in AI are heavily influenced by discovering the most optimal way to organize and structure information and by utilizing a suitable network architecture." He cited Google's research on placement as an example, which involves encoding the relationships between macro cells and other components. It remains to be seen whether one architecture can efficiently handle all tasks. During Cadence's early days, an open-interface ecosystem was established, resulting in multiple acquisitions. A comparable situation could arise again, where open interfaces aid in creating tools with unique capabilities, structures, and networks that could eventually be synthesized.

Paul added to Prabhal's comments that there will be a significant amount of shared infrastructure for building blocks, components, and compute data platforms.

Chris added that a fascinating duality exists in how we perceive the role of large language models. On the one hand, they serve as integration platforms, enabling natural language communication with users and facilitating the incorporation of various components. They become the central matrix where multiple functionalities, such as Wolfram Alpha, search engines, and other applications, can be effortlessly integrated. This approach works best when the communication between these components occurs in relatively short text sequences.

On the other hand, these same language models can be powerful tools for specific verification or architectural exploration tasks, acting as deep experts within a larger structure. These distinct roles will coexist and evolve in tandem.

Key Takeaways from the Panel

AI Panel Discussion CadenceLIVE SV

Igor emphasized that the technology landscape is evolving rapidly, with new ideas emerging monthly, so none of the negative aspects mentioned here should be considered definitive; there are ways to overcome them, and advancements can occur even within a span of two days. Nevertheless, he hoped all of this information would prove helpful.

Chris expressed concerns about the significant privacy issues associated with chip design databases, which are highly valuable and require strict protection; entrusting them to a centralized location raises doubts. He also emphasized the need for specific, focused chip functions rather than a general solution. Balancing specialization against general purpose raises the question of how to achieve a high level of privacy and exclusivity while avoiding security risks during the building process.

Paul emphasized the need for fearlessness. He recalled Tom Beckley urging everyone in his CadenceLIVE keynote to courageously venture into this new world. According to Paul, this transformation will enable us to accomplish unprecedented feats, and despite the possibility of encountering resistance, we should fearlessly pursue and fully embrace the opportunity. He acknowledged, however, that change can be challenging, and that we must alter our approach and mindset to adapt to advancements in AI. He gave the example of not using predictive text on his phone, highlighting the importance of consciously adopting new technologies. He concluded by urging everyone to collectively embrace this change and act without hesitation.

Prabhal's takeaway was to pursue the ambitious agenda and vision while not forgetting the Gartner hype cycle: he noted we should be mindful that the trough of disillusionment may arrive soon.

Rob expressed his alignment with the other panelists' views, emphasizing the rapid pace of progress. He noted the need to explore novel approaches to tackle specific challenges and to harness technology to enhance our efficiency and drive significant design advancements, particularly in space exploration.

If you missed the chance to attend the AI Panel discussion at CadenceLIVE Americas 2023, don't worry: you can register at the CadenceLIVE On-Demand site to watch it and all the other presentations.

You can also read the previous series posts here:

Part 1: Revolutionizing Product Development and User Experience: The Transformative Power of Generative AI

Part 2: Unraveling Generative AI Risks in Chip/System Design

Part 3: Transformative Potential of Generative AI: Alleviating Talent Shortages and Diversifying Chip Design

