EDA in the cloud is on the cusp of mass adoption. Semiconductor companies large and small are embracing SaaS EDA design and discovering significant productivity and scalability benefits in doing so.
This wave of cloud adoption is being driven by a marked increase in chip complexity, as the industry moves towards advanced nodes and pursues ever-greater power, performance, and area (PPA), higher bandwidth, and lower latency. Today’s advanced applications such as 5G, automotive, high-performance computing (HPC), and artificial intelligence (AI) require an exponential increase in compute capacity to design and bring next-generation chips to market.
This is compounded even further when designing on the latest process nodes and with advanced packaging techniques. As we head towards 3nm and below, compute infrastructure requirements increase by multiple orders of magnitude, which in turn drives demand for more advanced chips. It’s a virtuous cycle, and it’s easy to see why EDA in the cloud is rapidly becoming a necessity – even for companies with near-unlimited on-prem capacity.
Cadence has been leading this journey of EDA in the cloud, from VCAD in the early 2000s to our 2018 expansion to Cadence Cloud and then the CloudBurst SaaS platform in 2019. Traditionally, it’s been startups and small to medium-sized companies that have benefitted most from these products – those with immediate, short-term projects or peak usage needs and without their own on-prem or hybrid EDA infrastructure.
Yet when we launched the Cadence OnCloud SaaS platform for system design and analysis in early 2022, we saw unprecedented levels of interest from larger companies looking for ways to shift many of their EDA workflows to the cloud. These were companies designing very complex chips on advanced process nodes and discovering that they were pushing the limits of their on-prem capacity – resulting in bottlenecks and unpredictable design cycles.
In many cases, these companies’ IT organizations had carefully calculated the server, storage, and network capacity they needed, then added what they believed to be a respectable overhead in order to future-proof their operations. What they couldn’t have foreseen is the incredible demand for compute that advanced use cases now require.
A few years ago, their only option would have been to physically add more servers – a process of specification, purchase, installation, and provisioning that would inevitably take months. Yet with a hybrid approach to EDA in the cloud that augments on-prem equipment with cloud compute, these companies now have access to almost unlimited capacity for peak usage or specialized compute needs. And just as that capacity can be switched on almost instantly, it can be switched off just as easily.
It’s not just a question of EDA in the cloud providing the capacity to handle advanced, hugely complex chip designs. Companies are discovering that the performance of their current EDA tools increases by an order of magnitude, too – everything just works that much faster. With the latest generations of processors often available instantly, and only, in the cloud, they’re able to increase engineering productivity and slash costs and project timelines. Even a 10 percent reduction in verification time can accelerate time-to-market considerably, resulting in millions of dollars saved.
This is particularly true for startups and small enterprises without their own infrastructure. For those who could otherwise never have afforded the compute required to realize their designs, the ability to run massively compute-intensive workloads in the cloud is nothing short of a superpower.
Take AeroDelft, a student-led team from the Delft University of Technology with one mission: developing the world’s first liquid hydrogen-powered aircraft. AeroDelft is using our Fidelity computational fluid dynamics (CFD) software to simulate and investigate the risk of hydrogen ignition at high release pressures as well as the external dynamics of the aircraft.
With only personal laptops at their disposal, AeroDelft couldn’t hope to run these compute-intensive simulations locally. By running instances of Fidelity CFD in Cadence OnCloud, they were able to access the required compute and capacity through their laptops – with the heavy lifting performed in the cloud. The team is now well on its way to achieving its goal of flying on liquid hydrogen.
EDA in the cloud has another key benefit – the flexibility to collaborate remotely. Silicon chip design once relied on physical proximity of design teams: engineers huddled within whispering distance, collaborating through software and tools running locally on purpose-built systems under enterprise-wide licensing models.
Now, in the wake of a global pandemic that saw millions of workers move to home working, a growing number of today’s chips are being designed in the cloud by globally dispersed workforces, uniting virtually across great distances and on a variety of devices. For example, II-VI, a leading manufacturer specializing in engineered materials, optoelectronic components, and optical systems, onboarded a new system-on-chip (SoC) development team and set up a comprehensive design environment in just two weeks leveraging the Cadence Cloud Environment.
This is no small feat to get right. And as the industry moves towards new technologies such as heterogeneous integration (HI), which relies on efficient co-design, effective collaboration is becoming fundamental.
By shifting chip design workloads to Cadence Cloud, design teams around the world gain instant access to the latest tools for EDA in the cloud, letting them stay productive and collaborate effectively, anytime and from any device.
As artificial intelligence (AI) and big data transform the world around us, they’re transforming the way we think about EDA. We call it EDA 2.0, and it’s defined by AI-driven platforms that optimize horizontally across multiple runs of many tools throughout an entire system design program.
The new Cadence Joint Enterprise Data and AI (JedAI) Platform unifies our AI platforms – the Verisium™ AI-Driven Verification Platform for verification and debug, Cadence Cerebrus™ Intelligent Chip Explorer for implementation, and Optimality™ Intelligent System Explorer for system optimization. It opens the door to a new generation of AI-driven design and verification tools that dramatically improve productivity and power, performance, and area (PPA).
The JedAI Platform is designed to run on on-prem equipment and is also cloud-enabled. By offloading its naturally compute-intensive AI algorithms to advanced high-performance servers in the cloud, companies can free up their on-prem capacity for more traditional EDA workloads. AI and ML workloads in EDA and systems will power the next explosion of cloud compute and the next wave of the journey to the cloud.
There’s never been a better time to embrace EDA in the cloud. Increasingly, we’re seeing companies with well-established on-prem equipment move not only to a hybrid model but to a cloud-first mindset, with all the global collaboration and lightning-fast innovation benefits that brings.
Today, Cadence Cloud has been adopted by more than 275 companies, whether leveraging the fully SaaS environment managed by Cadence or a customer-managed cloud environment. The Cadence OnCloud SaaS and e-commerce platform for systems design and analysis has been adopted by thousands of users, who benefit from the instantaneous purchase, launch, and use of a range of software products in the systems design and analysis space.
Learn more and get started with Cadence Cloud today.