Veena Parthan

Community Member
Tags: CFD, data center, data center cooling, digital twin, Vertiv, Cadence Reality DC

How to Address the Changing Cooling Needs of Data Centers

16 Oct 2024 • 6 minute read

This blog is an excerpt from the CadenceLIVE Silicon Valley presentation “Using Digital Twins to Optimize the Implementation of AI Compute Clusters” by Steve Blackwell, VP of Engineering at Vertiv Corporation. Steve is a seasoned technology executive with extensive experience in innovation and product development, with a particular emphasis on data center management technologies. His expertise has earned him 30 U.S. patents across areas such as data center architecture, video processing, and data communications. Watch the on-demand webinar to learn more about how to optimize data center design for AI compute clusters.

In today's fast-paced technology environment, data centers are the foundational infrastructure behind the rapid growth of industries such as artificial intelligence (AI) and cloud computing. As these fields continue to develop, the demand for more efficient data centers capable of handling higher rack densities is greater than ever. The remarkable growth of AI is propelled by substantial investments in hardware, underscoring its central role in this expansion. These investments are sparking innovations in data center architectures and capacities, introducing both new challenges and opportunities.

 

Picture Credits: Bloomberg

This blog explores the innovative cooling techniques and architectural improvements necessary for future data center operations. It incorporates perspectives from Steve Blackwell, VP of Engineering at Vertiv Corporation, to provide a comprehensive view of the advancements needed to meet the evolving demands.

About Vertiv

Vertiv, previously known as Liebert Corporation, traces its roots to the 1960s. After evolving under Emerson Electric, Vertiv was spun off as a standalone company. Specializing in critical data center infrastructure, Vertiv provides top-tier power and cooling solutions that are particularly relevant to today's rapidly advancing AI sector, and its partnerships with industry leaders such as Cadence and NVIDIA underscore its significant impact on the AI and data center markets.

Current Data Center Scenario

Traditionally, data centers were built to accommodate racks that are low-density by today's standards, generally in the single-digit kilowatt range per rack. That landscape is shifting. Over 75% of installed racks still operate below 20 kilowatts (kW) per rack, but AI, machine learning, and cloud computing are pushing these numbers higher. The infrastructure in place today, and certainly that of the past, isn't equipped for the industry's present or future demands.

Picture Credits: Uptime Institute

A survey conducted by Omdia projects a potential increase in rack densities of 3 to 5 times current figures over the next five years. For instance, a small data center that three years ago considered itself at the forefront of the industry with a data hall designed for up to 8kW per rack is now planning its next data hall for up to 150kW per rack. This illustrates how significantly density planning has shifted within a relatively short timeframe. While not all data centers will reach these levels, the existence of systems designed for such capacities indicates a notable industry trend.

Expected Increase in Rack Density Growth (Picture Credits: Omdia)

Evolution of Data Center Cooling

Historically, managing data center cooling was relatively straightforward. The approach involved surrounding the servers with large air conditioning units, pushing cold air up from beneath a raised floor, and circulating the warm air back, often through the ceiling plenum. This airflow cycle was sufficient for what we now call low-density systems.

However, as power demand increased, more sophisticated methods like hot-aisle or cold-aisle containment and pressure-controlled cooling fans were adopted to improve cooling efficiency. Yet these methods reach a limit, especially with the advent of high-performance computing and AI clusters. The fundamental challenge is that air is a poor coolant because of its weak heat transfer properties.
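The gap between air and liquid cooling is easy to quantify with the sensible-heat relation Q = ṁ·c_p·ΔT. Below is a minimal Python sketch comparing the flow rates each fluid needs to carry away one rack's heat; the 30kW load and 12K temperature rise are illustrative assumptions, not figures from the presentation.

```python
# Back-of-the-envelope comparison of air vs. water as a coolant,
# using the sensible-heat relation Q = m_dot * cp * dT.
# The rack power and temperature rise below are assumed values.

Q = 30_000.0   # heat load per rack, W (assumed 30 kW AI rack)
dT = 12.0      # coolant temperature rise, K (assumed)

# Approximate fluid properties near room temperature
cp_air, rho_air = 1005.0, 1.2        # J/(kg*K), kg/m^3
cp_water, rho_water = 4186.0, 998.0  # J/(kg*K), kg/m^3

m_air = Q / (cp_air * dT)      # required air mass flow, kg/s
m_water = Q / (cp_water * dT)  # required water mass flow, kg/s

v_air_cfm = m_air / rho_air * 2118.88          # m^3/s -> CFM
v_water_lpm = m_water / rho_water * 1000 * 60  # m^3/s -> L/min

print(f"Air:   {m_air:.2f} kg/s  (~{v_air_cfm:,.0f} CFM)")
print(f"Water: {m_water:.2f} kg/s (~{v_water_lpm:.1f} L/min)")
# Water carries the same 30 kW with roughly a quarter of the mass
# flow and a tiny fraction of the volume -- the core reason air
# cooling runs out of headroom at high rack densities.
```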

How to Address the Changing Cooling Demands of Data Centers

It is important to explore different liquid cooling methods to meet the evolving cooling requirements of data centers. One approach is installing heat exchangers on racks' rear doors to help dissipate heat. A more advanced option is direct-to-chip cooling or immersion cooling. Currently, most liquid cooling solutions use a single-phase process, in which the coolant remains liquid as it absorbs heat. However, some newer technologies use a two-phase process, converting the liquid to gas directly at the chip for more effective heat removal.
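The appeal of a two-phase process lies in latent heat: evaporating a kilogram of coolant absorbs far more energy than warming it by a few degrees. Here is a minimal sketch of that comparison, using water properties purely for illustration; real two-phase direct-to-chip systems use dielectric fluids with lower latent heat, so the exact ratio differs.

```python
# Heat absorbed per kilogram of coolant:
# single-phase (sensible): q = cp * dT, coolant stays liquid
# two-phase   (latent):    q = h_fg,   coolant evaporates at the chip
# Water properties are used purely for illustration.

cp = 4186.0    # J/(kg*K), liquid water
dT = 10.0      # K, assumed single-phase temperature rise
h_fg = 2.26e6  # J/kg, latent heat of vaporization of water

q_single = cp * dT  # J per kg, single-phase
q_two = h_fg        # J per kg, two-phase

print(f"Single-phase: {q_single/1e3:.0f} kJ/kg")
print(f"Two-phase:    {q_two/1e3:.0f} kJ/kg ({q_two/q_single:.0f}x more per kg)")
```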

To choose the best cooling solution, it is important to fully understand the current capabilities and potential limitations of the data center. For example, the Vertiv team analyzed a hypothetical data center with 63 racks of low to medium density, ranging from 3 to 20kW per rack. They then considered adding four AI racks, each with a modest heat output of 30kW. The initial simulation showed that the existing perimeter cooling setup was insufficient: the AI racks began to overheat, impacting the entire system due to the interconnected nature of perimeter cooling.

Data Center with Newly Added AI Racks
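As a quick sanity check on this scenario, the facility's total IT load can be bracketed from the figures above. The sketch below uses only the stated 3-20kW range for the 63 existing racks and the four 30kW AI racks; the actual per-rack distribution is not given in the example.

```python
# Bracketing the total IT heat load for the scenario described above.
# Only the 3-20 kW density range is given for the existing racks, so
# we bound the total rather than guess a distribution.

n_existing, lo_kw, hi_kw = 63, 3.0, 20.0  # existing racks, from the example
ai_load_kw = 4 * 30.0                     # four 30 kW AI racks

low = n_existing * lo_kw + ai_load_kw
high = n_existing * hi_kw + ai_load_kw
print(f"Total IT load: {low:.0f} kW to {high:.0f} kW")
# Aggregate capacity alone does not settle the question: the simulation
# showed perimeter cooling failing because of local hot spots at the
# AI racks, not because the total load exceeded total capacity.
```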

In response, the team experimented with rear-door heat exchangers, which dissipated about half of the excess heat from the AI racks. The situation improved, with critical hot zones eliminated, although some areas still exceeded the target temperatures. Finally, the team explored direct-to-chip liquid cooling, which can remove up to 80% of a server's heat. This approach significantly reduced the thermal load, demonstrating a more effective method for managing high-density systems and maintaining optimal operating conditions.

Four Cases Simulated Using Cadence Reality DC: (i) Base Case: Existing Data Center with Perimeter Cooling; (ii) AI Rack Solution 1: Perimeter Cooling/Containment Only; (iii) AI Rack Solution 2: Active Rear-Door Heat Exchangers; (iv) AI Rack Solution 3: Direct-to-Chip Liquid Cooling
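The capture fractions quoted above make it easy to estimate how much heat each simulated option leaves for the room's air cooling. The sketch below assumes the roughly 50% rear-door and 80% direct-to-chip figures apply uniformly to a 30kW AI rack.

```python
# Residual air-side heat per 30 kW AI rack under the three options
# simulated above. Capture fractions are the figures quoted in the
# text (~50% for rear-door exchangers, up to 80% for direct-to-chip).

rack_kw = 30.0
solutions = {
    "Containment only (all air)": 0.00,
    "Rear-door heat exchanger":   0.50,
    "Direct-to-chip liquid":      0.80,
}

for name, captured in solutions.items():
    to_air = rack_kw * (1 - captured)
    print(f"{name:28s} -> {to_air:4.1f} kW left for air cooling")
# At 30 kW/rack the 20% residual (6 kW) is manageable; rerun with
# rack_kw = 120.0 and the same 20% leaves 24 kW of air load per rack.
```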

It is important to remember that liquid cooling alone is not a panacea for high-density cooling. Most liquid cooling solutions remove only a proportion of the heat, leaving the remainder for air cooling, and that remainder can be substantial: the 20% left over from a 120kW rack is still 24kW of air-cooling load. Some liquid cooling solutions remove heat from the chips but expel it into the room from a heat exchanger in a neighboring rack. Others require careful management of coolant distribution to the hardware and careful consideration of both normal operation and failure scenarios. And once the heat has been transported outside the data hall, the challenge of rejecting it to the local environment, or, ideally, capturing it for reuse, remains significant.

This highlights the need for a comprehensive cooling approach and, in all cases, simulation to understand the impact of liquid cooling on existing facilities and new builds alike. In the example above, even a modest amount of direct-to-chip cooling proved adequate for the current setup. However, in scenarios with denser racks, potentially reaching 50kW or 100kW each, a combination of cooling methods, including direct-to-chip, active rear-door, and air cooling, becomes essential for effectively managing the various systems involved.

When considering upgrades or the implementation of new technologies in existing data centers, it is important to thoroughly analyze the current setup and the potential impacts of these additions.

As systems become more complex, thorough planning and simulation are essential. In the past, data centers primarily focused on the white space, often neglecting auxiliary spaces like mechanical rooms. However, with the rise of high-density systems, the power and cooling infrastructure may take up as much space as the data center itself, requiring careful planning and consideration. Valuable insights can be gained by partnering with experts and using simulation tools such as those offered by Cadence. Vertiv has successfully utilized the Cadence Reality DC portfolio in their designs and customer projects for years, leading to successful outcomes.


Try Cadence Reality Digital Twin Platform to elevate your data center project to the next level!


