The increase in data from new applications, such as Machine Learning (ML) and Artificial Intelligence (AI), requires more powerful systems. These systems present a complex thermal challenge for the data center because operators must account for existing density requirements while also anticipating the impact of these new, high-performance systems.
We brought together a panel of thermal and operational experts to discuss how these challenges are impacting the data center industry and captured their discussion in an eBook. Continue reading this blog for some of their high-level takeaways, or download the full eBook here.
It’s clear that high-performance applications demand new levels of support from digital infrastructure. Although this is well understood, actually deploying these applications in a data center can be far more difficult, particularly in existing legacy facilities. Operators are already tasked with maximizing data center performance while accommodating existing density requirements. Now they must also anticipate the “where and when” of changing IT and what it will mean for future workloads.
To support this kind of deployment, data centers must be capable of hosting varying thermal loads, including high-density equipment. Many operators are considering new cooling technologies to make this happen. But while approaches such as liquid-to-air, liquid-to-liquid, and liquid-to-refrigerant cooling offer efficiency gains, they also add another layer of complexity to data center design.
Many of the panel participants noted that computational fluid dynamics (CFD) simulation has helped them overcome the complexities that changing technology presents, both in data center design and in operations. This is significant because, while nearly everyone in the data center industry readily connects physics-based simulation with design, many do not extend that same line of thinking to operations.
Data center operators (legacy operators in particular) must justify costs and balance space and resource utilization to remain relevant to their business counterparts. Understanding how each piece of IT will affect the performance of the entire data center helps operators run facilities effectively and within the financial bounds the business sets.
One of the most important takeaways from the panel is that a data center’s design changes continuously over its lifespan, because operators are effectively (re)designing it with each new piece of IT they introduce into the layout. The key to maximizing performance is to create a data center flexible enough to handle this continuous change by implementing a process that efficiently evaluates each change. The panel discusses this concept, among others, in more detail in the full eBook here.