Author: Paul McLellan
Tags: CFD, Computational Fluid Dynamics, 6SigmaDCX, hyperscaler, thermal, datacenter

Enterprise Datacenters Only Use 56% of Their Capacity

16 Sep 2022 • 7 minute read

I wrote about Future Facilities on the day we acquired them. See my post Cadence Acquires Future Facilities, a Pioneer in Datacenter Digital Twins.

I was recently on a Zoom call with Hassan Moezzi, until recently CEO of Future Facilities, now presumably Vice-President of something at Cadence. I asked him about how Future Facilities presents itself to potential customers. He told me what seemed an unbelievable statistic. According to the 451 Global Digital Infrastructure Alliance:

Enterprise datacenters only use 56% of their capacity

That just struck me as something with enormous financial impact, $100 bills lying on the ground waiting to be picked up. An average datacenter is, say, 100,000 square feet at $1,000 per square foot, or $100 million to build. So for every three datacenters built in the world, with better utilization only two are really required, saving (gulp) $100 million.
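
To make the arithmetic explicit, here is the same back-of-envelope calculation in a few lines of Python. The square footage and cost per square foot are just the round numbers above, not figures for any particular facility:

```python
# Back-of-envelope arithmetic using the round numbers in the text.
SQUARE_FEET = 100_000      # size of an "average" datacenter
COST_PER_SQ_FT = 1_000     # dollars per square foot to build

build_cost = SQUARE_FEET * COST_PER_SQ_FT   # $100,000,000 per datacenter
print(f"Build cost per datacenter: ${build_cost:,}")

# If better utilization means only two datacenters are needed for every
# three built today, the saving is the capital cost of the third building.
print(f"Saving for every three datacenters consolidated to two: ${build_cost:,}")
```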

There are three aspects to putting equipment into a datacenter: physical, electrical power, and thermal/cooling.

  • Physical is generally not a big issue since everyone knows how big every box is, and whether there is empty space. Of course, there is a "Tetris" problem that the blocks don't always fit together perfectly, but in a big datacenter that is fairly minimal.
  • Electrical power is generally not a big issue either, since everyone knows how much power provisioning is required for each rack/box, and it is fairly easy to measure. That works at both the small scale ("how much power is this rack using?") and the large ("how much power is the whole datacenter using?").
  • Thermal is the big challenge. The reason datacenters don't get close to their capacity is the perceived risk that adding more equipment might cause thermal/cooling issues, resulting in the failure of individual units, or in dramatic failures where the failure of, say, a cooler causes incremental failures that cascade.

On the first two points, it is straightforward to determine whether additional equipment can be added to a datacenter without running out of space or power. But the third one is a problem: at the thermal level, predicting in advance whether there will be an issue is much harder. The electrical power provisioned for a box translates into some amount of heat that has to be removed, but typically the provisioned power is higher than even the maximum power the box ever draws, let alone anything close to its average. IT managers are risk-averse, so the "solution" of putting the equipment into the datacenter and discovering whether or not there is a thermal problem is not really an option. That's the equivalent, in the EDA world, of taping out the chip to avoid having to do more verification. Airflow challenges are made worse by the fact that air is invisible, and heat even more so.
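
Here is a minimal sketch of that three-way check, with entirely made-up rack and datacenter numbers: space and power reduce to simple bookkeeping, while the thermal question deliberately falls through to "needs simulation", which is the point of the paragraph above.

```python
from dataclasses import dataclass

# Hypothetical data structures for illustration only; real tools model far more.
@dataclass
class Rack:
    floor_tiles: int       # physical footprint in floor tiles
    provisioned_kw: float  # nameplate/provisioned electrical power

@dataclass
class Datacenter:
    free_tiles: int
    spare_power_kw: float

def can_add(dc: Datacenter, rack: Rack) -> str:
    # Physical: just compare footprints.
    if rack.floor_tiles > dc.free_tiles:
        return "No: not enough floor space"
    # Electrical: just compare provisioned power against spare capacity.
    if rack.provisioned_kw > dc.spare_power_kw:
        return "No: not enough provisioned power"
    # Thermal: cannot be answered by adding numbers. It depends on where the
    # rack sits, how it blocks and redirects airflow, and how the cooling
    # plant responds. This is the gap a CFD-based digital twin fills.
    return "Maybe: space and power OK, thermal requires airflow simulation"

print(can_add(Datacenter(free_tiles=40, spare_power_kw=120),
              Rack(floor_tiles=1, provisioned_kw=15)))
```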

Let's make this more concrete with an example. Assume you want to add a new rack full of Dell compute servers with a top-of-rack router. The rack will affect airflow in the entire datacenter by blocking some of it, and it will also create heat. The big question for the IT staff is whether they can guarantee that the Dell server boards and the router will get enough air at a low enough temperature to meet their specifications, and that the extra heat won't adversely affect any of the other equipment already in the datacenter.
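
To see what a spreadsheet-level answer to that question looks like, here is the standard bulk estimate of the air temperature rise through a rack (the 15 kW and 1.5 m³/s figures are hypothetical, not from the post). It gives a plausible average number, but says nothing about recirculation, hot spots, or what happens when a cooling unit fails:

```python
# Spreadsheet-level estimate of the temperature rise across a rack: the kind
# of calculation handwaving/Excel arguments rely on. Numbers are hypothetical.
AIR_DENSITY = 1.2         # kg/m^3, air at roughly room conditions
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def bulk_delta_t(power_watts: float, airflow_m3_per_s: float) -> float:
    """Bulk air temperature rise through the rack, assuming all the heat
    goes into the air that actually passes through it."""
    mass_flow = AIR_DENSITY * airflow_m3_per_s            # kg/s
    return power_watts / (mass_flow * AIR_SPECIFIC_HEAT)  # kelvin

# Hypothetical rack: 15 kW of servers, 1.5 m^3/s of airflow through the rack.
print(f"Bulk exhaust rise: {bulk_delta_t(15_000, 1.5):.1f} K")  # about 8 K

# What this cannot tell you: whether hot exhaust recirculates to a neighbor's
# inlet, whether this rack starves another of cold air, or what happens when
# a cooler fails. Those are airflow questions, which is where CFD comes in.
```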

In traditional Cadence EDA, it is often the case that you have a choice between excessive pessimism and accuracy. It is the same here: either you use handwaving arguments (or, more likely, Excel spreadsheets) to convince yourself you have plenty of cooling/thermal margin, or you do the modeling accurately.

Future Facilities

It is at this point that Future Facilities comes in with its datacenter digital twins. 6SigmaDCX, its datacenter product, can analyze the implications of building out a datacenter, or of making changes to a datacenter over time. There are two parts to the technology. First, there are models of pretty much anything that you might want to put into a datacenter: individual server boards, network switches, chillers, and so forth. Then there is analysis of the thermal behavior within the datacenter, which mostly comes from air movement but can also come from liquid-cooled equipment. Of course, a rack of equipment can affect both aspects: creating heat inside the rack, and also blocking the flow of air through the datacenter, or changing it through its intakes and exhausts. Using 6SigmaDCX allows you to give an accurate answer to the Dell server rack question that I posed above. The Excel spreadsheets may not be convincing enough for the datacenter manager to allow the rack into the building, hence the 56% number that we started with. But 6SigmaDCX will often show that there is plenty of thermal and airflow headroom, and so push that 56% number up. I'm not naive enough to think that each percentage point of improved utilization in a $100M datacenter is worth a full 1% of its cost, but I do know it is not nothing, or even close.

[Image: datacenter lifecycle]

Hassan showed me the datacenter life cycle: the datacenter is designed, changes are made during the design process, and then, once the datacenter is built and in service, further changes keep being made, as in the above "lollipop" diagram.

[Image: datacenter theory and practice]

The result of all this change is shown in the above image. On the left is the theoretical design of the datacenter: four rows of ten racks. On the right is how the datacenter ends up after being populated with real equipment and adapting to various changes over its lifetime. You can also see the level of modeling that 6SigmaDCX handles: the raised floor, the two big chillers, and various different rack-scale electronic systems.

Using a digital twin, the lifecycle of the datacenter can be modeled, with infrastructure changing slowly, cabinet-level changes being rare, and equipment changes, such as adding a new board to a rack, being very common. This is indicated by the speed at which the bubbles move in the animation: blue bubbles, using the power and cooling libraries (things like chillers), move slowly; green bubbles, using the cabinet libraries, move a bit faster; and orange bubbles, using the actual equipment libraries, move fastest of all.

So using 6SigmaDCX allows you to get three datacenters for the price of two: three datacenters at 56% capacity contain the same amount of equipment as two datacenters at 84% capacity.
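
The "three for the price of two" claim is just the 56% figure rearranged; a quick check:

```python
# Three datacenters at 56% utilization hold the same equipment as two at 84%.
utilization = 0.56
equipment = 3 * utilization                  # 1.68 datacenters' worth of gear
print(f"{equipment:.2f} datacenters' worth of equipment")
print(f"Packed into two buildings instead: {equipment / 2:.0%} each")  # 84%
```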

There Is Not Enough Power

I'll repeat a paragraph from Bloomberg that I quoted in my original post about Future Facilities:

When Google wanted to build a new $1.1 billion data center in the Luxembourg countryside, the government championed the investment and helped the company to acquire the land. Authorities in the Netherlands granted Meta Platforms Inc. permission for what promised to be an even bigger one, part of the country’s ambition to become Europe’s “digital hub.” The political metrics are now changing for the giant facilities. The two projects were paused after grassroots resistance from locals and environmental activists. But when the focus is on ensuring the lights stay on this winter, data computing and storage that can guzzle a small town’s worth of power are no longer as in vogue for some European governments.

So requiring three datacenters where only two would suffice is not just a waste of a lot of money. It may be the difference between being able and unable to get the compute (and network, storage, and whatever else) that you require. It's an exaggeration to say that you can't build any more datacenters, but in many parts of the world there is simply not enough electrical power. Even California, the richest state in the richest country in the world, has warnings about possible rolling blackouts as I type this blog post, since the temperature is hot, though not hotter than it usually gets for a few days each summer. I don't think California has the electrical capacity for many more datacenters (and probably not for all those electric vehicles either, but that's a topic for another day).

As if to rub the point home, my first Zoom call with Hassan had to be rescheduled since there was a blackout that hit his California house a couple of minutes before our call was due to start. We rescheduled it for the following week when he was in London...which seemed to have power!

Learn More

See the Cadence Future Facilities website.

Economist Joke

Talking of $100 bills lying on the ground:

Two economists are walking down the street and see a $100 bill on the ground. One economist says to the other, an efficient market hypothesis (EMH) proponent: "It can't be real, or someone would have picked it up already."

If you don't know what EMH is (you should), this is not the place to explain it. But if you do, you'll get the joke.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.


