How to calculate data center cooling requirements

Environmental effects can severely impact data center equipment. Excessive heat buildup can damage servers, causing them to shut down automatically. Regularly operating them at higher-than-acceptable temperatures shortens their life span and leads to more frequent replacement.

It's not just high temperatures that pose a danger. High humidity can lead to condensation, corrosion and contaminants, such as dust, gathering on equipment in a data center. Meanwhile, low humidity encourages electrostatic discharge, which can also damage equipment.

A properly calibrated cooling system can prevent these issues and keep your data center at the correct temperature and humidity. It ultimately reduces operational risk from damaged equipment. Here's how your organization can determine what cooling standards the data center needs.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) develops and publishes thermal and humidity guidelines for data centers. The latest edition outlines the temperatures and humidity levels at which you can reliably operate a data center based on the equipment classification.

In the most recent guidelines, ASHRAE recommends operating IT equipment within a temperature range of roughly 64.4 to 80.6 degrees Fahrenheit (18 to 27 degrees Celsius), with humidity controlled tightly enough to avoid both condensation and static buildup.

The proper environment for IT equipment depends on its classification (A1-A4), which reflects the type of equipment and how it is expected to run, listed in descending order of environmental sensitivity. A1 covers enterprise servers and other storage devices that require the strictest environmental control. A4 covers PCs, storage products, workstations and volume servers, and allows the broadest range of temperatures and humidity.

Previous versions of the guidelines focused on reliability and uptime rather than energy costs. As data center operators became more focused on energy-saving techniques and efficiency, ASHRAE expanded the classes to better reflect environmental and energy impact.

To calculate your data center cooling needs, you need several pieces of data: the total heat output of your equipment, the floor area in square feet (ft²), your facility design and the electrical system power rating.

One thing to remember is that some older equipment might have been designed to older ASHRAE cooling standards. So, if your data center has a mix of equipment, you must figure out an acceptable temperature and humidity range for all the equipment in your facility.

Here's a general calculation you can start with to get a baseline British thermal unit (BTU) cooling size:

(Room square footage x 20) + (IT equipment wattage x 3.41) + (Active people in the room x 400)

But this is just the start. If you want a more accurate estimate and plan for your facility's future cooling needs, keep reading.
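As a quick illustration, here's a minimal sketch of that rule of thumb in Python; the room size, IT wattage and headcount below are placeholder values, not figures from any particular facility:

def baseline_cooling_btu(room_sqft, it_watts, people):
    """Rough cooling baseline in BTU/hr using the rule of thumb above."""
    return (room_sqft * 20) + (it_watts * 3.41) + (people * 400)

# Placeholder inputs for illustration only.
estimate = baseline_cooling_btu(room_sqft=1_000, it_watts=20_000, people=4)
print(f"Baseline cooling estimate: {estimate:,.0f} BTU/hr")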

Heat can be expressed using various measures, including BTUs, tons (t) and watts (W). If your equipment documentation mixes units, convert them to a common one for comparison: 1 W equals about 3.41 BTU/hour, 1 kW equals 3,412 BTU/hour, and 1 t of cooling equals 12,000 BTU/hour, or roughly 3.5 kW.

Generally speaking, the power consumed by an IT device is nearly all converted into heat, while the power sent out over data lines is negligible. That means the thermal output of the device in watts is effectively equal to its power consumption.

A few heat sources don't follow the general rule that power consumption equals heat output and must be calculated separately. The main ones are UPS and power distribution systems, which dissipate only a fraction of the power passing through them, along with lighting and the people working in the room.

Now that you've gathered all the data, you simply add them up to determine your total cooling requirements for the data center.

And, if you're using BTUs as your base unit, you must divide your total by 3,412 to determine the total cooling required in kilowatts (kW).
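These conversions are easy to wrap in a few helper functions. Here's a minimal sketch (the function names are my own):

BTU_PER_HR_PER_WATT = 3.412   # 1 W of power dissipation is about 3.412 BTU/hr of heat
BTU_PER_HR_PER_TON = 12_000   # 1 ton of cooling = 12,000 BTU/hr

def watts_to_btu_hr(watts):
    return watts * BTU_PER_HR_PER_WATT

def btu_hr_to_kw(btu_hr):
    # As noted above, divide the BTU/hr total by 3,412 to get kilowatts.
    return btu_hr / 3_412

def btu_hr_to_tons(btu_hr):
    return btu_hr / BTU_PER_HR_PER_TON

# Example: a 5,000 W IT load expressed in the other common units.
load_btu = watts_to_btu_hr(5_000)
print(f"{load_btu:,.0f} BTU/hr = {btu_hr_to_kw(load_btu):.1f} kW = {btu_hr_to_tons(load_btu):.2f} tons")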

Beyond the special environmental factors mentioned previously, a few other factors can influence a data center's heat output calculations. Ignoring them could lead to an incorrectly sized cooling system and increase your overall cooling investment.

HVAC systems are often designed to control humidity and remove heat. Ideally, they would keep a constant humidity level, yet the air-cooling function often creates substantial condensation and a loss of humidity. So, many data centers use supplemental humidification equipment to make up for this loss, adding more heat.

Large data centers with significant air mixing -- the mixing of hot and cold air from areas inside the facility -- generally need supplemental humidification, and the cooling system must compensate for the movement of hotter air through the facility. As a result, these data centers may need to oversize their cooling systems by up to 30%.

Condensation isn't always an issue in smaller data centers or wiring closets, so the cooling system might be able to handle humidification on its own through the regular return ducting already in place. The return ducts eliminate the risk of condensation by design so that the HVAC system can operate at 100% cooling capacity.

A data center's cooling needs can change over time, so consider oversizing your cooling system for future growth. Oversizing also provides redundancy if part of the cooling system fails or must be taken offline for maintenance. Generally speaking, HVAC consultants recommend adding as much redundancy as the budget allows, or at least one more unit than your calculations say you need.

HVAC consultants typically multiply the heat output of all IT equipment by 1.5 to enable future expansion.

Here are a couple of sample cooling calculations using various standard metrics.

Assume the following sample information for a typical data center:

150 racks with 8 servers each (150 x 8)

UPS with battery: 1,755 BTU/hour of heat output (1 kW = 3,412 BTU/hr, so 1,755 / 3,412 is roughly 0.5 kW)

2,500 ft² of windows (2,500 x 60 BTU/hour)

A maximum of 50 employees in the data center at any given time (50 x 100)

Because most HVAC systems are sized in tons, we can use the standard conversion equations (watts x 3.41 = BTU/hour) and (BTU/hour / 12,000 = tons of cooling):

Breaking that total down among components, systems and people shows that the UPS system generates so little heat, even at maximum use, that it accounts for less than 1% of the total output. The IT equipment generates most of it.
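To show how the mixed units above come together, here's a rough sketch of the arithmetic in Python. The sample doesn't specify per-server wattage, so the 500 W figure below is a made-up placeholder, and the people term is read as 100 W per person:

WATTS_TO_BTU_HR = 3.41
BTU_HR_PER_TON = 12_000

servers = 150 * 8                       # 150 racks x 8 servers each
server_watts = 500                      # hypothetical per-server draw, not from the sample
it_btu_hr = servers * server_watts * WATTS_TO_BTU_HR

ups_btu_hr = 1_755                      # UPS heat output given above (~0.5 kW)
windows_btu_hr = 2_500 * 60             # 2,500 ft² of windows x 60 BTU/hour
people_btu_hr = 50 * 100 * WATTS_TO_BTU_HR   # 50 people x 100 W each, converted to BTU/hr

total_btu_hr = it_btu_hr + ups_btu_hr + windows_btu_hr + people_btu_hr
print(f"Total: {total_btu_hr:,.0f} BTU/hr = {total_btu_hr / BTU_HR_PER_TON:.1f} tons of cooling")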

In this example, we look at a small server room, data closet or mini edge data center of the kind found in a generic office tower in a large city. These calculations determine the cooling requirement in watts; for estimation purposes, a power system rating in kilovolt-amperes (kVA) can be treated as roughly equal to the total power output of the device in watts.

IT equipment: same as the total IT load power in watts

UPS with battery (5 units at a 0.9 kVA power system rating): (… x power system rating) + (0.06 x total IT load power)

Power distribution system rated at 8.6 kVA: (… x power system rating) + (0.02 x total IT load power)

Lighting for a 10 ft x 15 ft room with a 10 ft ceiling: 2 x floor area (ft²), or 21.53 x floor area (m²)

People (a maximum of 150 people in the facility): 100 x number of people

Converting these into tons of cooling using the standard equations (watts x 3.41 = BTU/hour and BTU/hour / 12,000 = tons of cooling), we would need a total of roughly 31,000 W, or about 8.8 t of cooling.

To plan for the future cooling needs of this data closet, we multiply the total IT heat output by 1.5: 12,036 W x 1.5 = 18,054 W. Substituting that larger IT figure for the current one and adding back the other heat sources gives a future total cooling requirement of 37,017 W, or about 10.5 t of cooling. That's roughly a 20% increase.
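As a rough cross-check, here's a sketch of this estimate in Python. It uses only the terms visible in the table above, reads the people term as 100 W per person and omits the rating-based portion of the UPS and power distribution losses, so its totals come out somewhat lower than the figures worked out in the text:

WATTS_TO_BTU_HR = 3.41
BTU_HR_PER_TON = 12_000
FUTURE_GROWTH_FACTOR = 1.5

it_load_w = 12_036                     # total IT heat output from the example
ups_w = 0.06 * it_load_w               # 0.06 x total IT load power (rating-based term omitted)
pdu_w = 0.02 * it_load_w               # 0.02 x total IT load power (rating-based term omitted)
lighting_w = 2 * (10 * 15)             # 2 x floor area in ft² for a 10 ft x 15 ft room
people_w = 100 * 150                   # assumed 100 W per person x 150 people

overhead_w = ups_w + pdu_w + lighting_w + people_w
current_w = it_load_w + overhead_w
future_w = (it_load_w * FUTURE_GROWTH_FACTOR) + overhead_w

for label, watts in (("Current", current_w), ("Future", future_w)):
    tons = watts * WATTS_TO_BTU_HR / BTU_HR_PER_TON
    print(f"{label}: {watts:,.0f} W, about {tons:.1f} tons of cooling")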

As the modern data center evolves from the large, centralized facility of a decade ago to the small, nimble edge computing sites many enterprises are building today, the fundamentals of calculating cooling requirements remain the same. Concentrating that much technology in a single location requires planning a cooling strategy that works for today and into the near future.

Cooling requirements are affected by many factors, including rack density, the technology deployed in the facility and the number of staff working there. A better understanding of what affects cooling makes any data center professional better equipped to design the right cooling plan for the organization's needs.
