
Understanding how data center air conditioning works

As data volumes grow and business processes become increasingly computerized, keeping that information secure through uninterrupted server operation becomes ever more critical.
A failure in this area can bring a company’s entire business to a standstill and result in serious losses. One of the most important prerequisites for stable server operation is maintaining the optimum air temperature throughout the server room, which is achieved with dedicated precision air-conditioning systems.

Operating a data center is energy-intensive, and the cooling system often consumes as much energy as (or more than) the computers it supports.

In this article, we’ll look at some of the most commonly used data center cooling technologies, as well as new approaches to CFD simulation.

Cold aisle / hot aisle design

This is a rack layout in which rows of racks alternate to form “cold aisles” and “hot aisles”.

Cold air is supplied in front of the racks (usually through grilles) for the servers to draw in, while the hot aisles behind the servers collect and evacuate the exhaust heat.
The return paths are usually connected to a false ceiling, which carries the hot air from the “hot aisles” back to be cooled; the cooled air is then discharged into the “cold aisles” via a raised floor or ducts (or blown freely into the room in some designs).

Empty rack slots should be fitted with blanking panels to prevent overheating and reduce the amount of cold air wasted.
The gaps left by missing servers allow parasitic air transfer, driven by the pressure differences between the hot and cold zones.
This parasitic air movement is wasted energy.
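As a rough sanity check on aisle airflow, the sensible heat balance Q = ρ · V̇ · cp · ΔT links a rack’s heat load to the cold-air volume it must receive. The sketch below is a minimal illustration of that balance; the 10 kW rack load and 12 K air temperature rise are assumptions chosen for the example, not design figures.

```python
# Minimal sketch: cold-aisle airflow needed to absorb a rack's heat load,
# from the sensible heat balance Q = rho * V_dot * cp * dT.
# The rack load and temperature rise below are illustrative assumptions.

RHO_AIR = 1.2      # kg/m3, air density around 20 C
CP_AIR = 1006.0    # J/(kg.K), specific heat of air

def required_airflow_m3_per_h(rack_load_w: float, delta_t_k: float) -> float:
    """Supply-air volume flow needed to carry rack_load_w with an air
    temperature rise of delta_t_k across the servers."""
    v_dot_m3_s = rack_load_w / (RHO_AIR * CP_AIR * delta_t_k)
    return v_dot_m3_s * 3600.0

# Example: a hypothetical 10 kW rack with a 12 K rise between cold and hot aisle
print(f"{required_airflow_m3_per_h(10_000, 12):.0f} m3/h")  # ~2485 m3/h
```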

CFD simulation of hot and cold aisle temperature distribution - Data Center

Chilled water system

This technology is most commonly used in medium to large-scale data centers.

Air in the data hall is supplied by air handling units known as computer room air handlers (CRAH), and chilled water (produced by a cooling plant external to the room) is used to cool that air.
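On the water side, the same heat balance gives the chilled-water flow a CRAH coil needs for a given cooling duty. The sketch below assumes a 7 °C supply / 12 °C return split purely for illustration; actual setpoints vary from one facility to another (and are higher in free-chilling designs, as discussed later).

```python
# Minimal sketch: chilled-water flow required by a CRAH coil for a given duty,
# from Q = m_dot * cp_water * (T_return - T_supply).
# The 7/12 C supply/return split is an illustrative assumption.

CP_WATER = 4186.0  # J/(kg.K), specific heat of water

def chilled_water_flow_l_per_s(cooling_duty_w: float,
                               t_supply_c: float = 7.0,
                               t_return_c: float = 12.0) -> float:
    """Chilled-water flow needed to absorb cooling_duty_w across the
    supply/return temperature difference (kg/s, i.e. roughly L/s)."""
    return cooling_duty_w / (CP_WATER * (t_return_c - t_supply_c))

# Example: a hypothetical 100 kW CRAH unit on a 7/12 C loop
print(f"{chilled_water_flow_l_per_s(100_000):.1f} L/s")  # ~4.8 L/s
```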

What's the difference between CRAC and CRAH units?

CRAC units

CRAC units work like home air-conditioning units.
They have a direct expansion system and compressors integrated directly into the unit.
They provide cooling by blowing air over a cooling exchanger filled with refrigerant.
The refrigerant is kept cold by a compressor inside the unit.
Excess heat is then rejected outdoors via a water-glycol mixture or directly to the air.
While most CRAC units deliver a constant air volume and modulate only by cycling on and off, newer models allow the airflow to be varied.

CRAC units can be positioned in a variety of ways, but are generally installed facing the hot aisles of a data center.
From there, they discharge cooled air through perforations in the raised floor (grilles or perforated tiles), cooling the computer servers.

CRAH units

CRAH units function like chilled-water air-handling units installed in most office buildings.
They provide cooling by blowing air over a cooling exchanger filled with chilled water.
Chilled water is usually supplied by “water chillers” – otherwise known as chilled water plants.
CRAH units can regulate fan speed to maintain a set static pressure, ensuring that humidity levels and temperature remain stable.

Chilled water can be produced by direct-expansion chillers or by much more energy-efficient adiabatic dry coolers.
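To make the fan-modulation point concrete, here is one plausible control scheme: a proportional-integral (PI) loop that trims fan speed around a nominal value to hold an underfloor static pressure setpoint. This is a minimal sketch under assumed gains and setpoints; real CRAH controllers differ in detail from vendor to vendor.

```python
# Minimal sketch: a PI loop that modulates CRAH fan speed to hold a static
# pressure setpoint in the raised-floor plenum. The setpoint, gains, nominal
# speed and sample time are illustrative assumptions, not vendor values.

class CrahFanController:
    def __init__(self, setpoint_pa=20.0, nominal_pct=50.0, kp=2.0, ki=0.1):
        self.setpoint_pa = setpoint_pa    # target underfloor static pressure
        self.nominal_pct = nominal_pct    # fan speed when pressure is on setpoint
        self.kp, self.ki = kp, ki
        self._integral = 0.0

    def update(self, measured_pa: float, dt_s: float) -> float:
        """Return the fan speed command in percent, clamped to 0-100."""
        error = self.setpoint_pa - measured_pa   # positive if pressure too low
        self._integral += error * dt_s
        command = self.nominal_pct + self.kp * error + self.ki * self._integral
        return max(0.0, min(100.0, command))

# Example: as the tiles pass more air, plenum pressure drops and the
# controller speeds the fan up to restore it.
ctrl = CrahFanController()
for pressure in (20.0, 18.0, 15.0):
    print(f"{pressure:.0f} Pa -> {ctrl.update(pressure, dt_s=10.0):.0f} % fan speed")
```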

What is the optimum temperature for a data center?

Server rooms and data centers contain a mixture of hot and cold air – server fans expel hot air during operation, while air conditioning and other cooling systems bring in cool air to counteract any hot exhaust air.
Maintaining the right balance between hot and cold air has always been paramount to keeping data centers up and running.
If a data center becomes too hot, the equipment runs a higher risk of failure.
This failure often results in downtime, data loss and loss of revenue.

In the 2000s, the recommended temperature range for a data center was 20 to 24°C.
This was the range recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) as optimal for maximum equipment availability and service life.
This range allowed good equipment utilization while providing a sufficient buffer in the event of an air-conditioning failure.

Since 2005, new standards and better equipment have become available, as have improved tolerances for higher temperature ranges.
ASHRAE now recommends an acceptable operating temperature range of 18 to 27°C.
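As a simple illustration, server inlet temperatures logged by monitoring can be checked against these envelopes. The sketch below classifies a reading against the historical 20–24°C range and the current 18–27°C recommendation, using only the thresholds cited above.

```python
# Minimal sketch: classify a server inlet temperature against the historical
# (20-24 C) and current ASHRAE-recommended (18-27 C) envelopes cited above.

def classify_inlet_temp(t_c: float) -> str:
    if 20.0 <= t_c <= 24.0:
        return "within the historical 20-24 C range"
    if 18.0 <= t_c <= 27.0:
        return "within the current 18-27 C recommended range"
    return "outside the recommended envelope - investigate"

for reading in (22.5, 26.0, 29.0):
    print(f"{reading} C: {classify_inlet_temp(reading)}")
```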

Higher server inlet temperatures also make free cooling or free chilling (systems that use outside air either to supply the room directly or to cool the water loop instead of running the chiller) much more attractive, especially in temperate regions such as France.
With the room temperature set at 25°C instead of 15°C, the periods of the year during which free cooling can be used without activating the air-conditioning system are considerably longer. This generates significant energy savings and an improvement in PUE (Power Usage Effectiveness).
The same applies to free chilling, which can be used for more of the year to cool the water loops, with water setpoints now fixed at 15°C instead of 7°C.
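To put a number on the PUE improvement, recall that PUE is simply total facility power divided by IT power. The sketch below compares two hypothetical operating points, one on mechanical cooling and one with extended free cooling; the power figures are illustrative assumptions, not measured data.

```python
# Minimal sketch: PUE = total facility power / IT equipment power.
# The loads below are illustrative assumptions for a hypothetical 1 MW IT hall.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power Usage Effectiveness: all facility power over IT power alone."""
    return (it_kw + cooling_kw + other_kw) / it_kw

print(f"Mechanical cooling:    PUE = {pue(1000, 450, 120):.2f}")  # 1.57
print(f"Extended free cooling: PUE = {pue(1000, 180, 120):.2f}")  # 1.30
```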

What are the problems associated with too high a setpoint temperature in a data center?

Unfortunately, higher operating temperatures reduce the time available to react if temperatures rise rapidly after a cooling unit failure.
A data center whose servers already run at higher temperatures risks near-simultaneous hardware failures.
Recent ASHRAE guidelines underline the importance of proactive monitoring of environmental temperatures inside server rooms.

What happens if it gets too hot?

When the temperature inside the data center rises too high, the equipment can easily overheat.
This can damage servers.
Data could be lost, causing major problems for companies relying on data center services.
That’s why all data centers need cooling systems that can cope with a crisis or maintenance period.

What happens if the air conditioning systems break down?

Depending on the power density installed, the rise in air temperatures inside the server room can be extremely rapid.
In simulated power failures, we generally observe a rise of around 1°C per minute. The result is a significant risk of hardware degradation and data loss if redundancy and safety systems are not correctly dimensioned.
In addition, the time it takes air-conditioning compressors to restart and ramp up to full power is a real issue for the most demanding halls.
To delay the effects of rising temperatures, thermal inertia systems (such as chilled-water buffer tanks) can absorb the heat load for a few minutes, smoothing out the temperature rise curve.
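The order of magnitude of this temperature rise can be checked with a lumped heat balance on the room air, dT/dt = P_IT / (m_air · cp). The sketch below uses an assumed room volume and IT load; because it ignores the thermal mass of servers, racks and structure, it is pessimistic compared with the roughly 1°C per minute typically observed in simulations.

```python
# Minimal sketch: first-order air temperature rise after a total cooling
# failure, dT/dt = P_IT / (m_air * cp_air). Room volume and IT load are
# illustrative assumptions; neglecting the thermal mass of servers, racks
# and structure makes the estimate pessimistic.

RHO_AIR = 1.2    # kg/m3
CP_AIR = 1006.0  # J/(kg.K)

def air_temp_rise_k_per_min(it_load_w: float, room_volume_m3: float) -> float:
    """Rate of air temperature rise if no heat is removed from the room."""
    air_mass_kg = RHO_AIR * room_volume_m3
    return it_load_w / (air_mass_kg * CP_AIR) * 60.0

# Example: 100 kW of IT load in a 2000 m3 server room
print(f"{air_temp_rise_k_per_min(100_000, 2000):.1f} K/min")  # ~2.5 K/min
```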


Why run a CFD simulation of a data center?

CFD simulation provides information on the relationship between the operation of mechanical systems and variations in the thermal load of IT equipment.
With this information, IT and site personnel can optimize airflow efficiency and maximize cooling capacity.

Studying hot spots in a data center
