What is hot/cold aisle?
The hot and cold aisles in the data center are part of an energy-efficient layout for server racks and other computing equipment. The goal of a hot/cold aisle configuration is to manage airflow in a way that conserves energy and lowers cooling costs.
In its simplest form, hot aisle/cold aisle data center design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. The rows composed of rack fronts are called cold aisles. Typically, cold aisles face air conditioner output ducts. The rows the heated exhausts pour into are called hot aisles. Typically, hot aisles face air conditioner return ducts.
A containment system isolates hot aisles and cold aisles from each other and prevents hot and cold air from mixing. Containment systems originally were physical barriers that separated the hot and cold aisles with vinyl plastic sheeting or plexiglass covers. Today, vendors offer plenums and other commercial options that combine containment with variable fan drives to prevent cold air and hot air from mixing.
The importance of using hot/cold aisles
The principal reason for configuring data centers with hot and cold aisles is to manage heating, ventilation and air conditioning (HVAC) systems in the most effective way to conserve energy. Data centers that have not been retrofitted with hot/cold aisles are likely to use more energy.
Considering how energy costs have increased in recent years, it behooves data center managers to consider replacing their legacy rack configurations with hot/cold aisle arrangements. As with any major change to a data center, the capital expenditure must be carefully considered. Data center expansions should be designed with hot and cold aisles as part of an overall green data center strategy.
Best practices for hot/cold aisle containment
Four best practices when implementing a hot and cold aisle containment layout are the following:
- Raise the floor 1.5 feet so air conditioning equipment can push cold air through the underfloor space.
- Deploy high cubic-feet-per-minute rack grills that have airflow output in the range of 600 CFM.
- Place devices with side or top exhausts in their own part of the data center.
- Install automatic doors in the data center.
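Grill airflow requirements like the 600 CFM figure above follow from rack heat load. A common HVAC rule of thumb relates watts of heat to airflow as CFM = (watts × 3.412) / (1.08 × ΔT°F), where ΔT is the air temperature rise across the rack. The sketch below applies that rule; the 5 kW rack load and 20°F rise are illustrative assumptions, not figures from this article:

```python
def required_cfm(watts: float, delta_t_f: float) -> float:
    """Estimate the airflow (CFM) needed to remove a sensible heat load.

    Rule of thumb: BTU/hr = watts * 3.412, then
    CFM = BTU/hr / (1.08 * delta_t_f), where delta_t_f is the
    air temperature rise across the rack in degrees Fahrenheit.
    """
    btu_per_hr = watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

# Illustrative values: a 5 kW rack with a 20 F front-to-back rise.
cfm = required_cfm(5000, 20)
print(f"{cfm:.0f} CFM")  # roughly 790 CFM
```

By this estimate, a single 600 CFM grill comfortably serves racks below about 4 kW at a 20°F rise; denser racks need more grills or a larger temperature differential.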
Legacy data centers vs. hot/cold aisles
Equipment racks in data centers are used to secure servers, communications equipment, power supplies and air-handling equipment. Data centers usually have cooling units that must be strategically positioned for optimum airflow.
Legacy data center layout
A traditional data center configuration is shown in Figure 1. Cool air enters the fronts of the servers, absorbs heat and is expelled by the equipment fans as exhaust. That exhaust travels into the next aisle, where it is drawn through the next rack and heated further, and so on across the room. In such an arrangement, the ambient temperature of each successive aisle gets warmer, and the cooling system has to work harder to keep the computer room air at the correct temperature.
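The cascading effect described above can be sketched numerically. Assuming a 65°F supply temperature and a 15°F rise per rack (both illustrative figures, not values from this article), each row's intake in the legacy layout is the previous row's exhaust:

```python
def legacy_intake_temps(supply_f: float, rise_f: float, rows: int) -> list[float]:
    """Intake temperature seen by each successive row when every row
    inhales the exhaust of the row before it (legacy layout)."""
    temps = []
    intake = supply_f
    for _ in range(rows):
        temps.append(intake)
        intake += rise_f  # this row's exhaust becomes the next row's intake
    return temps

# Illustrative: 65 F supply, 15 F rise per rack, three rows.
print(legacy_intake_temps(65.0, 15.0, 3))  # [65.0, 80.0, 95.0]
```

By the third row, equipment is breathing 95°F air, which is the compounding load a hot/cold aisle layout eliminates by returning every rack's exhaust to the air conditioner instead of to the next rack.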
Hot/cold aisle layout
By contrast, when racks are grouped so that intake and exhaust sides alternate, they create hot and cold aisles, as shown in Figure 2. Because exhaust air is never drawn back through another rack, heat stays concentrated in the hot aisles and is routed directly to the air conditioner returns, while the cold aisles receive only conditioned supply air, so intake temperatures remain low. The computer room air conditioning system works less, and the overall arrangement provides more efficient air handling and cooling.
Migrating to hot/cold aisles
When building a new data center, hot/cold aisles can be part of the design from the start. When upgrading a legacy data center to a modern hot/cold aisle arrangement, the process is more complicated. Designers should do a cost-benefit analysis to determine if the investment will generate sufficient savings and return on investment. Among the factors to consider are the following:
- hiring data center architects and specialized environment and power engineers;
- costs to move equipment racks and reconfigure aisles;
- costs to reconfigure routing of electrical cables and power distribution units;
- HVAC system modifications or replacement;
- system downtime during relocation of equipment and racks; and
- labor costs to perform all of the above.
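The cost-benefit analysis above amounts to weighing one-time migration costs against recurring cooling savings. A minimal payback-period sketch follows; all dollar figures are hypothetical placeholders for illustration, not estimates from this article:

```python
def payback_years(migration_cost: float, annual_savings: float) -> float:
    """Simple payback period: years for cooling savings to repay
    the one-time cost of migrating to hot/cold aisles."""
    return migration_cost / annual_savings

# Hypothetical figures: sum of architects, rack moves, cabling,
# HVAC changes, downtime and labor vs. yearly cooling savings.
cost = 250_000
savings = 60_000
print(f"Payback: {payback_years(cost, savings):.1f} years")  # about 4.2 years
```

A shorter payback period than the data center's planned remaining life argues for the retrofit; a longer one suggests folding hot/cold aisles into the next expansion instead.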