ASHRAE standard 90.4 updates emphasize green energy

New addenda to ASHRAE 90.4 simplify mechanical load component calculations and modify the definition of UPS, among other changes. Use the updates to guide green data center design.

ASHRAE has set standards and best practices for data center design for many years and continues to move the industry forward. ASHRAE standards are published on three-year cycles, with the next editions of Standards 90.4 and 90.1 coming in 2023.

Three ASHRAE groups are of particular importance to the data center industry:

  • Technical Committee (TC) 9.9, Mission Critical Facilities, Data Centers, Technology Spaces and Electronic Equipment.
  • Standing Standard Project Committee (SSPC) 90.4, which publishes ANSI/ASHRAE Standard 90.4, Energy Standard for Data Centers.
  • SSPC 90.1, which publishes ANSI/ASHRAE/IES Standard 90.1, Energy Standard for Sites and Buildings except Low-Rise Residential Buildings.

Standard 90.4 will incorporate six important addenda that have completed public review.

Standard 90.4 vs. 90.1

Standard 90.4 is a "sister standard" to Standard 90.1, which continually pushes the design and construction industry toward more energy-efficient practices. Standard 90.4 is referenced in Standard 90.1 as the preferred method to demonstrate that organizations use energy-efficient methods and equipment in the data center design stage. The 2023 publication will incorporate the updated addenda described below.

Because virtually every U.S. jurisdiction adopts Standard 90.1 as code, as do many other countries, 90.4 also becomes code wherever 90.1 is recognized. Some states have also independently adopted 90.4 and it is further referenced in the 2021 version of the International Energy Conservation Code (IECC).

Addenda changes in ASHRAE standard 90.4

ASHRAE Standard 90.4 updates in 2022 include changes to energy use calculations and to compliance where systems are shared with spaces outside the data center.

Addendum "a" encourages heat recovery within data centers and allows a credit under specific circumstances. It also provides equations to calculate the annualized mechanical load component (MLC).
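
The standard's actual annualized MLC equations are defined in the addendum itself; as a rough illustration of the underlying idea only (the simple ratio and all figures below are invented for this sketch, not taken from the standard), an annualized MLC expresses annual mechanical system energy as a fraction of annual IT equipment energy:

```python
# Illustrative sketch only: Standard 90.4's actual annualized MLC equations
# account for climate data and specific equipment segments. This toy version
# just expresses the core ratio with hypothetical numbers.

def annualized_mlc(mechanical_kwh_by_month, it_kwh_by_month):
    """Annual mechanical energy divided by annual IT equipment energy."""
    return sum(mechanical_kwh_by_month) / sum(it_kwh_by_month)

# Hypothetical monthly energy figures (kWh) for a small data center
mechanical = [42_000, 39_000, 41_000, 45_000, 52_000, 60_000,
              65_000, 64_000, 55_000, 47_000, 43_000, 41_000]
it_load = [300_000] * 12  # steady IT load each month

print(round(annualized_mlc(mechanical, it_load), 3))  # prints 0.165
```

A lower ratio means the mechanical plant consumes less energy per unit of IT work, which is the behavior the MLC limits are meant to drive.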


Addendum "b" clarifies requirements in Sections 6 (Mechanical) and 11 (Tradeoffs), where organizations take credits for renewable energy systems.

Addendum "d" modifies the uninterruptible power supply (UPS) definition to include diesel rotary UPS and provides the method of accounting for those systems in calculating the electrical loss component (ELC).

Addendum "e" clarifies how to achieve compliance with Standard 90.4 when shared systems, such as central chiller plants, serve both a data center and the rest of a building.

Addendum "f" does three things.

  • Requires more efficient UPS systems because manufacturers have flattened the efficiency curves, improving mainly the lower load ranges where redundant systems generally operate.
  • Requires examination of distribution transformer curves at the same four load points required for UPSes, because federal EPA standards rate transformers at only one operating point, which is not meaningful for data center power distribution units.
  • Eliminates the incoming service segment from the ELC calculation, recognizing that it is a minor factor in data center efficiency and has too many variations for realistic calculation.
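
Standard 90.4 evaluates electrical losses segment by segment; the sketch below is a simplified illustration, not the standard's actual ELC method, and all efficiency figures are hypothetical. It shows why dropping the incoming service segment has only a small effect on the combined loss figure:

```python
# Simplified illustration of combining electrical path segments.
# Segment names and efficiency values are hypothetical; Standard 90.4
# prescribes the actual segments, load points and compliance maxima.

def combined_efficiency(segment_efficiencies):
    """Series path efficiency is the product of segment efficiencies."""
    eff = 1.0
    for e in segment_efficiencies:
        eff *= e
    return eff

with_service = combined_efficiency([0.995, 0.96, 0.985])  # service, UPS, distribution
without_service = combined_efficiency([0.96, 0.985])      # incoming service dropped

print(f"loss with service:    {1 - with_service:.4f}")
print(f"loss without service: {1 - without_service:.4f}")
```

With an incoming service segment near 99.5% efficient, its contribution is small next to the UPS and distribution segments, which is consistent with the addendum's rationale for removing it.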

An additional Addendum "g" could not complete public review in time for incorporation into the 2023 publication but is expected to be published separately later in 2023. It should bring the MLC calculation more in line with the segmented ELC approach, simplifying tradeoffs among the mechanical system components.

It should also simplify compliance when only partial upgrades are made and Section 11 of the standard (Tradeoffs) does not apply.

However, Addendum "g" will also require the inclusion of process heat and ventilation energy in the MLC calculation to encompass standby generator heaters, cabinet door cooler fans and auxiliary pumps for liquid cooling systems. It will also require the inclusion of energy to raise the temperature of a liquid or air stream needed for humidity control or to prevent condensation on windows.

Currently, only heat from UPS losses is included in the MLC calculation when fans or pumps are on UPS. Higher MLC values are proposed to accommodate these additional energy-consuming devices, along with changes to anticipate increased use of liquid-cooled IT systems. The definition for area, as used in the watts per square foot determination, will also now match the definition in Standard 90.1.
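
As a hypothetical illustration of a watts-per-square-foot determination (the figures are invented, and the standard itself defines exactly which power and which area to use):

```python
# Illustrative power-density calculation with hypothetical figures.
# Standard 90.4 defines the design power and area terms; the area
# definition will now match the one in Standard 90.1.

def power_density_w_per_sqft(design_it_watts, area_sqft):
    return design_it_watts / area_sqft

print(power_density_w_per_sqft(500_000, 5_000))  # 500 kW over 5,000 ft² -> 100.0
```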

Updates to Thermal Guidelines for Data Processing Environments

TC 9.9's 2004 publication of the Thermal Guidelines for Data Processing Environments was arguably one of the most significant advancements in the industry. The fifth edition of Thermal Guidelines, now one of the 14 books in the TC 9.9 "Datacom" series, makes three important changes.

  • Air-cooled humidity limits now differentiate between corrosive and noncorrosive conditions. With high gaseous contaminants, relative humidity is limited to 50% but can rise to 70% in noncorrosive environments. The low limit remains at 8% RH, based on 2005 ASHRAE research showing that low humidity will not expose IT equipment to damaging static discharge in properly grounded rooms. Although corrosion is related to relative humidity, TC 9.9 continues to recommend controlling data center humidity by dew point temperature.
  • The fifth edition adds Class "H-1" for very high-density situations and limits inlet air temperature to only 25°C (77°F) to maintain chip junction temperatures.
  • Class names have been changed. Classes W17, W27 and W32, and new classes W40, W45 and W+, relate to entering water temperature limits of 17°C, 27°C, 32°C, 40°C, 45°C and higher than 45°C (62.6°F, 80.6°F, 89.6°F, 104°F, 113°F and above 113°F). The edition also provides guidance regarding liquid cooling pressures.
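
The Fahrenheit equivalents of those water-temperature class limits follow directly from the standard Celsius-to-Fahrenheit conversion, and are easy to verify:

```python
# Verify the Fahrenheit equivalents of the liquid-cooling class limits.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

for limit_c in (17, 27, 32, 40, 45):
    print(f"W{limit_c}: {limit_c}°C -> {c_to_f(limit_c):.1f}°F")
```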
