
Avoid data centre heatstroke this summer

What is the Proper Temperature for a Data Center?

As a data center manager or supervisor, you may find it difficult to determine the proper data center temperature. Data centers are critical to keeping confidential data safe and secure, so the servers inside them must stay operational no matter what happens outside.

It’s no secret that these servers produce heat while in operation. If no one is keeping an eye on that heat, it can lead to significant service outages and other problems. That is why you must know what the room temperature should be and how to maintain it. Below is a look at some of the issues data centers confront when temperatures rise and what you, as the supervisor, can do to alleviate them.

What Happens if it’s Too Hot?

When the data center’s temperature rises too high, the equipment can rapidly overheat. This can damage the servers and other hardware in the room, and you may also lose data stored on those servers, wreaking havoc on the business operations that depend on them. This is why every data center needs a cooling system, and why it’s critical to understand the ideal temperature for the data center so that it does not overheat.

At what temperature do servers fail?

According to most industry recommendations, data center temperatures should not exceed 82 degrees Fahrenheit; this is the highest the room should ever reach. A healthy range for server and data rooms is roughly 68 to 71 degrees Fahrenheit, although some organizations run as warm as about 80 degrees to minimize cooling equipment and energy costs.
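
As a rough illustration of those ranges, a temperature reading could be classified with a check like the sketch below. The 68–71°F band, the roughly 80°F upper figure, and the 82°F ceiling come from the guidance above; the function name and structure are purely illustrative, not a prescribed tool.

```python
# Illustrative sketch only: classify a server-room temperature reading against
# the ranges discussed above (68-71 F healthy, ~80 F upper figure, 82 F ceiling).
# Tune these thresholds to your own equipment specs and vendor guidance.

HEALTHY_RANGE_F = (68.0, 71.0)   # typical "healthy" band
UPPER_COMFORT_F = 80.0           # some organizations run this warm to save energy
ABSOLUTE_MAX_F = 82.0            # should never be exceeded

def classify_temperature(temp_f: float) -> str:
    """Return a simple status label for a temperature reading in Fahrenheit."""
    if temp_f > ABSOLUTE_MAX_F:
        return "CRITICAL: above recommended maximum"
    if temp_f > UPPER_COMFORT_F:
        return "WARNING: approaching maximum"
    if HEALTHY_RANGE_F[0] <= temp_f <= HEALTHY_RANGE_F[1]:
        return "OK: within healthy range"
    return "CHECK: outside the typical healthy band"

print(classify_temperature(70.5))  # OK: within healthy range
print(classify_temperature(83.0))  # CRITICAL: above recommended maximum
```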

An Updated Look at Recommended Data Center Temperature and Humidity

Over the years, our original post on recommended data center temperature and humidity has been one of our most popular pages and has helped thousands of customers create, monitor, and maintain suitable data center environmental conditions.

We’ve decided to update our recommendations in light of the new and revised ranges to keep you informed about your data center’s temperature and humidity requirements. IoT devices and sensors connected to your organization’s networks can also help you keep track of these conditions and ensure that your most critical resources and assets are always protected. Gartner Research estimates that a single minute of downtime costs about $5,600, or roughly $336,000 an hour. Don’t let temperature, humidity, or other environmental variables cost your firm thousands of dollars in lost income!
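
To put that figure in perspective, here is a minimal back-of-the-envelope sketch. The $5,600-per-minute average is the Gartner figure cited above; everything else (function name, example durations) is illustrative, and your organization’s actual cost per minute will differ.

```python
# Minimal sketch: estimating downtime cost from the industry-average figure above.
# Replace COST_PER_MINUTE_USD with your own organization's number.

COST_PER_MINUTE_USD = 5_600

def downtime_cost(minutes_down: float, cost_per_minute: float = COST_PER_MINUTE_USD) -> float:
    """Rough downtime cost estimate in US dollars."""
    return minutes_down * cost_per_minute

print(downtime_cost(1))    # 5600   -> about $5,600 for one minute
print(downtime_cost(60))   # 336000 -> about $336,000 for one hour
```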

Recommended Computer Room Temperature

The air inside server rooms and data centers is a mix of hot and cold air – server fans expel heated air while running, but air conditioning systems and other cooling mechanisms supply cool air to counteract all the heat. Since day one, maintaining the appropriate balance of hot and cold air has been critical for data center uptime. Equipment is more likely to fail if a data center becomes too hot. Equipment failure frequently causes downtime, data loss, and lost money.

Recommended Computer Room Humidity

Relative humidity is the amount of moisture in the air at a given temperature compared to the maximum amount of water the air could hold at that same temperature. For optimal performance and equipment longevity, a data center or computer room should keep ambient relative humidity between 45 percent and 55 percent.

The newer 2016 SIA/IPA guidelines have not substantially changed the standards: the older 2014 data center humidity targets are maintained, with a target of 50% relative humidity, a minimum of 20%, and a maximum of 80%. Because ambient cooling affects the humidity of a data center’s air, it’s critical to keep humidity at acceptable levels. When humidity is too low, static electricity builds up and can damage essential server components. Overly moist conditions, on the other hand, can lead to condensation, which causes hardware corrosion and equipment failure.

The same humidity-alert recommendations that we shared in 2005 are still valid: warning alerts should be issued when relative humidity reaches 40% or 60%, and critical alerts should be triggered at 30% or 70%.
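
In monitoring terms, those thresholds translate into a simple tiered alert, sketched below. The 45–55% target band and the 40/60% and 30/70% alert points come from the guidance above; the function itself is only an illustration of how such tiers might be wired into a monitoring script.

```python
# Illustrative sketch: tiered relative-humidity alerting based on the thresholds above.
# Target band:  45-55% RH
# Warning at:   <=40% or >=60%
# Critical at:  <=30% or >=70%

def classify_humidity(rh_percent: float) -> str:
    """Return an alert level for a relative-humidity reading (percent)."""
    if rh_percent <= 30 or rh_percent >= 70:
        return "CRITICAL"
    if rh_percent <= 40 or rh_percent >= 60:
        return "WARNING"
    if 45 <= rh_percent <= 55:
        return "OK"
    return "WATCH"  # between the target band and the warning thresholds

for reading in (50, 42, 38, 72):
    print(reading, classify_humidity(reading))
# 50 OK, 42 WATCH, 38 WARNING, 72 CRITICAL
```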

What You Need To Know About Data Center Cooling

When selecting a facility, it’s critical that it meets specific standards for running safely and effectively. Data centers are also energy-intensive because of their highly advanced infrastructure, so understanding a data center’s cooling architecture can help organizations decide whether it can safeguard and maintain their IT hardware.

Data Center Cooling Efficiency

At its core, data center cooling is the separation of hot air from cold air. When hot exhaust air is allowed to recirculate, it adds unnecessary heat, and too much warm air circulating through a data center can cause equipment to overheat and malfunction. Facilities that separate hot and cold air effectively, economically, and with regular monitoring in place are on the right track to cooling efficiency.

A good rule of thumb for data center cooling efficiency is to reduce the volume of air that needs to be cooled by using directed cooling. In this approach, each rack or cabinet is enclosed in its own contained system. Once enclosed, racks are generally positioned so that they do not draw in hot exhaust air from surrounding servers, and cable and hardware management keeps everything properly routed, resulting in less blocked airflow.

Cost-effective Energy Consumption

Data centers are energy-intensive and costly to build and run. The amount of energy a cooling system consumes also varies widely depending on the type of system the data center uses.

The design and management of a facility affect its energy use and cost-effectiveness. If a facility can reduce and optimize the number of CRAC units it uses for cooling, it should see improved energy efficiency, although the facility’s requirements limit how far this technique can go. A data center that runs hotter will need less cooling and fewer units. In other data centers, a portion of the servers may not be required and can be turned off, lowering overall energy and maintenance expenses.
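
As a back-of-the-envelope illustration of why trimming cooling units matters, the sketch below estimates annual cooling electricity cost. Every number in it (unit count, power draw per unit, electricity rate) is a hypothetical placeholder rather than a figure from this article.

```python
# Back-of-the-envelope sketch: annual electricity cost of running CRAC units.
# All inputs are hypothetical placeholders; substitute your facility's real data.

def annual_cooling_cost(num_units: int,
                        kw_per_unit: float,
                        usd_per_kwh: float,
                        hours_per_year: int = 24 * 365) -> float:
    """Estimated yearly electricity cost for continuously running cooling units."""
    return num_units * kw_per_unit * hours_per_year * usd_per_kwh

# Hypothetical example: 10 units at 30 kW each vs. an optimized 8 units, at $0.12/kWh.
before = annual_cooling_cost(num_units=10, kw_per_unit=30, usd_per_kwh=0.12)
after = annual_cooling_cost(num_units=8, kw_per_unit=30, usd_per_kwh=0.12)
print(f"Before: ${before:,.0f}  After: ${after:,.0f}  Savings: ${before - after:,.0f}")
```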

Maintenance & Monitoring

Regardless of which cooling system a company employs, upkeep and monitoring are a standard part of the equation. Monitoring and maintaining the server room’s cooling system helps IT staff determine whether the present system is adequate, and it can reassure clients who are concerned about the climate conditions around their equipment.

Proper monitoring and maintenance are preventative measures against any cooling hazards that may emerge in the data center, and the data they produce also helps managers evaluate whether their cooling systems need to be upgraded. Regular troubleshooting lets the IT department keep track of temperature and humidity and make the necessary adjustments to safeguard customers’ critical data. In addition, the company should continue to invest in new cooling technologies to maintain equipment and improve performance.
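
A minimal polling loop along these lines is sketched below, tying together the temperature and humidity thresholds discussed earlier. The two sensor-reading functions are hypothetical stand-ins for whatever sensor or IoT API your facility actually exposes; this is a sketch of the pattern, not a finished monitoring product.

```python
import time

# Minimal monitoring sketch: poll temperature and humidity on an interval and
# flag readings outside the ranges discussed in this article. The read_* helpers
# are hypothetical placeholders for your real sensor/IoT integration.

def read_temperature_f() -> float:
    return 74.0   # placeholder value; replace with your sensor call

def read_humidity_percent() -> float:
    return 48.0   # placeholder value; replace with your sensor call

def check_environment() -> list:
    alerts = []
    temp_f = read_temperature_f()
    rh = read_humidity_percent()
    if temp_f > 82:
        alerts.append(f"Temperature critical: {temp_f:.1f} F")
    elif temp_f > 80:
        alerts.append(f"Temperature warning: {temp_f:.1f} F")
    if rh <= 30 or rh >= 70:
        alerts.append(f"Humidity critical: {rh:.0f}%")
    elif rh <= 40 or rh >= 60:
        alerts.append(f"Humidity warning: {rh:.0f}%")
    return alerts

if __name__ == "__main__":
    while True:
        for alert in check_environment():
            print(alert)   # in practice: email, SMS, or a ticket via your alerting system
        time.sleep(300)    # poll every five minutes
```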

Current Cooling Systems & Methods 

One of the main reasons organizations move their on-premises data centers to colocation is the high cost of cooling infrastructure. Unfortunately, most private data centers and telco closets are highly inefficient at cooling IT infrastructure. A further disadvantage is that virtualized infrastructure often gives organizations no real-time view of their applications, making it increasingly challenging to optimize the IT environment and reduce cooling needs.

Calibrated Vectored Cooling (CVC)

CVC is a specialized kind of data center cooling designed for high-density servers. The solution’s tandem fans can be switched to flow in either direction, allowing the system to direct excess heat away from the integrated circuit boards.

Chilled Water System

A chilled water system is a typical data center cooling setup used in mid-to-large-sized facilities. It uses chilled water to cool the air brought in by computer room air handlers (CRAHs), with a chiller plant supplying the water.

Cold Aisle/Hot Aisle Containment

Data center server racks are frequently laid out in alternating rows of “cold aisles” and “hot aisles.” Cold aisles face the cold-air intakes at the front of the racks, while hot aisles face the hot-air exhausts at the back. The hot aisles expel warm air toward the air conditioning intakes, where it is chilled and then vented into the cold aisles. To prevent overheating, empty rack positions are filled with blanking panels.

Computer Room Air Conditioner (CRAC)

CRAC units are mechanical cooling and condensing systems that resemble conventional air conditioners, driven by a compressor that draws air across a refrigerant-filled cooling unit. Although they are highly inefficient in terms of energy usage, the units themselves are relatively inexpensive.

Computer Room Air Handler (CRAH)

A CRAH unit is part of a larger infrastructure that includes a chilled water plant (or chiller) somewhere in the facility. Chilled water from the chiller flows through a cooling coil inside the unit, and modulating fans draw air across the coil. Because they rely on outside air for chilling, CRAH units are far more efficient in locations with colder year-round temperatures.

Critical Cooling Load

This value indicates the total usable cooling capacity (usually measured in watts of power) available on the data center floor for cooling servers.
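
A rough way to approximate this number is to add up the power drawn by the IT equipment on the floor, since essentially all of that electrical power ends up as heat. The sketch below does that and also converts watts to BTU/hr (1 W is roughly 3.412 BTU/hr); the rack wattages are hypothetical examples, not figures from this article.

```python
# Rough sketch: estimating critical cooling load from IT power draw.
# Nearly all power consumed by servers is released as heat, so the sum of
# IT loads is a reasonable first approximation of the cooling required.
# Rack wattages below are hypothetical examples.

WATTS_TO_BTU_PER_HOUR = 3.412  # 1 watt is roughly 3.412 BTU/hr

rack_loads_watts = {
    "rack-A1": 4_500,
    "rack-A2": 6_000,
    "rack-B1": 8_200,
}

total_watts = sum(rack_loads_watts.values())
print(f"Critical cooling load: {total_watts} W "
      f"(~{total_watts * WATTS_TO_BTU_PER_HOUR:,.0f} BTU/hr, "
      f"~{total_watts / 1000:.1f} kW)")
```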

Evaporative Cooling

Evaporative cooling extracts heat from the air by exposing hot air to water; as the water evaporates, it absorbs heat from the air. The water may be introduced as a fine mist or through a damp medium such as a filter or mat. Although this method is highly energy-efficient because it doesn’t rely on CRAC or CRAH units, it does require a significant quantity of water. Data center cooling towers are often used to assist evaporation and transfer excess heat to the outside environment.

Free Cooling

Free cooling uses outside air to supply cooler air to the servers instead of continually chilling the same air. In certain regions, this is a very energy-efficient way to cool servers.

Raised Floor

A raised floor elevates the data center floor above the building’s concrete slab, and the gap between the two is used for water-cooling pipes or extra airflow. Although power and network cables are sometimes run through this space as well, newer data center cooling architectures and best practices recommend routing those cables overhead instead.

Future Cooling Systems & Technologies

Although air cooling technology has evolved significantly over time, it is still hampered by fundamental issues. Air conditioning systems consume a great deal of data center floor space and require significant amounts of energy. They also introduce moisture into otherwise sealed environments and are a notorious source of equipment failures.

Until recently, data centers had no cooling options other than hiring third-party providers. However, with so many new liquid cooling technologies and methods now accessible, data center colocation facilities are beginning to test new approaches to overcoming their cooling difficulties.

Liquid Cooling Technologies

The newest generation of liquid cooling systems is more efficient and effective than previous versions. Unlike air cooling, which uses a great deal of power and introduces pollutants and condensation into the data center, liquid cooling is cleaner, more scalable, and highly targeted. The two most popular liquid cooling approaches are full immersion cooling and direct-to-chip cooling.

Immersion Cooling

Immersion systems submerge the hardware in a bath of non-conductive, non-flammable dielectric fluid, held in a leak-proof case along with the hardware. The dielectric fluid absorbs heat far more effectively than air: heated fluid turns to vapor, then condenses and falls back into the bath to assist in cooling.

Direct to Chip Cooling

Direct-to-chip cooling uses tubes that deliver liquid coolant directly to a cold plate, which is then placed on top of the motherboard’s chips and draws heat away. The heat is then carried over to a chilled-water loop and sent back to the cooling plant, where it will be discharged into the outside environment. Both procedures result in far more efficient data center cooling systems.
