Twenty-first-century data centers are a multi-dimensional mix of fixed systems, movable systems, and people. When deploying their many components, every last detail must be precisely planned, tested, and orchestrated to sustain the requirements of today's organisations and of future technologies.
At the center of it all, among the servers, storage devices, and interconnected data connectivity infrastructure, sits the cooling technology. Data center developers and operators have poured enormous time and resources into installing both air- and liquid-cooled technologies to improve energy efficiency and keep large-scale deployments running smoothly.
Yet these systems have their limits. Average rack density has climbed by almost a kilowatt in less than a year, driven by the rise of AI, machine learning, and other compute-heavy workloads. An efficient data center layout can sustain an increase in rack density, which in turn clamps down on overall build costs by reducing the total space a deployment requires. And while designers routinely run functional models and analyses to prove that heat management is sufficient, there are risks that even high-end analysis won't show.
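As a rough illustration of how density drives footprint, the minimal Python sketch below sizes white space for a fixed IT load; the 1.2 MW load, the 2.5 m² all-in allowance per rack, and the density tiers are illustrative assumptions, not figures from any real facility.

```python
import math

# Hedged sketch: higher rack density shrinks the footprint (and hence part
# of the build cost) of a fixed IT load. All numbers are assumptions.

def racks_and_area(total_it_load_kw: float, rack_density_kw: float,
                   m2_per_rack: float = 2.5) -> tuple[int, float]:
    """Racks needed and white-space area for a given total IT load.

    m2_per_rack is an assumed all-in allowance per rack (footprint plus
    aisle and service clearance).
    """
    racks = math.ceil(total_it_load_kw / rack_density_kw)
    return racks, racks * m2_per_rack

for density in (6, 12, 24):  # kW per rack
    racks, area = racks_and_area(total_it_load_kw=1200, rack_density_kw=density)
    print(f"{density:>2} kW/rack -> {racks:>3} racks, ~{area:.0f} m^2")
# Doubling density roughly halves the rack count and floor area for 1.2 MW.
```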
Rising Heat Risks
In many high-end facilities, an uninterruptible power supply (UPS) with battery backup prevents power fluctuations from dropping load and harming internal hardware, carrying the load until the emergency generator can start. This mechanism does protect the IT equipment, but cost and budget constraints often leave the core mechanical systems out.
Generators are started for several reasons, especially during testing, maintenance, and repairs. These occurrences are precisely why redundant units are provided in parallel-maintainable facilities.
Whatever the situation, backup generators can take up to several seconds to come online and restore power. That may seem unremarkable in a traditional data center, but in a high-density environment it can have devastating consequences. Even carefully built models and preventive measures are unlikely to reveal the heat-related risks below.
Thermal risk:
The heat generated by a high-performance data center can be extreme, and coolant and cooling systems need room to recover against rising temperatures after an outage. Much like a runner who starts poorly and never catches the leader, a cooling system that cannot pull temperatures back down will keep falling behind in a high-density deployment. A data center that takes the short-sighted view of protecting only the IT load with a UPS demonstrates an incomplete understanding of its mechanical systems' limitations, and relying solely on functional analysis for design insight leaves it exposed to business interruption from excessive heat build-up.
This happens because an IT load protected only by a UPS does not stop producing heat while it waits for facility power to be restored. With nothing removing that heat, it keeps building until the space overheats past the point of recovery, and that can happen in a matter of seconds.
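To put a number on "a matter of seconds", here is a back-of-the-envelope Python sketch of the worst case, in which all IT power heats the room air alone (ignoring the thermal mass of racks, walls, and floor); the 120 kW load and 150 m³ aisle volume are illustrative assumptions.

```python
# Hedged worst-case sketch: how fast air temperature rises when cooling stops.
AIR_DENSITY = 1.2           # kg/m^3, air near room temperature
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)

def temp_rise_per_second(it_load_kw: float, room_volume_m3: float) -> float:
    """K/s of air temperature rise if no heat is removed.

    Assumes every watt of IT load goes into the room air, so this is an
    upper bound rather than a prediction.
    """
    air_mass_kg = AIR_DENSITY * room_volume_m3
    return (it_load_kw * 1000.0) / (air_mass_kg * AIR_SPECIFIC_HEAT)

# Example: ten 12 kW racks sharing a 150 m^3 containment aisle.
rate = temp_rise_per_second(it_load_kw=120, room_volume_m3=150)
print(f"~{rate:.2f} K per second")  # ~0.66 K/s, i.e. ~10 K in 15 seconds
```

Even with the extra thermal mass a real room provides, the sketch shows why a few seconds of generator start time can matter at high density.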
Tapping into wasted energy:
Sometimes the problem is not overheating but overcooling, with a great deal of energy wasted managing hot spots in the racks. This happens when cooling is not targeted: high-density racks call for advanced delivery methods, yet many data centers simply flood the room with cold air or coolant without understanding what each rack actually needs. The approach fails because it never addresses the exact pain point, and overcooling has dogged data centers for years for exactly this reason.
While such situations persist, targeted or close-coupled cooling benefits high-density racks, and it also balances temperatures across the entire row, minimising waste.
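A minimal sketch of the sizing difference, assuming a made-up row with a single hot rack; real CRAC/CRAH and in-row sizing is more nuanced than this, but the shape of the waste is the same.

```python
# Hedged sketch: flooding a room to protect its hottest rack vs matching
# close-coupled cooling to each rack. Loads below are made-up figures (kW).
rack_loads_kw = [4, 5, 6, 6, 12]  # one 12 kW hot spot among low-density racks

# Flooded-room approach: every rack position gets enough cooling for the
# worst rack, so provisioned capacity = hottest rack * rack count.
flooded_kw = max(rack_loads_kw) * len(rack_loads_kw)

# Close-coupled approach: each in-row or rear-door unit is matched to the
# rack it serves, so provisioned capacity = sum of actual loads.
targeted_kw = sum(rack_loads_kw)

print(f"Flooded room : {flooded_kw} kW provisioned")
print(f"Close-coupled: {targeted_kw} kW provisioned")
print(f"Excess       : {flooded_kw - targeted_kw} kW "
      f"({100 * (flooded_kw / targeted_kw - 1):.0f}% over)")
```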
Lack of control:
Data center operators are often compelled to increase the power density of their equipment. If rack power surges suddenly, say from 6 to 12 kilowatts, systems and units can become unbalanced and trip offline, and the hardware and ancillary mechanical plant in particular take time to recover. Moving parts, controls, and sequences need to be commissioned so that they recover quickly. Controllers themselves sometimes malfunction, so it is important to plan ahead for any kind of hindrance and install systems that can easily restart these units during an outage.
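As a toy illustration of the kind of staggered restart sequencing that commissioning should produce, here is a minimal Python sketch; the unit names, start order, and (demo-compressed) delays are assumptions, and a real plant would run such sequences on its BMS or PLC with status feedback rather than open-loop timers.

```python
import time

# Hedged sketch: bring mechanical plant back in a fixed, staggered order so
# nothing trips on inrush. Delays are compressed for the demo; real staggers
# are typically tens of seconds.
RESTART_SEQUENCE = [
    ("chilled-water pumps", 0.0),  # move coolant first
    ("CRAH / in-row fans", 0.5),   # restore airflow next
    ("chiller compressors", 2.0),  # load the compressors last
]

def restart_cooling_plant(start_unit) -> None:
    """Issue start commands in sequence once power returns.

    start_unit is a callable taking the unit name; here a stub that prints.
    """
    t0 = time.monotonic()
    for unit, delay_s in RESTART_SEQUENCE:
        while time.monotonic() - t0 < delay_s:  # open-loop stagger
            time.sleep(0.05)
        start_unit(unit)

restart_cooling_plant(lambda unit: print(f"starting {unit}"))
```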
Continued analysis, study, and innovation in this area is important as the world grows more dependent on data and technology by the day. With advanced technological interventions, data centers will be able to sustain high-density deployments.
With the ever-increasing demand for faster and more powerful computing, data centers are continuously looking to outperform themselves, and they need to be fully equipped with best-in-class solutions. Tyrone offers end-to-end data center computing solutions that enable you to address all these parameters in a cost-effective manner.