Data centers have always required specialized cooling solutions, and the increasing complexity of modern workloads is pushing thermal management to its limits. According to a recent report by Vertiv, innovation in data centers continues to be shaped by macro forces and technology trends related to AI.

Drawing on expertise from across the organization, the Vertiv Frontiers study details the technology trends driving current and future innovation in data centers, from power boosting for AI to digital twins and adaptive liquid cooling.

Extreme densification

“The data center industry continues to rapidly evolve the way it designs, builds, operates and maintains data centers in response to the density and deployment-speed demands of AI factories,” said Scott Armul, chief product and technology officer at Vertiv. “We see cross-cutting forces across technologies, including extreme densification, driving transformative trends such as higher-voltage direct current (DC) power architectures and advanced liquid cooling, which are essential to the gigawatt-level scaling that AI innovation requires. On-site power generation and digital twin technology are also expected to help advance the scale and speed of AI adoption in data centers.”

The Vertiv Frontiers report, which builds on the company's previous annual predictions, identifies four macro forces driving innovation in data centers:

- Extreme densification, accelerated by AI and HPC workloads.
- Gigawatt-level scaling at high speed: data centers are now being deployed rapidly and at an unprecedented scale.
- The data center as a computing unit: the AI era requires data centers to be built and operated as a single system.
- Silicon diversification, which requires data center infrastructure to adapt to an increasing range of chips and computing capabilities.

The 5 key trends in data centers

The report details how these macro forces have shaped five key trends affecting specific areas of the data center landscape.

1. Energy boost for AI

Most data centers today still rely on hybrid AC/DC power distribution from the grid to the IT racks, involving three to four conversion stages, each of which introduces losses. This approach is coming under pressure as power densities rise, driven largely by AI workloads. Moving to higher-voltage DC architectures significantly reduces current, conductor size, and the number of conversion stages, while centralizing power conversion at room level. On-site generation and microgrids will also drive this evolution.
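The physics behind this trend is simple: for the same delivered power, a higher bus voltage means proportionally less current (P = V × I), and resistive cable losses fall with the square of the current (I²R). The sketch below illustrates the effect with purely hypothetical round numbers (rack power, voltages, and cable resistance are illustrative assumptions, not figures from the Vertiv report).

```python
# Illustrative only: compare current and resistive (I^2 * R) losses when
# delivering the same rack power over DC buses at different voltages.
# All numbers are hypothetical round figures, not data from the report.

def feeder_loss(power_w: float, voltage_v: float, resistance_ohm: float):
    """Return (current_a, loss_w) for a simple two-conductor feeder."""
    current = power_w / voltage_v          # P = V * I  ->  I = P / V
    loss = current ** 2 * resistance_ohm   # Joule heating in the cable
    return current, loss

RACK_POWER_W = 120_000       # a hypothetical 120 kW AI rack
CABLE_RESISTANCE = 0.005     # ohms; same cable assumed in every case

for bus_voltage in (48, 400, 800):
    amps, loss_w = feeder_loss(RACK_POWER_W, bus_voltage, CABLE_RESISTANCE)
    print(f"{bus_voltage:>4} V DC: {amps:8.1f} A, cable loss {loss_w/1000:6.2f} kW")
```

With these made-up numbers, moving from 48 V to 800 V cuts the current from 2,500 A to 150 A, which is why higher-voltage distribution allows thinner conductors and fewer, simpler conversion stages.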

2. Distributed AI

The billions of dollars invested to date in AI data centers supporting large language models (LLMs) have gone toward enabling widespread adoption of AI tools by consumers and businesses. Vertiv believes that, even as AI becomes increasingly important, on-premises, private, and hybrid data centers will remain key for highly regulated sectors such as finance, healthcare, and defense, owing to data residency, security, and latency requirements.

3. Energy autonomy accelerates

On-site power generation has been essential to the resilience of many data centers for decades. However, current challenges in power availability are driving stronger autonomy strategies, especially in data centers dedicated to AI. Investment in self-generation and in models such as Bring Your Own Power (and Cooling) will become increasingly common.

4. Design and operations powered by digital twins

As AI workloads grow denser, data centers need to deploy complex infrastructure more quickly. With digital twins, facilities can be designed, simulated, and validated virtually as computing units, significantly reducing time-to-token and facilitating gigawatt-level scaling.

5. Adaptive and resilient liquid cooling

Liquid cooling has become critical for a growing number of data centers, driven by the high density of AI workloads. In turn, AI makes it possible to optimize the liquid cooling systems themselves, anticipating failures and improving operational efficiency, which translates into greater reliability and uptime for the hardware and the data it serves.
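One simple way such failure anticipation can work is by flagging telemetry that drifts far from its recent baseline before a hard fault occurs. The sketch below is a minimal, hypothetical illustration of that idea (the coolant temperatures, window size, and threshold are made up; production systems use far richer models and telemetry than a rolling average).

```python
# A minimal sketch of anticipating liquid-cooling faults: flag coolant
# temperature readings that drift well beyond a rolling baseline.
# Sensor values and thresholds are invented for illustration.

from collections import deque
from statistics import mean, stdev

def drift_alerts(readings, window: int = 5, n_sigma: float = 3.0):
    """Yield (index, value) for readings far from the rolling baseline."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > n_sigma * sigma:
                yield i, value
                continue  # don't fold the outlier into the baseline
        recent.append(value)

# Hypothetical coolant supply temperatures (deg C); the jump at the end
# might indicate a failing pump or a blocked loop.
temps = [17.0, 17.2, 16.9, 17.1, 17.0, 17.1, 16.8, 17.2, 24.5]
print(list(drift_alerts(temps)))  # the final reading is flagged
```

Raising an alert on the drift, rather than waiting for a thermal trip, is what buys operators time to intervene before hardware uptime is affected.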