Last week Vertiv held its traditional meeting with the European press. On this occasion, the gathering took place at the multinational’s factory and laboratory near Padua, Italy. Over the course of a full day, the company emphasized that the rapid evolution of artificial intelligence and data center technology is revolutionizing IT infrastructure globally.

Along these lines, Karsten Winther, president of Vertiv in EMEA, highlighted that “technological advancement requires power, cooling and modularity solutions to support growing computing power.” In his experience, automated delivery using robots is already a tangible example of this transformation.

Against this background, Winther highlighted the strategic importance of Vertiv’s collaboration with NVIDIA to develop technologies that enable greater densification and efficient management of the heat generated by intensive computing. The intensive use of artificial intelligence is directly driving up hardware power consumption. The executive, who will leave his position on December 31, stressed that largely manual, high-precision manufacturing and the rapid expansion of Vertiv’s production capacity are essential to meet the industry’s accelerating demand. He also pointed out that the modular customization of data centers is key to maintaining the pace of innovation and responding to changing market needs.

For Winther, the collaboration with NVIDIA is essential: “When NVIDIA launched Hopper GPU technology, our industry was just beginning to understand how to manage it. Now we work closely in research and development to enable all that computing power.”

Carlo Ruiz, Vice President of Operations for EMEA at NVIDIA, assured that “at NVIDIA we believe that each nation must have the critical infrastructure to drive its own journey in AI and create its own intelligence under its own rules and regulations: that is sovereignty in AI. The current challenge goes far beyond having GPUs: the complexity of an ‘AI factory’ requires extreme design at the system level, including CPUs, DPUs, networks and specialized software. Our differentiation is the extreme co-design of all this infrastructure.”

Ruiz stressed that more than 80% of the engineering effort at NVIDIA is dedicated to software tools for these AI factories, since the company considers software the true enabler that turns the infrastructure’s raw capacity into usable intelligence. “Today, most technology is used for inference at scale, surpassing the era of large model training,” he stated. “Massive inference is the most complex challenge in AI today and requires robust architectures, low latency, high energy efficiency and controlled costs.”

Challenges and solutions in AI data centers

Winther explained that the exponential growth of computing power brings critical challenges, especially in power and cooling management. “We are talking about going from 70 kilowatts two years ago to double that figure within the next 24 months. The key, therefore, is to create technology that delivers more power in less physical space and manages the heat generated efficiently.”

This was evident during the visit to the factory and laboratory in Italy. Vertiv has had to double its production capacity with specialized factories, many of them relying on a high degree of manual manufacturing because of the precision required. That capacity is essential to meet today’s demand for AI and data centers. As Winther put it, “if we couldn’t increase this capacity, the world would not be able to benefit from AI as quickly as necessary.”

Modularity and education for rapid adaptation

The speed with which new data centers are deployed is another of the key aspects defining current technology. Winther said that what previously took two years can now be completed in six months thanks to modular prefabrication and reduced on-site work: “We are talking about systems that can be designed and built to be flexible in the face of changes in density or future use of the data center.” In addition, Vertiv invests heavily in the continuous education and training of its teams and clients, aware that the most advanced technology only works with trained professionals. “AI is made by people, and those people must be prepared to get the most out of it,” said the manager.

Liquid cooling for AI

Within this context, Vertiv presented the Vertiv CoolCenter Immersion cooling system, expanding its global liquid cooling portfolio to support AI and high-performance computing (HPC) environments ready to take advantage of the superior thermal properties of liquid cooling.

Immersion cooling submerges entire servers in a dielectric liquid, providing efficient and uniform heat removal across all components in environments where power densities and thermal loads exceed the limits of traditional air cooling. Vertiv CoolCenter Immersion operates as a complete liquid cooling architecture, enabling reliable heat removal for dense computing loads ranging from 25 kW to 240 kW per system.
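As a rough illustration of the thermal arithmetic behind such systems (a minimal sketch using the standard sensible-heat relation Q = ṁ·cp·ΔT; the coolant specific heat and temperature rise below are generic assumptions for a dielectric fluid, not Vertiv specifications), one can estimate the coolant flow needed to carry away a given heat load:

```python
# Estimate the coolant mass flow required to remove a given heat load,
# using the sensible-heat relation Q = m_dot * cp * delta_T.
# The fluid properties are illustrative assumptions for a generic
# dielectric coolant, not figures published by Vertiv.

def required_flow_kg_s(heat_load_kw: float,
                       cp_j_per_kg_k: float = 2100.0,  # assumed specific heat
                       delta_t_k: float = 10.0) -> float:
    """Mass flow (kg/s) needed to absorb heat_load_kw at the given temperature rise."""
    return heat_load_kw * 1000.0 / (cp_j_per_kg_k * delta_t_k)

# The article's quoted range of 25 kW to 240 kW per system:
for load_kw in (25, 240):
    print(f"{load_kw} kW -> {required_flow_kg_s(load_kw):.1f} kg/s of coolant")
```

Under these assumptions, the 240 kW upper end of the range requires roughly ten times the circulation of the 25 kW configuration, which is why variable speed pumps and a coolant distribution unit sized to the load matter.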

Vertiv CoolCenter Immersion is available in multiple configurations, including self-contained and multi-tank options, with cooling capacities from 25 kW to 240 kW. Each system includes an internal or external liquid reservoir, coolant distribution unit (CDU), temperature sensors, variable speed pumps and fluid lines, providing precise temperature control and consistent thermal performance. Dual power supplies and redundant pumps provide high cooling availability, while integrated monitoring sensors, a 9-inch touch screen and connectivity to the building management system (BMS) simplify system operation and visibility. The system design also allows for heat reuse, supporting more efficient thermal management strategies across facilities and aligning with broader energy efficiency goals.