Vertiv Introduces New Modular Liquid Cooling Infrastructure Solution
Vertiv, a global leader in critical digital infrastructure, announced
new configurations of the Vertiv MegaMod HDX, a prefabricated power and liquid cooling
infrastructure solution engineered for high-density computing environments,
including artificial intelligence (AI) and high-performance computing (HPC)
deployments. The new configurations give operators flexibility to support
rapidly increasing power and cooling requirements while optimizing space and
deployment speed. The models are available globally.
Vertiv MegaMod HDX integrates direct-to-chip liquid cooling with
air-cooled architectures to meet the intense thermal demands of AI workloads,
supporting pod-style AI environments and advanced GPU clusters. The new compact
solution has a standard module height and supports up to 13 racks with power
capacity up to 1.25 MW; the combo solution has an extended-height design and
supports up to 144 racks with power capacities up to 10 MW. Both support
rack densities from 50 kW to more than 100 kW per rack. The hybrid cooling
architectures integrate direct-to-chip liquid cooling with air cooling for
efficient, high-density thermal management, while the prefabricated modular
designs enable accelerated deployment and allow customers to scale their data centers
as demand grows.
"Today's
AI workloads demand cooling solutions that go beyond traditional approaches.
With the Vertiv MegaMod HDX available in both compact and combo solution
configurations, organizations can match their facility requirements while
supporting high-density, liquid-cooled environments at scale. Our designs
deliver what data centers need most—reliable performance, operational
efficiency, and the ability to scale their AI infrastructure with
confidence," said Viktor Petik, senior vice president, infrastructure
solutions at Vertiv.
The
Vertiv MegaMod HDX models feature innovative hybrid cooling
architecture, combining direct-to-chip liquid cooling with adaptable air
systems in a fully integrated, prefabricated pod. A distributed redundant
power architecture enables continuous operation even if one module
goes offline. Additionally, the buffer-tank thermal backup system allows GPU
clusters to maintain stable operations during maintenance or load transitions.
This factory-integrated design enables repeatable precision in deployment while
providing cost certainty for planning and scaling AI infrastructure.