AMD and its Partners Share their Vision for "AI Everywhere, for Everyone" at CES 2026
At CES 2026, AMD Chair and CEO Dr. Lisa Su detailed in the show's opening keynote how the company's extensive portfolio of AI products and deep cross-industry collaborations are turning the promise of AI into real-world impact.
The keynote showcased major advancements from the data centre to the edge, with partners including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci and Illumina detailing how they are using AMD technology to power AI breakthroughs.
"At CES, our
partners joined us to show what's possible when the industry comes together to
bring AI everywhere, for everyone," said Dr. Lisa Su, chair and CEO of
AMD. "As AI adoption accelerates, we are entering the era of yotta-scale
computing, driven by unprecedented growth in both training and inference. AMD
is building the compute foundation for this next phase of AI through end-to-end
technology leadership, open platforms, and deep co-innovation with partners
across the ecosystem."
Compute infrastructure is the foundation of AI, and accelerating adoption is driving an unprecedented expansion from today's 100 zettaflops of global compute capacity to a projected 10+ yottaflops in the next five years. Building AI infrastructure at yotta-scale will require more than raw performance; it demands an open, modular rack design that can evolve across product generations, combining leadership compute engines with high-speed networking to connect thousands of accelerators into a single, unified system.
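For context, a rough back-of-the-envelope conversion, using only the standard SI prefixes (zetta = 10^21, yotta = 10^24) and the round figures quoted above, shows that this projection implies roughly a hundredfold increase in global compute capacity over five years:

\[
\frac{10\ \text{yottaflops}}{100\ \text{zettaflops}}
= \frac{10 \times 10^{24}\ \text{FLOPS}}{100 \times 10^{21}\ \text{FLOPS}}
= \frac{10^{25}}{10^{23}}
= 100\times .
\]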
The AMD "Helios" rack-scale platform is
the blueprint for yotta-scale infrastructure, delivering up to 3 AI exaflops of
performance in a single rack. It's designed to deliver maximum bandwidth and
energy efficiency for trillion-parameter training. "Helios" is
powered by AMD Instinct™ MI455X accelerators, AMD EPYC™ "Venice" CPUs
and AMD Pensando™ "Vulcano" NICs for scale-out networking, all
unified through the open AMD ROCm™ software ecosystem.
At CES, AMD provided an early look at "Helios" and, for the first time, unveiled the full AMD Instinct MI400 Series accelerator product portfolio while previewing the next-generation MI500 Series GPUs.
The latest addition to the MI400 Series is the AMD Instinct MI440X GPU, designed for on-premises enterprise AI deployments. The MI440X will power scalable training, fine-tuning and inference workloads in a compact, eight-GPU form factor that integrates seamlessly into existing infrastructure.
The MI440X builds on the recently announced AMD Instinct MI430X GPUs, which are designed to deliver leadership performance and hybrid computing for high-precision scientific, HPC and sovereign AI workloads. MI430X GPUs will power AI factory supercomputers around the world, including Discovery at Oak Ridge National Laboratory and the Alice Recoque system, France's first exascale supercomputer.