MSI unveils next-gen AI, data center platforms at SC25

 
MSI stepped into the SuperComputing 2025 spotlight this week with a full slate of next-generation server and AI systems, signaling a major escalation in the company’s push into high-performance computing, hyperscale infrastructure, and enterprise AI.
 
At Booth #205, MSI debuted its ORv3 rack solution and a refreshed portfolio of DC-MHS–based compute platforms built in collaboration with AMD, Intel, and NVIDIA. The message was clear: the next era of data centers will be denser, more energy-efficient, and more modular, and MSI plans to be one of the vendors powering that shift.
 
Danny Hsu, General Manager of Enterprise Platform Solutions, framed it plainly: MSI wants to give operators scalable infrastructure that can move as fast as AI models evolve. “Our goal is to deliver scalable, energy-efficient infrastructure that empowers customers to accelerate AI development and next-generation computing with performance, reliability, and flexibility at scale,” Hsu said.

Rack-Scale Ambition: The ORv3 Platform

The star of MSI’s showcase was its ORv3 21-inch, 44OU rack, a fully validated, integrated design built specifically for hyperscale cloud builders. Outfitted with sixteen CD281-S4051-X2 2OU DC-MHS servers, the rack features centralized 48V power, front-facing I/O, and a streamlined thermal design that maximizes CPU, memory, and storage density in every square inch.
 
Each node leverages AMD’s EPYC 9005 processors in a single-socket layout. Per-node, operators get 12 DDR5 DIMM slots and 12 E3.S PCIe 5.0 NVMe bays, providing ample capacity for AI pipelines, large-scale analytics, and bandwidth-intensive cloud workloads.
 
High-Density Compute for the Modern Data Center
MSI also expanded its DC-MHS Core Compute lineup, offering both AMD and Intel variants with TDP envelopes up to 500W. Available in 2U 4-node and 2U 2-node configurations, these systems target high-density environments where rack efficiency is king.
 
On the AMD EPYC side, MSI highlighted two platforms (CD270-S4051-X4 and X2), while Intel Xeon 6 versions (CD270-S3061-X4 and CD270-S3071-X2) bring expanded DDR5 memory and PCIe 5.0 storage options. All share a standardized modular architecture designed to simplify deployment, upgrades, and serviceability.
 
The enterprise-focused “CX” series broadened that theme with higher memory ceilings, extensive PCIe lanes, and configurations optimized for cloud, virtualization, and storage providers. Dual-socket Xeon 6 versions deliver up to 32 DIMM slots in 1U and 2U footprints, a density profile aimed at operators balancing compute with I/O-heavy workloads.

AI Systems Powered by NVIDIA Hopper and Blackwell

With AI dominating both the SC25 conversation and data center budgets, MSI backed up its hardware story with new NVIDIA-powered AI systems. These include MGX-based servers, DGX-class AI stations, and workstation-scale development nodes.
 
The flagship CG481-S6053 and CG480-S5063 4U servers support up to eight dual-width GPUs (up to 600W each), paired with either AMD EPYC 9005 CPUs or Intel Xeon 6 processors. These are built for heavyweight tasks: large language model training, deep learning acceleration, and NVIDIA Omniverse workloads.
 
A compact 2U option, the CG290-S3063, delivers four 600W GPUs in a single-socket Xeon 6 system, aimed at edge-inference clusters and smaller research deployments.
 
To bring AI development directly to the desktop, MSI introduced the AI Station CT60-S8060, a workstation built around NVIDIA’s GB300 Grace Blackwell Ultra Superchip, offering up to 784GB of unified memory. Its pitch: DGX-scale power without the data center footprint.

Why It Matters

SC25 is the annual pulse check for supercomputing, a place where vendors unveil real hardware, not vaporware. MSI’s move signals an intensifying competition among server manufacturers to meet surging AI demand while tackling the constraints everyone feels: power, heat, density, and time-to-deploy.
 
MSI’s approach leans into modularity. DC-MHS standardization, ORv3 rack integration, and MGX compatibility allow operators to build AI-ready data centers faster and adapt them as GPUs evolve.
The broader takeaway is that data centers are shifting from “build once and upgrade later” to “assemble, scale, swap, repeat.” MSI’s portfolio pushes that philosophy from edge to hyperscale.
 
More details, demo videos, and supporting technical resources are available directly from MSI following the SC25 exhibition.