At SC25, Phison pushes AI storage to Gen5 speeds, brings AI agents to everyday laptops

SuperComputing 2025 (SC25) delivered no shortage of big swings this week, but Phison presented a rare, cohesive vision that extends from the densest enterprise racks to the laptops in classrooms and corporate offices. At booth 4532, the storage leader debuted two new PCIe Gen5 enterprise SSDs, the Pascari X201 and Pascari D201, alongside a live demo showcasing AI agents running on an integrated-GPU laptop using its aiDAPTIV+ technology. The message was clear: AI acceleration shouldn't be restricted to high-end GPUs or data center budgets.

PCIe Gen5 Muscle for AI and Cloud

Phison’s new Pascari X201 and D201 drives push Gen5 performance to the edge of the envelope:
  • Up to 14.5 GB/s read, 12 GB/s write
  • Up to 3.3M / 1.05M random read/write IOPS
  • Configurations up to 30.72 TB (X201) and 15.36 TB (D201)
The X201 targets high-intensity applications, including AI training nodes, analytics engines, financial modeling, and HPC workloads. The D201 is designed for hyperscalers and cloud builders who need high density with predictable QoS, particularly for object storage and large-scale database clusters. Both represent the steady march toward AI-first storage design: low latency, deterministic operations, and the throughput needed to saturate GPU clusters.
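The quoted sequential-read figure translates directly into drive counts for a given aggregate bandwidth target. A minimal back-of-envelope sketch in Python; the 400 GB/s ingest target is an illustrative assumption, not a figure from Phison:

```python
import math

# How many X201-class drives (14.5 GB/s sequential read each, per
# the spec above) are needed to feed a given aggregate read rate?
# The 400 GB/s target below is an illustrative assumption, not a
# quoted GPU-cluster requirement.
drive_read_gbps = 14.5      # GB/s per drive (quoted spec)
target_gbps = 400.0         # GB/s aggregate ingest (assumed)

drives_needed = math.ceil(target_gbps / drive_read_gbps)
print(f"{drives_needed} drives to sustain {target_gbps:.0f} GB/s")
```

Real deployments would also budget for RAID or erasure-coding overhead and controller limits, so treat this as a floor, not a sizing guide.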

AI Agents on iGPUs, 25× Faster Than Before

The unexpected star of Phison’s booth was a consumer-class laptop demo. With aiDAPTIV+, the system turned an integrated GPU, normally the weak link in AI workflows, into a surprisingly capable AI agent platform.
 
Phison says the tech delivers:
  • Up to 25× faster AI agent performance
  • A latency drop from 73 seconds to roughly 4 seconds in one real-world demo: GenAI inference on YouTube video content
This is significant beyond mere convenience. Universities, IT departments, and early-stage businesses can now conduct meaningful AI experiments using their existing hardware. For students and corporate employees, this indicates a move toward AI agents becoming as commonplace as web browsers or office software.
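For context, the demo numbers quoted above imply roughly an 18x speedup for that particular workload, with 25x being Phison's best-case "up to" claim. A quick check in Python:

```python
# Speedup implied by the quoted demo numbers (73 s -> ~4 s).
# The separate "up to 25x" figure is Phison's best-case claim,
# presumably covering other workloads.
before_s = 73.0
after_s = 4.0
speedup = before_s / after_s
print(f"{speedup:.1f}x faster for this demo")
```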

Scaling Toward Extreme Capacity

Phison reminded SC25 attendees that the capacity race is not slowing. The company's Pascari D205V, a 122.88 TB E3.L behemoth already shipping to select OEMs, continues to set the ceiling for PCIe Gen5. Phison confirmed a roadmap path to 245 TB, a number that would have sounded like science fiction just a few cycles ago.

Industry Voices at SC25

Michael Wu, GM and President of Phison US, framed the announcement in the larger arc of AI adoption: “Every sector is somewhere on the AI journey… Storage is vital at every stage.”

Why SC25 Cares

SC25 is increasingly the place where the AI stack (compute, networking, storage, and software) gets pressure-tested. Phison’s lineup shows a company positioning itself not just as a NAND supplier but as a critical backbone for AI at every tier:
  • Client: AI agents on iGPUs
  • Enterprise: X201 for training and HPC
  • Cloud/hyperscale: D201 and the ultra-dense D205V series
With shipments of the X201 and D201 headed to enterprise customers by year-end and iGPU systems with aiDAPTIV+ coming in early 2026, the company is clearly betting on a future where AI workloads blur across devices and form factors.

Availability

  • Pascari X201 / D201: Shipping to select enterprise customers and OEMs by end of 2025
  • aiDAPTIV+ iGPU systems: OEM rollouts in early 2026
  • More details at phison.com
Phison didn't just bring new hardware to SC25; it brought a clear vision: AI infrastructure should be fast, scalable, power-efficient, and accessible to everyone, from hyperscale operators to students with a laptop. The future of AI won't be confined to one place, and Phison seems determined to connect it all.

MSI unveils next-gen AI, data center platforms at SC25

 
MSI stepped into the SuperComputing 2025 spotlight this week with a full slate of next-generation server and AI systems, signaling a major escalation in the company’s push into high-performance computing, hyperscale infrastructure, and enterprise AI.
 
At Booth #205, MSI debuted its ORv3 rack solution and a refreshed portfolio of DC-MHS–based compute platforms built in collaboration with AMD, Intel, and NVIDIA. The message was clear: the next era of data centers will be denser, more energy-efficient, and more modular, and MSI plans to be one of the vendors powering that shift.
 
Danny Hsu, General Manager of Enterprise Platform Solutions, framed it plainly: MSI wants to give operators scalable infrastructure that can move as fast as AI models evolve. “Our goal is to deliver scalable, energy-efficient infrastructure that empowers customers to accelerate AI development and next-generation computing with performance, reliability, and flexibility at scale,” Hsu said.

Rack-Scale Ambition: The ORv3 Platform

The star of MSI’s showcase was its ORv3 21-inch, 44OU rack, a fully validated, integrated design built specifically for hyperscale cloud builders. Outfitted with sixteen CD281-S4051-X2 2OU DC-MHS servers, the rack features centralized 48V power, front-facing I/O, and a streamlined thermal design that maximizes CPU, memory, and storage density in every square inch.
 
Each node leverages AMD’s EPYC 9005 processors in a single-socket layout. Per-node, operators get 12 DDR5 DIMM slots and 12 E3.S PCIe 5.0 NVMe bays, providing ample capacity for AI pipelines, large-scale analytics, and bandwidth-intensive cloud workloads.
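The per-node counts above multiply out to substantial rack-level totals. A quick back-of-envelope sketch in Python; the 15.36 TB E3.S drive capacity is an illustrative assumption, not a quoted configuration:

```python
# Rack-level totals for the ORv3 demo configuration described
# above: 16x CD281-S4051-X2 nodes, each with 12 DDR5 DIMM slots
# and 12 E3.S PCIe 5.0 NVMe bays. The 15.36 TB drive size is an
# illustrative assumption for the raw-capacity estimate.
nodes = 16
dimms_per_node = 12
nvme_bays_per_node = 12
drive_tb = 15.36  # assumed E3.S capacity, for illustration only

total_dimms = nodes * dimms_per_node
total_bays = nodes * nvme_bays_per_node
raw_storage_tb = total_bays * drive_tb

print(f"{total_dimms} DIMM slots, {total_bays} NVMe bays, "
      f"{raw_storage_tb:.2f} TB raw flash per rack")
```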
 
High-Density Compute for the Modern Data Center

MSI also expanded its DC-MHS Core Compute lineup, offering both AMD and Intel variants with TDP envelopes up to 500W. Available in 2U 4-node and 2U 2-node configurations, these systems target high-density environments where rack efficiency is king.
 
On the AMD EPYC side, MSI highlighted two platforms (CD270-S4051-X4 and X2), while Intel Xeon 6 versions (CD270-S3061-X4 and CD270-S3071-X2) bring expanded DDR5 memory and PCIe 5.0 storage options. All share a standardized modular architecture designed to simplify deployment, upgrades, and serviceability.
 
The enterprise-focused “CX” series broadened that theme with higher memory ceilings, extensive PCIe lanes, and configurations optimized for cloud, virtualization, and storage providers. Dual-socket Xeon 6 versions deliver up to 32 DIMM slots in 1U and 2U footprints, a density profile aimed at operators balancing compute with I/O-heavy workloads.

AI Systems Powered by NVIDIA Hopper and Blackwell

With AI dominating both the SC25 conversation and data center budgets, MSI backed up its hardware story with new NVIDIA-powered AI systems. These include MGX-based servers, DGX-class AI stations, and workstation-scale development nodes.
 
The flagship CG481-S6053 and CG480-S5063 4U servers support up to eight dual-width GPUs (up to 600W each), paired with either AMD EPYC 9005 CPUs or Intel Xeon 6 processors. These are built for heavyweight tasks: large language model training, deep learning acceleration, and NVIDIA Omniverse workloads.
 
A compact 2U option, the CG290-S3063, delivers four 600W GPUs in a single-socket Xeon 6 system, aimed at edge-inference clusters and smaller research deployments.
 
To bring AI development directly to the desktop, MSI introduced the AI Station CT60-S8060, a workstation built around NVIDIA’s GB300 Grace Blackwell Ultra Superchip, offering up to 784GB of unified memory. Its pitch: DGX-scale power without the data center footprint.

Why It Matters

SC25 is the annual pulse check for supercomputing, a place where vendors unveil real hardware, not vaporware. MSI’s move signals an intensifying competition among server manufacturers to meet surging AI demand while tackling the constraints everyone feels: power, heat, density, and time-to-deploy.
 
Their approach leans into modularity. DC-MHS standardization, ORv3 rack integration, and MGX compatibility allow operators to build AI-ready data centers faster and adapt them as GPUs evolve.

The broader takeaway is that data centers are shifting from “build once and upgrade later” to “assemble, scale, swap, repeat.” MSI’s portfolio pushes that philosophy from edge to hyperscale.
 
More details, demo videos, and supporting technical resources are available directly from MSI following the SC25 exhibition.

Characteristics of the graphene/In2Se3 heterostructure transport device that shows the spin chirality switch. Credit: Martin Gmitra from the Slovak Academy of Sciences and Marcin Kurpas from University of Silesia in Katowice.

Supercomputing sheds light on electrically controlling spin currents in graphene

In a European collaboration blending quantum materials science and high-performance computing, researchers have discovered how ferroelectric switching can modulate spin currents in a graphene-based heterostructure, a revelation made possible by supercomputers.

From Charge to Spin: A New Spintronics Platform

The study, "Ferroelectric switching control of spin current in graphene proximitized by In₂Se₃," published in Materials Futures, explores a heterostructure of graphene, a two-dimensional conductor, stacked atop a ferroelectric monolayer of In₂Se₃. The team found that switching the polarization of the In₂Se₃ layer reverses the sign of the charge-to-spin conversion coefficient in the graphene layer, effectively flipping the chirality (spin orientation pattern) of the generated spin current. In one configuration (17.5° twist angle between layers), an unconventional "radial Rashba field" emerged for one polarization direction, a rare phenomenon in planar heterostructures.

Supercomputing: The Hidden Engine

This project would have been impossible without extensive computing power. The researchers combined first-principles calculations (density-functional theory) with tight-binding modelling to capture electronic structure, spin-orbit coupling, ferroelectric polarization effects, and interface proximity influences.
 
Such simulations involve large Hamiltonian matrices, fine k-space sampling, spin-texture mapping, and multiple twist-angle geometries, tasks that scale poorly without parallel, high-performance systems. By leveraging supercomputing clusters, the team was able to:
  • Evaluate both polarization states of the ferroelectric layer;
  • Model two twist angles (0° and 17.5°) to identify emergent fields;
  • Extract charge-to-spin conversion coefficients and the Rashba phase directly from computational data.
These capabilities underline how HPC is no longer just for weather and astrophysics; now it’s central to designing tomorrow’s spintronic devices.
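The chirality flip at the heart of the result can be illustrated with a toy two-band Rashba model in Python (using NumPy): reversing the sign of the Rashba coupling alpha, standing in for flipping the ferroelectric polarization, reverses the winding sense of the spin texture on a k-space circle. This is a minimal sketch, not the paper's DFT or tight-binding machinery.

```python
import numpy as np

# Pauli matrices acting on spin
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def lower_band_spin(alpha, kx, ky):
    """Spin expectation (<s_x>, <s_y>) in the lower band of a
    minimal Rashba Hamiltonian H(k) = alpha * (ky*sx - kx*sy).
    The spin-diagonal kinetic term is omitted; it does not
    affect the spin texture."""
    H = alpha * (ky * sx - kx * sy)
    _, evecs = np.linalg.eigh(H)      # eigenvalues ascending
    psi = evecs[:, 0]                 # lower band
    return (np.real(psi.conj() @ sx @ psi),
            np.real(psi.conj() @ sy @ psi))

# Average winding sense (chirality) of the spin texture on a
# k-space circle, for both signs of alpha -- a stand-in for the
# two ferroelectric polarization states of the In2Se3 layer.
results = {}
for alpha in (+1.0, -1.0):
    chis = []
    for t in np.linspace(0, 2 * np.pi, 16, endpoint=False):
        kx, ky = np.cos(t), np.sin(t)
        s_x, s_y = lower_band_spin(alpha, kx, ky)
        chis.append(kx * s_y - ky * s_x)   # z-component of k x s
    results[alpha] = float(np.mean(chis))
    print(f"alpha = {alpha:+.0f}: mean chirality = {results[alpha]:+.2f}")
```

Flipping alpha flips the sign of the mean chirality, which is the qualitative behavior the study attributes to reversing the In₂Se₃ polarization.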

Why It Matters

Modern electronics are approaching the limits of charge-based logic. Spintronics, using the electron’s spin rather than its charge, promises faster, lower-power, non-volatile devices. The challenge: controllably steering spin currents without bulky magnetic fields.
 
By showing that ferroelectric polarization can electrically flip spin current direction (and spin texture) in graphene, the study opens a pathway to magnet-free, ultra-efficient spin logic devices. In short, you apply a voltage, you flip a spin current, no magnetic coil needed.

A Timely Breakthrough for the HPC World

With the SC25 supercomputing conference opening next week in St. Louis, the research underscores a widening frontier: supercomputers aren’t just solving equations, they’re beginning to decode nature’s design language.
 
Although the study is not confirmed as an official SC25 presentation, its ideas are likely to circulate in hallway conversations, workshops, and poster sessions, where the fusion of physics, simulation, and computing continues to accelerate innovation.

Looking Ahead

While this work is theoretical (computational), the authors propose that the predicted effects "can be experimentally detected" under realistic conditions. The next step involves device fabrication, nanoscale spin current measurements, and benchmarking against conventional spintronic architectures.
 
The larger picture is HPC-driven material discovery. As supercomputers become more powerful and accessible, the timeline from concept to device may shorten, leading to a shift towards compute-to-create workflows, rather than the current synthesize-then-hope approach.