SC25 pushes network frontiers as Pegatron unveils modular server ambitions

In St. Louis, the high-performance computing world thrives on pushing limits, and this year’s SC25 conference delivered another leap forward, both on the show floor and across the wires of the legendary SCinet network.
 
Pegatron, a global leader in electronics manufacturing, showcased its next-generation server roadmap, emphasizing the company’s vision for modular, power-efficient systems engineered for the AI-accelerated era. The company’s press release highlighted a strategic expansion into advanced rack-scale design, with a focus on flexibility, field-replaceable modules, and full-stack energy optimization. But even that technical momentum was matched, if not eclipsed, by the sheer scale of the network beneath attendees’ feet.

SCinet Hits a New Threshold: 13.72 Tbps

SCinet, the volunteer-built engineering marvel that powers every Supercomputing conference, announced its highest throughput ever recorded: 13.72 terabits per second (Tbps) for SC25.
 
To put this into perspective, SCinet’s wide-area network (WAN) backbone has grown at a pace few global networks can match:
  • SC25 (St. Louis): 13.72 Tbps
  • SC24 (Atlanta): 8.71 Tbps
  • SC23 (Denver): 6.71 Tbps
  • SC22: 5.01 Tbps
  • SC19: 4.22 Tbps
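As a quick sanity check, the year-over-year growth implied by those figures can be computed directly. A minimal sketch using only the numbers quoted above:

```python
# Growth of SCinet's peak WAN bandwidth between shows, using the
# figures quoted in the list above (Tbps). Illustrative arithmetic only.
peaks = {
    "SC19": 4.22,
    "SC22": 5.01,
    "SC23": 6.71,
    "SC24": 8.71,
    "SC25": 13.72,
}

shows = list(peaks)
for prev, curr in zip(shows, shows[1:]):
    growth = (peaks[curr] / peaks[prev] - 1) * 100
    print(f"{prev} -> {curr}: {growth:+.1f}%")
```

The SC24-to-SC25 jump alone works out to roughly 57% in a single year, which puts the "pace few global networks can match" claim in concrete terms.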
Every year, SCinet is torn down and rebuilt by an army of volunteer engineers, network architects, and researchers from around the world, who converge to create the fastest temporary network on Earth. Its sole mission: enable the bleeding-edge demos that define the HPC community.
 
As datasets balloon and GPU clusters grow hungrier by the day, SCinet’s growth isn’t a luxury; it’s a necessity.

Pegatron’s Modular Pivot: A Server for the AI Era

In its SC25 release, Pegatron detailed its next-gen server platform built around modularity, thermal efficiency, and rapid deployment, all themes dominating this year’s conference.
 
Key takeaways from Pegatron’s announcement include:
• Modular AI-ready infrastructure
Pegatron outlined blade-style compute modules designed to scale from traditional HPC to dense GPU and accelerator configurations.
• Energy-optimized design
The company emphasized new power-distribution and cooling architectures intended to support the surge of high-wattage AI accelerators without sacrificing stability or serviceability.
• Manufacturing muscle
Leveraging its global supply chain, Pegatron aims to support hyperscalers, enterprise AI builders, and research labs that need rapid, consistent deployment cycles as models grow more compute-intensive.
 
Pegatron’s SC25 presence signals its intent to be more than an OEM; it wants to shape the future of rack-scale AI infrastructure.

Why the Two Stories Intersect

SCinet’s explosive bandwidth growth and Pegatron’s hardware ambitions aren’t isolated trends; they’re parallel responses to the same fundamental shift: AI workloads are becoming the dominant driver of HPC system design.
 
Training runs now require:
  • Uncompressed terabyte-scale dataset transfers
  • Multi-site distributed training
  • Real-time visualization pipelines
  • Exascale-class telemetry
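A rough sketch of why backbone capacity matters for that first requirement: moving a hypothetical 100 TB training dataset over a single 100 Gbps link versus an SCinet-class WAN. The dataset size is an assumption for illustration, and the arithmetic ignores protocol overhead and contention:

```python
# Back-of-the-envelope transfer times for terabyte-scale datasets.
# 1 TB = 8e12 bits; link rates given in Gbps. Ignores overhead.
def transfer_seconds(terabytes: float, gigabits_per_sec: float) -> float:
    bits = terabytes * 8e12
    return bits / (gigabits_per_sec * 1e9)

dataset_tb = 100  # hypothetical 100 TB training dataset
print(f"100 Gbps link : {transfer_seconds(dataset_tb, 100) / 3600:.1f} h")
print(f"13.72 Tbps WAN: {transfer_seconds(dataset_tb, 13720):.0f} s")
```

The gap between hours and seconds is exactly the difference between a multi-site training run that stalls and one that keeps its GPUs fed.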
At SC25, the relationship between compute, cooling, networking, and manufacturing has never been more visible. Pegatron’s modular hardware approach pairs naturally with a world where SCinet-class networks will soon be the norm, not the exception.

A Future Built on Collaboration and Momentum

SCinet’s volunteers, the invisible heroes of the SC conference, have once again demonstrated what’s possible when the global HPC community collaborates without restraint.
 
Pegatron’s announcement adds another layer of optimism: that the companies powering AI and HPC infrastructure are evolving just as quickly as the workloads they support.
 
SC25 feels like a hinge moment. Faster networks. Smarter servers. Greener cooling systems. More modular racks. And an industry that’s learning to innovate at the pace of AI itself.
 
The bar has officially been raised. And judging by the energy on the SC25 floor, the community seems ready to clear it again next year.
Darren Burgess, Castrol’s Data Center Cooling

Castrol expands its thermal management empire with strategic investment in ECS

In St. Louis, the rising heat of next-generation AI met its match at SC25 as Castrol announced a strategic investment in Electronic Cooling Solutions (ECS), a Santa Clara–based thermal engineering firm known for its deep bench of CFD modeling, reliability testing, and design-for-deployment expertise. The move signals Castrol’s shift from “fluid supplier” to full-stack thermal partner for data centers navigating the swelling power demands of artificial intelligence and high-performance computing.
 
Between keynotes, we sat down with Darren Burgess, Castrol’s Data Center Cooling specialist from Austin, Texas. In a conversation that bounced from Bitcoin mines to hyperscale design rooms, Burgess laid out why Castrol is betting big on immersion cooling, and why ECS is the linchpin.

Immersion Cooling’s Momentum and Why Single-Phase Leads Today

Burgess described immersion cooling as “the simplest path to big power savings,” emphasizing single-phase immersion as the star of today’s deployments. Bitcoin miners have already paved the way: predictable thermals, easy heat capture, fewer moving parts, and measurable reductions in energy overhead.
 
“The industry is learning what miners figured out early,” Burgess told us. “When the density goes up, air just taps out.”
 
Two-phase immersion may be the future, but Castrol is positioning it carefully. “It’s coming,” Burgess said. “But the industry needs predictable supply chains and stability first. That’s where Castrol’s global network becomes an advantage.”

The Glycol Problem No One Talks About

Castrol’s data center expansion isn’t just about immersion. Burgess highlighted a quieter but critical battleground: the chemistry inside traditional hydronic loops, specifically propylene glycol (PG 25), a staple of cooling systems whose stability is often taken for granted.
 
“PG is like a living system,” Burgess said. “If you don’t monitor it, corrosion becomes an invisible tax. Fluid health isn’t optional anymore; it’s uptime insurance.”
 
Castrol is developing next-gen formulations, including detoxified ethylene glycol options with higher-temperature tolerance.

ECS + Castrol: A Full-Stack Thermal Alliance

The newly announced investment gives Castrol something it has never possessed at a global scale: deep thermal engineering capabilities that touch every layer of system design.
ECS brings:
  • Room-to-rack thermal modeling
  • System-level CFD
  • Failure-mode and reliability analysis
  • Immersion and liquid cooling design validation
  • Acclimation, condensation, and corrosion forensic services
Their portfolio includes AI module liquid-cooling designs up to 17 kW, corrosion root-cause tracing, and environmental acclimation studies for hyperscale data centers.
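For a sense of scale on that 17 kW figure, here is a first-order sizing sketch of the coolant flow needed to carry the heat away, assuming plain water and a 10 K inlet-to-outlet temperature rise. Both assumptions are mine for illustration; real loops often run glycol mixes with lower specific heat, which is part of what makes ECS-style validation necessary:

```python
# First-order coolant sizing: required mass flow to carry Q watts
# at a chosen coolant temperature rise, via Q = m_dot * cp * dT.
CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def mass_flow_kg_s(heat_w: float, delta_t_k: float) -> float:
    return heat_w / (CP_WATER * delta_t_k)

q = 17_000.0   # the 17 kW module class mentioned above
dt = 10.0      # assumed 10 K coolant temperature rise
flow = mass_flow_kg_s(q, dt)
print(f"{flow:.3f} kg/s, roughly {flow * 60:.1f} L/min of water")
```

About 0.4 kg/s per module sounds modest until it is multiplied across a full rack, which is where system-level CFD and failure-mode analysis earn their keep.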
 
With Castrol’s investment, Bharat Vats, an industry veteran and former CEO of Atom Power, has been named President and CEO of ECS. His mandate: scale up ECS’s impact across hyperscalers, OEMs, cloud providers, and energy-intensive AI labs.
 
“Working with Castrol opens the door for ECS to reach the entire data center ecosystem,” Vats said. “Together, we can accelerate the shift to more efficient cooling architectures.”

Why This Investment Matters Now

A recent Castrol-commissioned survey found that 74% of data-center experts now believe liquid cooling is the only path forward for today’s AI power densities. Yet many operators hesitate due to integration complexity and a lack of trusted partners.
 
Castrol believes combining its supply-chain muscle with ECS’s engineering precision will remove those barriers.
 
Peter Huang, Castrol’s Global VP of Data Centre Thermal Management, put it plainly: “The industry needs partners that can guide them from whiteboard to deployment. Castrol wants to be that end-to-end partner.”

A Turning Point for AI-Era Data Centers

SC25 has made one thing obvious: thermal is no longer a back-of-house concern. It is the governing constraint of AI. The players who master heat will be the ones who shape the computing landscape of the next decade.
 
With Castrol expanding from automotive lubricants into immersion, hydronics, and now full-stack thermal design, and ECS bringing decades of analysis and validation expertise, the partnership lands at a pivotal moment.
 
Together, they’re sending a clear message to hyperscalers and AI labs everywhere: The future isn’t just faster. The future runs colder.

Abstraction, automation: Scientific computing enters a new era at SC25

At the SC25 show, the ACM and IEEE-CS Award Presentations provided more than recognition; they reflected on the past and future of scientific computing. The keynote, "Abstraction and Automation: From Workflows to Intelligent Systems and the Future of Scientific Discovery," was delivered by Ewa Deelman of the University of Southern California, a pioneer known for leading the development of the Pegasus Workflow Management System.

From Code to Workflows: Making Complexity Human-Scale

Deelman traced the layered evolution of scientific computing. Initially, researchers worked with machine code and manually scheduled tasks. Subsequently, scripts, batch systems, and workflow engines emerged, serving not as mere conveniences but as tools to preserve scientific intent while managing complexity.
 
Pegasus emerged from this philosophy. Rather than requiring scientists to think like system schedulers, Pegasus translated high-level scientific descriptions into reliable execution across diverse environments, ranging from high-performance supercomputers to distributed grids. The aim was not automation alone, but rather reproducibility, transparency, and trust.
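The abstraction Deelman described can be sketched in a few lines: a scientist declares tasks and their data dependencies, and a planner derives a valid execution order. This toy uses Python's standard library and is a generic illustration of the idea, not the actual Pegasus API:

```python
# Toy workflow abstraction: a declarative task graph plus a planner.
# Generic illustration only; Pegasus additionally maps each task onto
# real execution sites, handles data staging, retries, and provenance.
from graphlib import TopologicalSorter

# High-level description: task -> set of tasks it depends on
workflow = {
    "preprocess": set(),
    "simulate":   {"preprocess"},
    "analyze":    {"simulate"},
    "visualize":  {"analyze"},
}

# The "planner" turns the declarative description into a run order
plan = list(TopologicalSorter(workflow).static_order())
print(plan)  # ['preprocess', 'simulate', 'analyze', 'visualize']
```

The point of the abstraction is that the scientist edits only the dictionary; scheduling, placement, and recovery stay the system's problem, which is what preserves scientific intent as environments change.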

Automation Arrives and Changes the Scientific Lifecycle

Deelman shifted to the present, where automation has moved far beyond workflow execution. With artificial intelligence now embedded throughout the research pipeline, systems are:
  • assisting with hypothesis generation
  • optimizing and adapting workflows
  • monitoring results in real time
  • and supporting interpretation and publication
In her words, systems are no longer just running science; they are reasoning about it. For fields where data volumes exceed human capacity, cognitive automation has become essential rather than optional.

Transparency, Trust, and the Human Role

The rise of intelligent automation brings new responsibilities. Deelman raised questions that resonated across the SC25 audience:
  • How do we ensure transparency when systems make autonomous choices?
  • What does scientific accountability look like when recommendations come from models, not humans?
  • Where must human judgment remain non-negotiable?
Rather than replacing scientists, Deelman argued, automation amplifies the need for critical thinking and creativity. Scientific skepticism becomes more, not less, important when systems can produce convincing results without explanation.

Design Principles That Endure Through Change

Despite shifting technologies, Deelman highlighted the principles that have sustained Pegasus for decades:
  • abstraction that clarifies rather than conceals
  • automation that supports scientific intent, not overrides it
  • reproducibility as a foundation, not a feature
These values, she emphasized, must guide the next generation of intelligent systems.

Looking Forward: Machines as Partners in Discovery

Deelman closed with optimism grounded in realism. Intelligent systems will soon help explore parameter spaces unreachable by human reasoning alone, uncover patterns hidden in massive datasets, and accelerate breakthroughs that once took decades.
 
But progress requires discipline: transparent algorithms, accountable design, and a scientific culture that refuses to outsource curiosity.
 
The applause that followed made clear that the supercomputing community understood the moment. At SC25, the message was unmistakable: scientific computing is entering a new era. Not one defined by machines replacing thought, but by machines expanding what thought can reach.