A special keynote at ISC 2022 shows the future of HPC

Researchers tackling today’s grand challenges are gaining momentum through accelerated computing, as showcased at ISC, Europe’s annual gathering of supercomputing experts.

Some build digital twins to simulate new energy sources. Some use AI + HPC to peer deeply into the human brain.

Others are pushing HPC to the cutting edge with highly sensitive instruments or accelerating simulations on hybrid quantum systems, said Ian Buck, vice president of accelerated computing at NVIDIA, during an ISC special address in Hamburg.

Delivering 10 AI exaflops

For example, a new Los Alamos National Laboratory (LANL) supercomputer called Venado will deliver 10 exaflops of AI performance to advance work in areas such as materials science and renewable energy.

LANL researchers are targeting 30x speedups in their multi-physics computing applications using the NVIDIA GPUs, CPUs, and DPUs in the system, which is named after a peak in northern New Mexico.

Venado will use NVIDIA Grace Hopper Superchips to run workloads up to 3x faster than previous GPUs. It will also include NVIDIA Grace CPU Superchips, which deliver twice the performance per watt of traditional processors for a long tail of unaccelerated applications.

BlueField is growing

The LANL system is one of the latest of many around the world to adopt NVIDIA BlueField DPUs to offload and accelerate communication and storage tasks from host processors.

Similarly, the Texas Advanced Computing Center is adding BlueField-2 DPUs to the NVIDIA Quantum InfiniBand network on Lonestar6. The system will become a development platform for cloud-native supercomputing, hosting multiple users and applications with bare-metal performance while securely isolating workloads.

“It’s the architecture of choice for next-generation supercomputers and HPC clouds,” Buck said.

Exascale in Europe

In Europe, NVIDIA and SiPearl are collaborating to expand the ecosystem of developers building exascale computing on Arm. The work will help users in the region port applications to systems that use SiPearl’s Rhea and future Arm-based processors, as well as NVIDIA’s accelerated computing and networking technologies.

In Japan, the Center for Computational Sciences at the University of Tsukuba is pairing NVIDIA H100 Tensor Core GPUs with x86 processors on an NVIDIA Quantum-2 InfiniBand platform. The new supercomputer will tackle jobs in climatology, astrophysics, big data, AI, and more.

The new system will join the 71% of machines on the latest TOP500 list of supercomputers that have adopted NVIDIA technologies. In addition, 80% of new systems on the list use NVIDIA GPUs, NVIDIA networking, or both, and NVIDIA networking is the most popular interconnect among TOP500 systems.

HPC users are adopting NVIDIA technologies because they deliver the highest application performance for established supercomputing workloads – simulation, machine learning, real-time edge processing – as well as emerging workloads such as quantum simulations and digital twins.

Powering up with Omniverse

Showing what these systems can do, Buck played a demonstration of a virtual fusion power station that researchers from the UK Atomic Energy Authority and the University of Manchester are building in NVIDIA Omniverse. The digital twin aims to simulate in real time the entire power plant, its robotic components, and even the behavior of the fusion plasma at its core.

NVIDIA Omniverse, a global platform for 3D design and simulation collaboration, lets researchers on distributed teams work together in real time while using different 3D applications. The team aims to enhance its work with NVIDIA Modulus, a framework for creating physics-based AI models.
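To make "physics-based AI model" a little more concrete, here is a minimal physics-informed neural network (PINN) sketch in plain PyTorch. It illustrates the general technique that frameworks like Modulus build on, not the Modulus API or the fusion team's actual models, and the toy equation (du/dx = -u with u(0) = 1) is assumed purely for illustration.

    # A minimal physics-informed neural network (PINN) sketch in plain PyTorch.
    # Illustration of the general technique only, not the Modulus API or the
    # UKAEA digital twin's models. Toy problem (assumed): du/dx = -u, u(0) = 1,
    # whose exact solution is u(x) = exp(-x).
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(),
                        nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = torch.rand(128, 1, requires_grad=True)   # collocation points in [0, 1]
        u = net(x)
        du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        pde_loss = (du_dx + u).pow(2).mean()          # residual of du/dx + u = 0
        bc_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # boundary condition u(0) = 1
        loss = pde_loss + bc_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1), about 0.37

The same pattern, a network trained against equation residuals and boundary conditions rather than labeled data, is what frameworks like Modulus scale up to far more complex physics.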

“It’s incredibly complex work that paves the way for the clean renewable energy sources of tomorrow,” Buck said.

AI for medical imaging

Separately, Buck described how researchers created a library of 100,000 synthetic images of the human brain on NVIDIA Cambridge-1, a supercomputer dedicated to advancing healthcare with AI.

A team from King’s College London has used MONAI, an AI framework for medical imaging, to generate realistic images that can help researchers see how diseases like Parkinson’s develop.
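For a sense of what working with MONAI looks like, the sketch below runs a random volume, standing in for a real brain scan, through a couple of MONAI transforms and a stock 3D U-Net. It is only a generic illustration of the framework, not the King's College team's synthetic-image pipeline, and the shapes and network settings are assumptions chosen for the example.

    # A generic MONAI sketch: a random volume stands in for a real brain scan.
    # In practice, transforms such as LoadImaged would read NIfTI files instead.
    # This is not the King's College London synthetic-image pipeline.
    import torch
    from monai.transforms import Compose, ScaleIntensity, RandAffine
    from monai.networks.nets import UNet

    volume = torch.rand(1, 64, 64, 64)                      # channel-first (C, D, H, W)
    augment = Compose([
        ScaleIntensity(),                                    # normalize intensities to [0, 1]
        RandAffine(prob=1.0, rotate_range=(0.1, 0.1, 0.1)),  # small random 3D rotation
    ])
    sample = augment(volume)

    # A stock 3D U-Net from MONAI, a common backbone for medical imaging tasks.
    model = UNet(spatial_dims=3, in_channels=1, out_channels=1,
                 channels=(16, 32, 64, 128), strides=(2, 2, 2))
    with torch.no_grad():
        out = model(sample.unsqueeze(0))                     # add a batch dimension
    print(out.shape)                                         # torch.Size([1, 1, 64, 64, 64])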

“This is a great example of HPC+AI making a real contribution to the science and research community,” Buck said.

State-of-the-art HPC

Increasingly, HPC work extends beyond the supercomputing center. Observatories, satellites, and new types of laboratory instruments must stream and visualize data in real time.

For example, lightsheet microscopy work at Lawrence Berkeley National Lab uses NVIDIA Clara Holoscan to see life in real time at the nanoscale, work that would otherwise take days of processing.

To help bring supercomputing to the edge, NVIDIA is developing Holoscan for HPC, a highly scalable version of its imaging software that aims to accelerate scientific discovery. It will run on accelerated platforms ranging from Jetson AGX modules and appliances to A100 quad servers.

“We can’t wait to see what researchers do with this software,” Buck said.

Accelerating quantum simulations

On another compute-intensive front, Buck reported on the rapid adoption of NVIDIA cuQuantum, a software development kit for accelerating quantum circuit simulations on GPUs.

Dozens of organizations are already using it for research in many fields. It is integrated with major quantum software frameworks, so users can access GPU acceleration without any additional coding.
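As a rough illustration of that point, the sketch below uses PennyLane, one of the frameworks with a cuQuantum-backed backend: switching the device name to lightning.gpu is essentially the only change from a CPU simulation. It assumes the pennylane-lightning-gpu plugin and the cuQuantum libraries are installed; the circuit itself is an arbitrary example.

    # A minimal PennyLane sketch using the cuQuantum-backed lightning.gpu device.
    # Assumes the pennylane-lightning-gpu plugin and cuQuantum libraries are
    # installed; swap in "default.qubit" to run the same circuit on CPU.
    import pennylane as qml

    n_wires = 20
    dev = qml.device("lightning.gpu", wires=n_wires)

    @qml.qnode(dev)
    def circuit(theta):
        for w in range(n_wires):
            qml.Hadamard(wires=w)        # put every qubit in superposition
        qml.RX(theta, wires=0)
        qml.CNOT(wires=[0, 1])
        return qml.expval(qml.PauliZ(0))

    print(circuit(0.54))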

More recently, AWS announced the availability of cuQuantum in its Braket service and demonstrated how cuQuantum can deliver up to a 900x speedup on quantum machine learning workloads while reducing costs by 3.5x.

“Quantum computing has enormous potential, and simulating quantum computers on GPU supercomputers is key to getting us closer to valuable quantum computing,” Buck said. “We’re really excited to be at the forefront of this work,” he added.

To learn more about Accelerated Computing for HPC, watch the full talk below.
