Our Hardware

Our computing resources combine GPU acceleration, multi-node CPU systems, and access to national-scale supercomputing infrastructure. Together, these platforms allow us to benchmark, optimise, and scale HPC workloads for both competition and research.

XENON GPU Cluster

16× NVIDIA A100 GPUs

The XENON GPU cluster consists of 4 XENON NITRO GN29A Duo 2RU nodes. Each node is equipped with 2 AMD EPYC 7313 CPUs, 4 NVIDIA A100 40GB SXM4 GPUs, and 512GB of DDR4-3200 ECC RAM, for a combined total of 16 A100 GPUs and 8 EPYC processors across the cluster.

Each node also carries a 1.9TB PM9A3 NVMe SSD for fast local storage, alongside dual 240GB M5400 PRO SATA SSDs for the OS. The cluster is interconnected via a Mellanox SX6025 InfiniBand switch and a Dell PowerConnect 5548 Gigabit Ethernet switch, and dual 3000W Titanium redundant PSUs per node ensure reliability under sustained load.

Node Specifications

  • Nodes: 4× XENON NITRO GN29A Duo (2RU)
  • CPU: 2× AMD EPYC 7313 per node (8 total)
  • RAM: 16× 32GB DDR4-3200 LP ECC RDIMM (512GB per node)
  • GPU: 4× NVIDIA A100 40GB SXM4 per node (16 total)
  • OS Disk: 2× M5400 PRO 240GB SATA SSD per node
  • Data Disk: PM9A3 1.9TB NVMe SSD per node
  • PSU: 2× 3000W Titanium RPSU per node
  • Networking: Mellanox SX6025 1RU + Dell PowerConnect 5548 1RU
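
As a quick sanity check that a node matches this layout, NVIDIA's standard tooling can list the GPUs and their interconnect topology; a minimal sketch, assuming the NVIDIA driver and nvidia-smi are installed on the node:

    # List the four A100s visible on this node
    nvidia-smi -L

    # Show the GPU-to-GPU interconnect topology matrix
    nvidia-smi topo -m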

Key Uses

  • GPU acceleration for scientific and AI workloads
  • Hybrid CPU-GPU workloads leveraging AMD EPYC cores alongside A100s
  • Development and optimisation of GPU-enabled applications
  • Testing parallel performance on modern accelerator hardware
  • Supporting compute-heavy competition applications

Why It Matters

The XENON system gives our team hands-on experience with the type of hardware increasingly used in cutting-edge HPC environments. It lets us explore performance tuning, memory behaviour, and workload scaling on enterprise-grade GPUs.

Raijin Test Cluster

12 Compute Nodes

Our Raijin-based test cluster gives us a dedicated multi-node environment for distributed computing experiments. Built from 12 compute nodes, it is used for benchmarking, MPI communication tests, and cluster-level tuning.

This cluster is especially important for competition preparation, as it allows us to simulate the kinds of workloads found in the Student Cluster Competition. We use it to test node-to-node scaling, explore hybrid MPI+OpenMP configurations, and evaluate how different system settings affect overall performance.
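
As a concrete illustration, a hybrid run on this cluster might be launched as follows. This is a minimal sketch assuming Open MPI; the binary name (./xhpl), rank count, and thread count are illustrative starting points, not a fixed recipe:

    # 8 OpenMP threads per MPI rank
    export OMP_NUM_THREADS=8

    # 2 ranks per node across 12 nodes (ppr = processes per resource),
    # with 8 cores bound to each rank (pe = processing elements)
    mpirun -np 24 --map-by ppr:2:node:pe=8 --bind-to core ./xhpl

Mapping one rank per socket and filling that socket's cores with threads is a common starting point; the best rank/thread split is found empirically per application.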

Key Uses

  • HPL benchmarking and performance tuning (see the problem-size sketch after this list)
  • MPI scaling and communication analysis
  • Testing hybrid MPI + OpenMP job layouts
  • Cluster design, deployment, and optimisation practice
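
For HPL specifically, the first tuning step is choosing the problem size N. A common rule of thumb is to size the N×N double-precision matrix to roughly 80% of aggregate memory; a minimal sketch, with an assumed per-node memory figure to swap for the real spec:

    # Rule-of-thumb HPL problem size: ~80% of aggregate RAM,
    # 8 bytes per double, rounded down to a multiple of NB.
    NODES=12
    MEM_GB_PER_NODE=192   # assumed per-node memory (placeholder)
    NB=192                # typical block size; tune empirically
    N=$(awk -v m=$((NODES * MEM_GB_PER_NODE)) -v nb=$NB \
        'BEGIN { n = sqrt(0.80 * m * 2^30 / 8); print int(n / nb) * nb }')
    echo "Suggested HPL N = $N (NB=$NB)"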

Why It Matters

The Raijin test cluster gives us direct control over a real HPC system. This makes it an ideal platform for learning how hardware, networking, scheduling, and workload placement interact in a distributed environment.

NCI Gadi

National Supercomputing Access

In addition to our local systems, we have access to Gadi at the National Computational Infrastructure (NCI), one of Australia’s major supercomputing facilities. Gadi provides large-scale CPU and GPU resources for advanced research and production workloads.

Access to Gadi allows us to test applications at a scale that goes well beyond our local infrastructure. It is a valuable resource for validating optimisations, running larger jobs, and experiencing the tools and job scheduling environments used in professional HPC settings.

Key Uses

  • Large-scale research and simulation workloads
  • Testing jobs on nationally significant HPC infrastructure
  • Running workloads that exceed local hardware capacity
  • Experience with PBS Pro scheduling on national supercomputing infrastructure
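
As an illustration of the PBS Pro workflow, a Gadi job is described by a batch script and submitted to the scheduler. This is a hypothetical sketch; the project code, queue, and resource requests are placeholders to adapt, not a tested job:

    #!/bin/bash
    #PBS -P ab12                # NCI project code (placeholder)
    #PBS -q normal              # standard CPU queue
    #PBS -l ncpus=96
    #PBS -l mem=380GB
    #PBS -l walltime=01:00:00
    #PBS -l wd                  # start in the submission directory

    module load openmpi
    mpirun ./my_app             # illustrative application binary

Submission and monitoring then use the usual PBS commands, qsub and qstat.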

Why It Matters

Gadi connects our team to real-world national HPC infrastructure. It helps bridge the gap between local experimentation and the larger-scale systems used by researchers and industry across Australia.

Pawsey Setonix

Pawsey Supercomputing Research Centre

We also have access to Setonix at the Pawsey Supercomputing Research Centre in Perth. Setonix is one of the world's most energy-efficient supercomputers and a flagship resource for Australian research, providing large CPU and GPU partitions across thousands of nodes built on AMD EPYC processors and AMD Instinct GPU accelerators.

Access to Setonix allows our team to run workloads at a national scale, validate performance optimisations, and gain hands-on experience with the tools, schedulers, and workflows used in professional HPC environments.

Key Uses

  • Large-scale GPU-accelerated and CPU-parallel workloads
  • Validation of optimisations beyond local hardware capacity
  • Experience with Slurm scheduling on a national supercomputer (see the sketch after this list)
  • Supporting memory-intensive and high-throughput research jobs
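
As an illustration of the Slurm workflow, a Setonix job is likewise described by a batch script and submitted with sbatch. This is a hypothetical sketch; the account, partition, and sizes are placeholders, not a tested configuration:

    #!/bin/bash
    #SBATCH --account=project123    # Pawsey project code (placeholder)
    #SBATCH --partition=work        # CPU partition (placeholder)
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=128
    #SBATCH --time=01:00:00

    srun ./my_app                   # illustrative application binary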

Why It Matters

Setonix gives our team access to world-class infrastructure and exposes us to the scale of computing used by leading researchers across Australia. It reinforces our ability to develop, test, and optimise HPC applications in a real national supercomputing environment.