Compute
Hardware Overview
The HPC cluster connects to the university network over 10 Gigabit Ethernet (GbE) for high-speed data transfer and provides access to more than 1,024 CPU nodes, 50,000 CPU cores, and 200 GPUs. Compute nodes are wired with either 10 GbE or a high-performance HDR200 InfiniBand (IB) interconnect running at 200 Gbps; nodes that do not support HDR200 use HDR100 IB at 100 Gbps instead.
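As a quick sanity check, assuming the InfiniBand diagnostic tools (infiniband-diags) and the iproute2 utilities are available on a compute node, a snippet like the one below can report which interconnect a node actually has; the commands are illustrative rather than cluster-specific.

```bash
# Report the InfiniBand link rate on an IB-connected node
# (e.g., "Rate: 200" for HDR200, "Rate: 100" for HDR100)
ibstat | grep -i rate

# List network interfaces and their state to distinguish
# Ethernet devices from InfiniBand devices
ip -brief link
```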
If you would like to purchase hardware, please schedule a Consultation with the RC team first.
CPU Nodes
The table below lists the feature names, the number of nodes by partition type (public and private), and the range of RAM per node. Feature names follow the archspec microarchitecture specification.
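As a minimal sketch, assuming the cluster schedules jobs with Slurm and exposes these feature names as node features, a CPU microarchitecture could be requested with a --constraint directive like the one below; the feature name cascadelake and the resource values are placeholders, so substitute a feature name from the table.

```bash
#!/bin/bash
#SBATCH --job-name=cpu_feature_demo
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:10:00
#SBATCH --constraint=cascadelake   # placeholder archspec feature name; use one from the table

# Report where the job landed and which CPU model the node provides
hostname
lscpu | grep "Model name"
```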
GPU Nodes
The table below shows the GPU types, architectures, memory, and other features of the GPUs on the HPC cluster. For more information about GPUs, see Working with GPUs. For details about the different partitions on the cluster, including the number of nodes per partition, run time limits, job submission limits, and RAM limits, see Partitions.
Once you know which GPU type you want, you can learn how to specify that GPU type when submitting your jobs.
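As a hedged example, assuming Slurm is the scheduler and GPU types are requested through a generic resource (GRES) specification, a job script might ask for one GPU of a particular type as shown below; the type string a100 is a placeholder, so check the cluster documentation for the exact names in use.

```bash
#!/bin/bash
#SBATCH --job-name=gpu_type_demo
#SBATCH --gres=gpu:a100:1   # placeholder GPU type string; substitute a type from the table above
#SBATCH --time=00:10:00

# Confirm which GPU the scheduler assigned to the job
nvidia-smi --query-gpu=name,memory.total --format=csv
```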
How Can Research Computing Support You?
Accelerate your research at any stage with our online user guides, hands-on training sessions, and one-on-one guidance.