Compute

Hardware Overview

The HPC cluster is connected to the university network over 10 Gbps Ethernet (10 GbE) for high-speed data transfer and provides access to more than 1,024 CPU nodes, 50,000 CPU cores, and 200 GPUs. Compute nodes are wired with 10 GbE or a high-performance HDR200 InfiniBand (IB) interconnect running at 200 Gbps; some nodes run HDR100 IB (100 Gbps) where HDR200 is not supported.
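
To check which interconnect a given compute node has, you can read the link rate from the standard Linux InfiniBand sysfs tree, as in the minimal Python sketch below. This is illustrative only: it assumes that sysfs layout is present on the node, and device names such as mlx5_0 will vary.

```python
# Minimal sketch: print the InfiniBand link rate(s) of the node you are logged
# into, e.g. "200 Gb/sec (4X HDR)" for HDR200. Assumes the standard Linux
# sysfs layout; nodes on 10 GbE only will simply report no IB devices.
from pathlib import Path

base = Path("/sys/class/infiniband")
if not base.is_dir():
    print("No InfiniBand devices found (this node may be on 10 GbE only).")
else:
    for dev in sorted(base.iterdir()):
        for rate_file in sorted(dev.glob("ports/*/rate")):
            port = rate_file.parent.name
            print(f"{dev.name} port {port}: {rate_file.read_text().strip()}")
```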

If you would like to purchase hardware, please schedule a Consultation with the RC team first. 

CPU Nodes

The table below lists the CPU feature names and the number of public and private nodes for each; feature names follow the archspec microarchitecture specification. A short sketch for detecting a node's microarchitecture follows the table. If you are interested in more information about the different partitions on the cluster, including the number of nodes per partition, running time limits, job submission limits, and RAM limits, see Partitions.

Feature Name     Number of Nodes (Public, Private)
skylake          0, 170
zen2             40, 292
zen              40, 300
ivybridge        64, 130
sandybridge      8, 0
haswell          230, 62
broadwell        756, 226
cascadelake      260, 88
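
Because the feature names follow archspec, you can use the archspec Python package to see which microarchitecture name applies to the node you are running on, as in the minimal sketch below. It assumes the archspec package is available in your environment, and the Slurm --constraint usage mentioned in the comments is an assumption based on the feature names above; check the Partitions documentation for the scheduler options that actually apply.

```python
# Minimal sketch: detect the archspec microarchitecture name of the current
# node and compare it against the feature names in the table above.
# Assumes the archspec package is installed (e.g. `pip install archspec`).
import archspec.cpu

host = archspec.cpu.host()  # microarchitecture of the node running this code
print(f"Detected microarchitecture: {host.name}")

# The matching feature name can then be requested from the scheduler, e.g.
# with Slurm: sbatch --constraint=broadwell job.sh
# (constraint syntax is an assumption; see the Partitions documentation).
```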

GPU Nodes

The table below lists the GPU types available on the HPC cluster, the number of public and private nodes for each type, and the number of GPUs per node (with GPU memory noted where given). For more information about GPUs, see Working with GPUs. A short sketch for checking the GPUs on an allocated node follows the table.

GPU Type           Public Nodes (x # GPUs)   Private Nodes (x # GPUs)
V100 PCIe          4 (x2)                    1 (x2), 16 GB
V100 SXM2          24 (x4)                   10 (x4), 16 GB; 8 (x4), 32 GB
T4                 2 (x3-4)                  1 (x4)
A100               3 (x4)                    15 (x2-8)
Quadro RTX 8000    0                         2 (x3)
A30                0                         1 (x3)
RTX A5000          0                         6 (x8)
RTX A6000          0                         3 (x8)
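
Once you have been allocated a GPU node, you can confirm the GPU model and memory from Python, as in the minimal sketch below. It assumes the nvidia-ml-py (pynvml) package is installed and that the code runs on a GPU node; it is an illustration rather than a required workflow.

```python
# Minimal sketch: list the GPUs visible on the current node with their model
# names and total memory, to compare against the table above.
# Assumes nvidia-ml-py is installed (e.g. `pip install nvidia-ml-py`).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        total_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
        print(f"GPU {i}: {name}, {total_gb:.0f} GB")
finally:
    pynvml.nvmlShutdown()
```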


How Can Research Computing Support You?

Accelerate your research at any stage by leveraging our online user guides, hands-on training sessions, and one-on-one guidance.

Documentation

Training

Consultations & Office Hours

Contact Us