Research Computing
Connecting the research community at Northeastern University with high performance computing solutions.
Learn about High Performance Computing at Northeastern
As a researcher at Northeastern University, you can take advantage of the comprehensive research computing offerings and services available to you—including access to centralized high performance computing (HPC) clusters, storage, visualization, software, high-level technical and scientific consultations, documentation, and training.
Explorer is a high performance computing (HPC) resource for the Northeastern University research community. The Explorer cluster is located in the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, MA. MGHPCC is a 90,000 square-foot, 15 megawatt research computing and data center facility that houses computing resources for six institutions: Northeastern, BU, Harvard, MIT, UMass, and Yale.
The Explorer cluster provides access to over 45,000 CPU cores and over 525 GPUs to all Northeastern faculty and students free of charge. Hardware currently available for research consists of a combination of Intel Xeon (Sapphire Rapids, Ice Lake, Cascade Lake, Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge) and AMD (Zen, Zen 2, Zen 3, and Zen 4) CPU microarchitectures. GPU offerings include a selection of NVIDIA Pascal (P100), Volta (V100), Turing (T4), Ampere (A100), RTX (A5000 and A6000), Lovelace (L40), and Hopper (H100 and H200) cards.
Explorer is connected to the university network over 10 Gigabit Ethernet (10 GbE) for high-speed data transfer, and it provides 6 PB of available storage on a high-performance file system. Compute nodes are connected with either 10 GbE or high data rate (HDR) InfiniBand (100 or 200 Gbps), supporting computational workloads of all types and scales.
Connecting You to the Power of Explorer


Research Computing Office Hours
RC Office Hours are a great way to connect with the Research Computing team for short (10–15 minute) consultations. Office Hours are held every Wednesday from 3 to 4 p.m. ET and every Thursday from 11 a.m. to noon ET. All current or prospective Explorer users are welcome to join anytime during these hours.
Research Computing Office Hours – Please Note:
Due to Thanksgiving, Research Computing will not be holding Office Hours on Wednesday, November 26 or Thursday, November 27.

News from the AVP
- Research Computing Operational Improvements on the Explorer Cluster. Dear Research Computing Community: As you may have noticed, our newly added H200 GPUs on the Explorer cluster have quickly become highly popular and heavily utilized. We are excited that these state-of-the-art GPUs…
- Research Computing Infrastructure Enhancements. Dear Research Computing Community: As announced in our prior RC communications and April monthly newsletter, we are excited to share with you that our Research Computing team has significantly enhanced…
News from Research Computing
- Debugging Unexpected Job Termination on Explorer. Sometimes, jobs may seem to be terminated without any apparent reason. However, there are steps we can take to identify the actual reason. Here, we’ll discuss two possible reasons: OOM…
- Parallelizing Multiple Small Jobs. Independent, parallel jobs can be run on an HPC cluster using srun commands inside sbatch jobs. SLURM can first allocate resources based on the batch job parameters and then use…
- All Public GPUs Now on Explorer. All public GPU resources have been moved from Discovery to Explorer. The Explorer cluster provides a new, more efficient operating system (Rocky Linux 9.3) and state-of-the-art GPU resources for compute-intensive…
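The pattern described in the "Parallelizing Multiple Small Jobs" item above can be sketched as a Slurm batch script. This is a minimal illustration only: the partition name, program name (`my_analysis`), input files, and resource counts are hypothetical placeholders, and the right values depend on your cluster and workload.

```shell
#!/bin/bash
#SBATCH --job-name=small-jobs     # hypothetical job name
#SBATCH --partition=short         # placeholder partition; check available partitions with sinfo
#SBATCH --ntasks=4                # total tasks Slurm allocates for the whole job
#SBATCH --cpus-per-task=1
#SBATCH --time=00:30:00

# Launch four independent single-task job steps inside the one allocation.
# --exact confines each srun step to only the resources it requests, so the
# steps can run side by side; the trailing '&' backgrounds each step and
# 'wait' blocks until all of them finish.
for i in 1 2 3 4; do
    srun --ntasks=1 --exact ./my_analysis "input_${i}.dat" &   # ./my_analysis is a placeholder program
done
wait
```

If one of the steps appears to die for no obvious reason, `sacct -j <jobid> --format=JobID,State,ExitCode,MaxRSS` can help confirm whether it was an out-of-memory termination, as discussed in the debugging post above.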
How Can Research Computing Support You?
Accelerate your research at any stage by leveraging our online user guides, hands-on training sessions, and one-on-one guidance.


