Research Computing at West Virginia University — WVU-RC 2026.03.15 documentation
West Virginia University Research Computing (WVU-RC) is a team inside WVU’s
Research Office dedicated to supporting, enabling, and advancing computational research at WVU.
WVU Research Computing maintains a portfolio of infrastructure to support its mission.
We maintain several High-Performance Computing (HPC)
Clusters
, from general-purpose to specialized ones, both on-premises and in the cloud.
WVU Research Computing also provides other services such as a large research data storage facility called
DataDepot
, and a Demilitarized Zone (DMZ) for high-speed data transfers called
WVU Research Exchange (REX).
In addition to maintaining these facilities, WVU Research Computing offers support, consulting, and training
in High-Performance Computing, Data Analysis, Machine Learning, and Parallel Programming.
The table below shows our portfolio of HPC resources (past and present):
HPC Portfolio at WVU Research Computing

Harpers Ferry (in production)
    General-purpose HPC cluster.
    CPU processors: 2x AMD EPYC 9754 128-core.
    Servers provisioned: Oct 29, 2025.
    Compute nodes: 37 | CPU cores: 9472 | Accelerators/GPUs: None

Dolly Sods (in production)
    GPU-accelerated HPC cluster.
    CPU processors: AMD EPYC 7513 32-core.
    Compute nodes: 37 | CPU cores: 1248 | Accelerators/GPUs: 155 NVIDIA GPUs: A30 (120), A40 (19), A100 (16)

WVCTSI Secure Cluster (in production)
    HPC cluster for use with Protected Health Information (PHI); HIPAA compliant.
    CPU cores: 320 | Accelerators/GPUs: 4 NVIDIA Tesla V100S

Thorny Flat Phase 0 (in production)
    General-purpose HPC cluster.
    CPU processors: Intel Skylake and Cascade Lake.
    Installed at the Pittsburgh Supercomputing Center.
    Servers provisioned: Dec 14, 2017.
    Compute nodes: 111 | CPU cores: 4232 | Accelerators/GPUs: 21 NVIDIA GPUs: P6000 (21)

Thorny Flat Big Mem (to be relaunched in 2026)
    General-purpose HPC cluster.
    CPU processors: Intel Skylake and Cascade Lake.
    Installed at WVU's Chemistry Research Laboratory (CRL281).
    Compute nodes: 64 | CPU cores: 2560 | Accelerators/GPUs: None

Spruce Knob (to be relaunched in 2026)
    General-purpose HPC cluster first commissioned in 2017.
    Heterogeneous cluster with Intel processors: Sandy Bridge, Ivy Bridge, Haswell, and Broadwell.
    Compute nodes: 120 | CPU cores: 3376 | Accelerators/GPUs: 14/5 NVIDIA GPUs (Tesla K20m / Tesla K20Xm)

GoFirst (in production)
    Virtual infrastructure running on AWS.
    Serves the Business Data Analytics (BUDA) program of the
    Chambers College of Business and Economics.

Mountaineer (decommissioned in 2018)
    First centrally managed HPC cluster at WVU.
    CPUs from the Intel Westmere microarchitecture (32 nm).
    Compute nodes: 32 | CPU cores: 384 | Accelerators/GPUs: None
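As a quick sanity check on the Harpers Ferry figures above: each node carries two 128-core EPYC 9754 processors, so 37 nodes should account for the listed core total. A minimal shell-arithmetic sketch (illustrative only, not part of the cluster documentation):

```shell
# Harpers Ferry: 37 nodes x 2 sockets x 128 cores per AMD EPYC 9754
echo $((37 * 2 * 128))   # prints 9472, matching the CPU-core total in the table
```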
The contents of this website can be downloaded as a single PDF here:
docs_hpc_wvu.pdf
There are several websites associated with WVU-RC activities; here is a list of
the most relevant ones:
The official webpage in the Research Office portal about the Research Computing Division
WVU Research Computing - Research Office
The legacy documentation is a Wiki website that will remain online for some time
WVU Research Computing - Legacy Wiki
The HelpDesk ticket system
WVU Research Computing - HPC HelpDesk
If your research was made possible by the use of our clusters, please acknowledge the support using the following statements:
For Thorny Flat:
“Computational resources were provided by the WVU Research Computing Thorny Flat HPC cluster, partly funded by NSF OAC-1726534.”
For Dolly Sods:
“Computational resources were provided by the WVU Research Computing Dolly Sods HPC cluster, which is funded in part by NSF OAC-2117575.”
To request help, create a new ticket on the Research Computing HPC HelpDesk web page.
You are welcome to e-mail any member of the WVU-RC team directly, but since we are not always at our desks, the ticket system ensures that your support question will be seen by someone currently available.
Lead for Documentation and Scientific Outreach
Guillermo Avendano-Franco
Introduction
Infrastructure and Services
Getting Help
Policies
Purchasing Compute Nodes
Training
Publications
Quick Start
Getting Access
Connect to the cluster (SSH)
UNIX/Linux Command Line Interface
Terminal-based Text Editors (nano)
Data Storage
Software Packages
Workload Manager (SLURM)
File Transfer (Globus)
Web Interface (Open On-Demand)
Basic Usage
Terminal-based Text Editors
Data Storage
Environment Modules
Workload Manager (SLURM)
File Transfer (Globus and SFTP)
Web Interface (Open On-Demand)
Advanced Usage
Singularity Containers
Conda
GPU Computing
Environment Modules
Jupyter Notebooks
HDF5: Hierarchical Data Format
XWindow
Compile Source Code
Scientific Programming
Fortran, C and C++
Python Language
R Language
Julia Language
MATLAB
Perl Language
Parallel Programming: OpenMP
Parallel Programming: MPI
Parallel Programming: OpenACC
Parallel Programming: CUDA
Software Administration
Editing these documents
Installing Packages in User Locations
Linear Algebra
Boost 1.79
Message Passing Interface
HDF5 and NetCDF
Fast Fourier Transforms
Force Field Molecular Dynamics
CHARM++ and NAMD
Density Functional Theory
Big Data
Python 3.9.7
Matlab N-D multithreaded matrix operations (MMX)
Building Julia
Tinker9
Updating NVIDIA Driver and CUDA Toolkit
Domain Specific Details
Engineering: ANSYS Products
Engineering: ANSYS/Forte
Engineering: OpenFOAM 10
LAMMPS
Bioinformatics: Using Bowtie2
Visualization: VisIt
Compiling Planetary Modeling Packages
NAMD
Bioinformatics: Stacks
Electronic Structure: Orca
Clusters Specifications
Mountaineer (2011-2018)
Spruce Knob (2014-2023)
Thorny Flat (2019-)
CTSI HPC Cluster (2021-)
Dolly Sods (2023-)
Harpers Ferry (2025-)
Go First Data-Analytics Cluster
References
Common Unix commands
Linux Commands
Software Centrally Managed
Indices and tables
Index
Module Index
Search Page