Compute Platforms
A summary (or system list) of LC computing platforms is provided below. Click a platform name for detailed information.
Each entry lists, where data is available: zone; nodes*; CPU cores per node; GPUs per node; memory per node (GB); clock speed (GHz); peak PFLOPS for CPUs, GPUs, and CPUs+GPUs; CPU architecture; GPU architecture [APU if applicable]; switch; vendor; program(s); class; year sited; OS; and access‡. Fields with no published value are omitted.
Bengal (SCF)
  Nodes*: 1,158 | CPU cores per node: 112 | Memory per node: 256 GB | Clock: 2.0 GHz
  Peak PFLOPS: 8.0 (CPUs), 8.0 (CPUs+GPUs)
  CPU architecture: Intel Sapphire Rapids | Switch: Cornelis Networks | Vendor: Dell
  Program(s): ASC, M&IC | Class: CTS-2 | Year sited: 2023 | OS: TOSS 4
Corona (CZ)
  Nodes*: 121 | CPU cores per node: 48 | Memory per node: 256 GB
  CPU architecture: 121 nodes AMD Rome | GPU architecture: 121 nodes AMD 8xMI50
  Switch: IB HDR | Vendor: Penguin
  Program(s): ASC, M&IC, CARES | Class: Other | Year sited: 2019 | OS: TOSS 4 | Access: Limited
Dane (CZ)
  Nodes*: 1,544 | CPU cores per node: 112 | Memory per node: 256 GB | Clock: 2.0 GHz
  Peak PFLOPS: 10.7 (CPUs), 10.7 (CPUs+GPUs)
  CPU architecture: Intel Sapphire Rapids | Switch: Cornelis Networks | Vendor: Dell
  Program(s): ASC, M&IC | Class: CTS-2 | Year sited: 2023 | OS: TOSS 4
El Capitan (SCF)
  Nodes*: 11,520 | CPU cores per node: 96 | Clock: 2.0 GHz
  Peak PFLOPS: 2,889.2 (CPUs+GPUs)
  CPU architecture: 4th Generation AMD EPYC | GPU architecture: CDNA 3 [APU: AMD MI300A]
  Switch: HPE Slingshot 11 | Vendor: HPE Cray
  Program(s): ASC | Class: ATS-4, CORAL-2 | Year sited: 2024 | OS: TOSS 4
Jade (SCF)
  Nodes*: 1,302 | CPU cores per node: 36 | Memory per node: 128 GB | Clock: 2.1 GHz
  Peak PFLOPS: 1.6 (CPUs), 1.6 (CPUs+GPUs)
  CPU architecture: Intel Xeon E5-2695 v4 | Switch: Cornelis Networks Omni-Path | Vendor: Penguin
  Program(s): ASC | Class: CTS-1 | Year sited: 2016 | OS: TOSS 4
Jadeita (SCF)
  Nodes*: 1,270 | CPU cores per node: 36 | Memory per node: 128 GB | Clock: 2.1 GHz
  Peak PFLOPS: 1.5 (CPUs), 1.5 (CPUs+GPUs)
  CPU architecture: Intel Xeon E5-2695 v4 | Switch: Cornelis Networks Omni-Path | Vendor: Penguin
  Program(s): ASC | Class: CTS-1 | Year sited: 2016 | OS: TOSS 4
Magma (SCF)
  Nodes*: 772 | CPU cores per node: 96 | Memory per node: 384 GB | Clock: 2.3 GHz
  Peak PFLOPS: 5.3 (CPUs), 5.3 (CPUs+GPUs)
  CPU architecture: Intel Cascade Lake AP | Switch: Cornelis Networks Omni-Path | Vendor: Penguin
  Program(s): ASC | Class: CTS-1 | Year sited: 2020 | OS: TOSS 4
Mammoth (CZ)
  Nodes*: 69 | CPU cores per node: 128 | Memory per node: 2,048 GB | Clock: 2.3 GHz
  Peak PFLOPS: 0.294 (CPUs), 0.294 (CPUs+GPUs)
  CPU architecture: AMD Rome | Switch: Cornelis Networks Omni-Path | Vendor: Supermicro
  Program(s): ASC, M&IC | Class: CTS-1 | Year sited: 2020 | OS: TOSS 4 | Access: Limited
Matrix (CZ)
  Nodes*: 30 | CPU cores per node: 112 | Memory per node: 504 GB | Clock: 3.7 GHz
  Peak PFLOPS: 0.198 (CPUs), 3.8 (GPUs), 4.0 (CPUs+GPUs)
  CPU architecture: Intel Xeon Platinum 8480+ | GPU architecture: NVIDIA H100
  Switch: IB | Vendor: Dell
  Class: CTS-2 | Year sited: 2025 | OS: TOSS 4
Mica (SCF)
  Nodes*: 384 | CPU cores per node: 36 | Memory per node: 128 GB | Clock: 2.1 GHz
  Peak PFLOPS: 0.464 (CPUs), 0.464 (CPUs+GPUs)
  CPU architecture: Intel Xeon E5-2695 v4 | Switch: Cornelis Networks Omni-Path | Vendor: Penguin
  Program(s): ASC | Class: CTS-1 | Year sited: 2017 | OS: TOSS 4
Pinot (SNSI)
  Nodes*: 187 | CPU cores per node: 36 | Memory per node: 128 GB | Clock: 2.1 GHz
  Peak PFLOPS: 0.232 (CPUs), 0.232 (CPUs+GPUs)
  CPU architecture: Intel Xeon E5-2695 v4 | Switch: Cornelis Networks Omni-Path | Vendor: Penguin
  Program(s): M&IC | Class: CTS-1 | Year sited: 2018 | OS: TOSS 4
RZAdams (RZ)
  Nodes*: 128 | CPU cores per node: 96 | Memory per node: 512 GB | Clock: 3.7 GHz
  Peak PFLOPS: 0.358 (CPUs), 31.7 (GPUs), 32.1 (CPUs+GPUs)
  CPU architecture: 4th Generation AMD EPYC | GPU architecture: CDNA 3 [APU: AMD MI300A]
  Switch: HPE Slingshot 11 | Vendor: HPE Cray
  Program(s): ASC | Class: ATS-4 | Year sited: 2024 | OS: TOSS 4
RZGenie (RZ)
  Nodes*: 48 | CPU cores per node: 36 | Memory per node: 128 GB | Clock: 2.1 GHz
  Peak PFLOPS: 0.058 (CPUs), 0.058 (CPUs+GPUs)
  CPU architecture: Intel Xeon E5-2695 v4 | Switch: Cornelis Networks Omni-Path | Vendor: Penguin
  Program(s): ASC | Class: CTS-1 | Year sited: 2019 | OS: TOSS 4 | Access: Limited
RZHound (RZ)
  Nodes*: 386 | CPU cores per node: 112 | Memory per node: 256 GB | Clock: 2.0 GHz
  Peak PFLOPS: 2.7 (CPUs), 2.7 (CPUs+GPUs)
  CPU architecture: Intel Sapphire Rapids | Switch: Cornelis Networks | Vendor: Dell
  Program(s): ASC, M&IC | Class: CTS-2 | Year sited: 2023 | OS: TOSS 4
RZVector (RZ)
  Nodes*: 16 | CPU cores per node: 112 | Memory per node: 504 GB | Clock: 3.7 GHz
  CPU architecture: Intel Xeon Platinum 8480+ | GPU architecture: NVIDIA H100
  Switch: IB | Vendor: Dell
  Program(s): ASC | Class: CTS-2 | Year sited: 2025 | OS: TOSS 4
RZVernal (RZ)
  Nodes*: 38 | CPU cores per node: 64 | Clock: 1.9 GHz
  Peak PFLOPS: 0.512 (CPUs), 6.8 (GPUs), 6.9 (CPUs+GPUs)
  CPU architecture: AMD Trento | GPU architecture: AMD MI-250X
  Switch: HPE Slingshot 11 | Vendor: HPE Cray
  Program(s): ASC | Class: ATS-4/EA, CORAL-2 | Year sited: 2022 | OS: TOSS 4 | Access: Limited
RZWhippet (RZ)
  Nodes*: 36 | CPU cores per node: 112 | Memory per node: 256 GB | Clock: 2.0 GHz
  Peak PFLOPS: 0.293 (CPUs), 0.293 (CPUs+GPUs)
  CPU architecture: Intel Xeon Platinum 8479, Intel Xeon CPU Max 9480
  Switch: Cornelis | Vendor: Dell
  Program(s): ASC | Class: CTS-2 | Year sited: 2022 | OS: TOSS 4 | Access: Limited
Tenaya (SCF)
  Nodes*: 24 | CPU cores per node: 64 | Memory per node: 512 GB | Clock: 2.0 GHz
  Peak PFLOPS: 0.048 (CPUs), 4.3 (GPUs), 4.4 (CPUs+GPUs)
  CPU architecture: AMD Trento | GPU architecture: AMD MI-250X
  Switch: HPE Slingshot 11 | Vendor: HPE Cray
  Program(s): ASC | Class: ATS-4/EA, CORAL-2 | Year sited: 2022 | OS: TOSS 4 | Access: Limited
Tioga (CZ)
  Nodes*: 32 | CPU cores per node: 64 | Memory per node: 512 GB | Clock: 2.0 GHz
  Peak PFLOPS: 0.064 (CPUs), 5.8 (GPUs), 5.8 (CPUs+GPUs)
  CPU architecture: AMD Trento | GPU architecture: AMD MI-250X
  Switch: HPE Slingshot 11 | Vendor: HPE Cray
  Program(s): ASC, M&IC | Class: ATS-4/EA, CORAL-2 | Year sited: 2022 | OS: TOSS 4
Tron (SCF)
  Nodes*: 146 | CPU cores per node: 32 | Memory per node: 384 GB | Clock: 2.9 GHz
  Peak PFLOPS: 0.433 (CPUs), 0.433 (CPUs+GPUs)
  CPU architecture: Intel Cascade Lake | Switch: Mellanox EDR | Vendor: Supermicro
  Program(s): ASC, M&IC | Class: CTS, VIS | Year sited: 2020 | OS: TOSS 4
Tuolumne (CZ)
  Nodes*: 1,152 | CPU cores per node: 96 | Clock: 2.0 GHz
  Peak PFLOPS: 294.2 (CPUs+GPUs)
  CPU architecture: 4th Generation AMD EPYC | GPU architecture: CDNA 3 [APU: AMD MI300A]
  Switch: HPE Slingshot 11 | Vendor: HPE Cray
  Program(s): ASC, M&IC, Bio | Class: ATS-4, CORAL-2 | Year sited: 2024 | OS: TOSS 4
Vertex (SCF)
  Nodes*: 40 | CPU cores per node: 16 | Memory per node: 383 GB | Clock: 2.5 GHz
  Peak PFLOPS: 0.028 (CPUs), 1.0 (GPUs), 1.0 (CPUs+GPUs)
  CPU architecture: Intel Xeon Silver 4215 | GPU architecture: NVIDIA Tesla | Vendor: Supermicro
  Year sited: 2022 | OS: TOSS 4
NOTE: To reduce clutter, systems with peak PFLOPS > 1 show only one decimal place, while those with peak PFLOPS < 1 show three. Clicking a system name opens a page showing three decimal places for all machines.
*Nodes = user-available nodes. For total nodes, see our higher-level overview "Livermore Computing Systems Summary."
**Total memory = for most systems, this number is the CPU node-only memory; for Sierra, Lassen (coming soon), and RZAnsel, it is combined CPU+GPU memory.
‡Access: Platforms are General Availability unless access is marked "Limited."
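The peak PFLOPS figures above are theoretical hardware peaks, not measured throughput. For a CPU partition, the peak is roughly nodes × cores per node × clock rate × double-precision FLOPs per core per cycle. Below is a minimal Python sketch using a few rows from the table; the 16 FLOPs/cycle figure is our assumption for the Broadwell-era Xeon E5-2695 v4 (two AVX2 FMA units × 4 doubles × 2 operations per FMA), not a published LC number, and the record layout simply stands in for the web table's sortable columns.

    # A few rows from the table above, kept as plain records so they can be
    # sorted or filtered, much as the web page's column sort does.
    platforms = [
        {"name": "Jade",  "nodes": 1302, "cores": 36,  "clock_ghz": 2.1, "peak_pflops": 1.6},
        {"name": "Dane",  "nodes": 1544, "cores": 112, "clock_ghz": 2.0, "peak_pflops": 10.7},
        {"name": "Magma", "nodes": 772,  "cores": 96,  "clock_ghz": 2.3, "peak_pflops": 5.3},
    ]

    # Sort by peak PFLOPS, descending.
    for p in sorted(platforms, key=lambda row: row["peak_pflops"], reverse=True):
        print(f'{p["name"]:<8} {p["peak_pflops"]:>6.1f} PFLOPS')

    # Approximate theoretical CPU peak: nodes x cores x clock x FLOPs/core/cycle.
    def peak_pflops(nodes, cores, clock_ghz, flops_per_core_cycle):
        return nodes * cores * (clock_ghz * 1e9) * flops_per_core_cycle / 1e15

    # Cross-check Jade. Assumption: 16 double-precision FLOPs/core/cycle
    # for the Xeon E5-2695 v4 (2 AVX2 FMA units x 4 doubles x 2 ops per FMA).
    print(round(peak_pflops(1302, 36, 2.1, 16), 1))  # 1.6, matching the table

Published peaks can differ slightly from such estimates because vendors may quote reduced sustained clock rates under full vector load.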
Machine Messages of the Day (MOTDs)

System Status
  CZ Compute Platform Status
  RZ Compute Platform Status
  CZ File Systems Status
  RZ File Systems Status
Operating Systems
OS (operating system) types are TOSS 3/TOSS 4 (Tri-Lab Operating System Stack, formerly CHAOS, the Clustered High Availability Operating System, derived from Red Hat Linux); SLES/CNK (SUSE Linux Enterprise Server/Compute Node Kernel); and RHEL (Red Hat Enterprise Linux). Click a platform name to see OS information for that platform.
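To confirm which OS a particular node is running, you can read its release files from a shell on that node. A minimal sketch, assuming /etc/os-release (standard on Red Hat-derived systems such as TOSS) plus an assumed TOSS-specific /etc/toss-release that may not exist everywhere:

    from pathlib import Path

    # Print the first OS release file found on this node.
    # /etc/toss-release is an assumed TOSS-specific file; /etc/os-release
    # is the standard fallback on Red Hat-derived distributions.
    for name in ("/etc/toss-release", "/etc/os-release"):
        path = Path(name)
        if path.exists():
            print(f"{name}:\n{path.read_text().strip()}")
            break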
Additional Information
Other useful (and more detailed) information may be found by following these links:
Introduction to Livermore Computing Resources
Testbeds
Compute Platforms with GPUs
Interactive and batch job limits on OCF production machines
LC Systems Summary