Storage Services at CHPC
CHPC currently offers four different types of storage: home directories, group space, scratch file systems, and an archive storage system. All storage types except the archive storage are accessible from every CHPC resource.
Note that the information below is specific to the General Environment. In the Protected Environment (PE), all four types of storage exist; however, the nature of the storage, pricing, and policies vary in the PE. See the Protected Environment Storage Services page for more details.
For more information on CHPC data policies, including details on current backup policies,
please visit our
File Storage Policies
page.
Please remember that you should always have an additional copy, and possibly multiple
copies, of any critical data on independent storage systems. While storage systems
built with data resiliency mechanisms (such as RAID, erasure coding, or other similar technologies) allow for multiple component failures, they do not offer any protection
against large-scale hardware failures, software failures leading to corruption, or
the accidental deletion or overwriting of data. Please take the necessary steps to
protect your data to the level you deem necessary.
Home directories
By default, each user is provided with a
50 GB home directory
free of charge. To view the current home directory usage and quota status, run the
command mychpc storage.
Home directories are not backed up by default
; important data should be copied to a departmental file server or other locations
as a backup. Some groups may have purchased home directory space with CHPC-managed
backups. Please confirm with your PI or the CHPC.
Home directories can be mounted on local desktops. See the
Data Transfer Services
page for information on mounting CHPC file systems on local machines.
Quota enforcement policies
The 50 GB quota on this directory is enforced through a two-level quota system. Once a user has exceeded the 50 GB quota, they have 7 days to clean up their space so that they are using less than 50 GB; if they do not, they will no longer be able to write or edit any files until files are cleaned up and the home directory is under the quota.
If your home directory grows to 75 GB, you will no longer be able to write any files until files are cleaned up and your home directory is under the quota.
When over quota,
you will not be able to start a FastX or OnDemand session
, as those tasks write to your home directory, but an SSH session can be used to connect
to the CHPC and free up space.
To find what is taking up space in your directory, the command ncdu will show you the size of each directory; you can run it from your home directory to see where space is being used. If your quota is more than 50 GB, it is possible your home directory is in a larger, shared home space; see below for more information about shared home spaces.
The output from the command mychpc storage only updates every hour or so. Consequently, output from the command will be outdated immediately following the deletion of files in your home directory. The command ncdu reports current storage usage immediately.
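For example, a minimal sketch of checking usage from a shell session (the du flags shown are one common choice, not a CHPC requirement):

    # Check quota status (output may lag recent deletions by an hour or so).
    mychpc storage

    # Interactively browse directory sizes, starting from the home directory.
    cd ~
    ncdu

    # Non-interactive alternative: summarize each top-level item, largest last.
    du -sh ~/* ~/.[!.]* 2>/dev/null | sort -h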
Purchases of larger home directory space
CHPC also allows CHPC PIs to buy larger home directory storage at a price based on hardware cost recovery. The hardware for the current home directory solution was originally purchased at $900/TB and was put into service in May 2022, as described in the Spring 2022 newsletter. Home directory storage now has a prorated cost of $360/TB (updated on 05/02/2025) for the remaining lifetime of the hardware. The current warranty expires in May 2027, and the prorated price is updated every May for the remaining lifetime of the storage. Once purchased, the home directories of all members of the PI's group will be provisioned in this space.
Purchase of home directory space includes the cost of the space on the VAST storage system along with backup of this space. The backup is to the CHPC object storage, Pando, and consists of a weekly full backup with nightly incremental backups and a two-week retention window.
If you are interested in this option, please contact us by emailing
helpdesk@chpc.utah.edu
to discuss your storage needs.
Group-level storage
CHPC PIs can purchase general environment group-level file storage at the TB level. CHPC purchases the hardware for this storage in bulk and then sells it to individual groups in TB quantities, so, depending on the amount of group storage space you are interested in purchasing, CHPC may have the storage to meet your needs on hand.
Group spaces are not backed up by default
; important data should be copied to a departmental file server or other locations
as a backup. Some groups may have purchased space with CHPC-managed backups. Please
confirm with your PI or the CHPC.
The current rates for group space are:
$150/TB without backups
$450/TB with backups (original + 1 full copy)
As of March 2026, storage vendors have limited capacity available to sell. The lead
time on storage purchases—particularly for large purchases of multiple terabytes—is
now several months. CHPC staff are watching the situation closely and working to acquire
storage as quickly as possible. If you have questions or concerns, please contact
us.
Storage purchases are a one-time purchase for five years. If you are interested in purchasing group-level storage, please contact us at helpdesk@chpc.utah.edu. A more detailed description of this storage offering is available.
Current backup policies can be found at File Storage Policies. The CHPC also provides information on a number of user-driven alternatives to our group-level storage service; see the User Driven Backup Options section below for more information.
Group directories can be mounted on local desktops. See the
Data Transfer Services
page for information on mounting CHPC file systems on local machines.
Group space is on shared hardware and is not designed for running jobs that have high
IO requirements
. Running such jobs on group space can bog down the system and cause issues for other
groups on the hardware. Please refer to the
scratch file system
information below.
For group-level storage options (project space) in the Protected Environment, please visit this link.
Scratch file systems
Scratch space is a high-performance temporary file system for files being accessed and operated on during jobs. It is recommended to transfer data from home directories or group spaces to scratch when running jobs, as the scratch systems are designed for better performance and this prevents home and group spaces from getting bogged down.
Scratch space is generally not intended for long-term file storage. Scratch spaces are not backed up. Unless otherwise noted, files in scratch spaces are deleted automatically after a period of inactivity.
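As an illustration, a minimal sketch of a batch script that stages data into a general scratch space and copies results back (the account, partition, program name, and paths are placeholders, not CHPC-prescribed values):

    #!/bin/bash
    #SBATCH --ntasks=1
    #SBATCH --time=02:00:00
    #SBATCH --account=my-group          # placeholder account
    #SBATCH --partition=notchpeak       # placeholder partition

    # Stage input from home/group space into scratch.
    SCRDIR=/scratch/general/vast/$USER/$SLURM_JOB_ID
    mkdir -p "$SCRDIR"
    cp ~/input_data.dat "$SCRDIR/"

    # Run the job against the scratch copy.
    cd "$SCRDIR"
    ~/bin/my_program input_data.dat > results.out   # placeholder program

    # Copy results back to a permanent location and clean up.
    mkdir -p ~/results
    cp results.out ~/results/$SLURM_JOB_ID.out
    rm -rf "$SCRDIR"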
The Protected Environment has its own scratch space, detailed on a separate page. If you have questions about using the scratch file systems or about IO-intensive jobs, please contact helpdesk@chpc.utah.edu.
General-purpose scratch spaces
/scratch/general/nfs1 - a 595 TB NFS system accessible from all general environment CHPC resources
Files in this space are deleted automatically after a period of time, and they are not backed up
/scratch/general/vast - a 1 PB file system available from all general environment CHPC resources
Files in this space are deleted automatically after a period of time, and they are not backed up
There is a per-user quota of 50 TB on this scratch file system
Scratch space for AI-related research
/scratch/rai/vast1 - a 1.8 PB file system, funded by the One-U Responsible AI Initiative for AI-related research; available from all general environment CHPC resources
Files in this space are not deleted automatically, but they are not backed up
There is a per-group quota of 50 TB and 100 million inodes on this scratch file system
Access is granted to researchers with AI-related research; please contact us at
helpdesk@chpc.utah.edu
to request access
Temporary file systems
Temporary file systems are for short-term storage during (but not after) a job; they should not be used for long-term storage. Files in temporary file systems are removed automatically. Temporary file systems are not backed up.
/scratch/local
Each node on the cluster has a local disk mounted at /scratch/local. This disk can
be used for storing intermediate files during calculations. Because it is local to
the node, this will have lower-latency file access. However, be aware that these files
are only accessible on the node and cannot be accessed off of the node unless the
files are moved to another shared file system (home, group, scratch) before the job
completes.
Access permissions to /scratch/local have been set such that users cannot create directories in the top-level /scratch/local directory. Instead, as part of the Slurm job prolog (before the job is started), a job-level directory, /scratch/local/$USER/$SLURM_JOB_ID, will be created automatically. Only the job owner will have access to this directory. At the end of the job, in the Slurm job epilog, this job-level directory will be removed. All Slurm scripts that make use of /scratch/local must be adapted to accommodate this change. Additional updated information is provided on the CHPC Slurm page.
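For example, a minimal sketch of a batch script that works inside the prolog-created directory (the program name and output location are placeholders):

    #!/bin/bash
    #SBATCH --ntasks=1
    #SBATCH --time=01:00:00

    # The Slurm prolog creates this job-level directory automatically;
    # the epilog removes it when the job ends.
    LOCALDIR=/scratch/local/$USER/$SLURM_JOB_ID
    cd "$LOCALDIR"

    # Write intermediate files here for fast, node-local access.
    ~/bin/my_program --workdir "$LOCALDIR" > output.log   # placeholder program

    # Copy anything worth keeping back to a shared file system before the job ends.
    mkdir -p ~/job_output
    cp output.log ~/job_output/$SLURM_JOB_ID.log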
/scratch/local is software-encrypted. Each time a node is rebooted, this software encryption is set up again from scratch, purging all content in this space. There is also a cron job in place to scrub /scratch/local of content that has not been accessed for over 2 weeks. This scrub policy can be adjusted on a per-host basis; a group that owns a node can opt to have us disable the scrub so that it will not run on that host.
/tmp and /var/tmp
Linux defines temporary file systems at /tmp and /var/tmp. CHPC cluster nodes set up these temporary file systems as a RAM disk with limited capacity. All interactive and compute nodes also have local spinning-disk storage at /scratch/local.
If a user program is known to need temporary storage, it is advantageous to define the location of that storage by setting the environment variable TMPDIR to point to /scratch/local, as the /scratch/local disk drives range from 40 to 500 GB depending on the node, much more than the default /tmp size.
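For instance, a short sketch of redirecting temporary files in a batch script (the job-level directory follows the prolog behavior described above; the program is a placeholder):

    # Point temporary files at node-local scratch rather than the small RAM-backed /tmp.
    export TMPDIR=/scratch/local/$USER/$SLURM_JOB_ID

    # Programs that honor TMPDIR will now write their temporary files
    # to the node-local disk.
    ~/bin/my_program    # placeholder program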
Archive storage
Archive storage is not backed up. (Note that archive storage is itself the target for CHPC-managed backups from other storage systems. Consequently, if a group has purchased home or group space with CHPC-managed backups and would like an additional backup copy, we recommend looking at other options for data resiliency.)
Pando
CHPC uses Ceph, an object-based archival storage system developed at UC Santa Cruz. We offer a 6+3 erasure coding configuration, allowing for the $150/TB price for five years. In alignment with our current group space offerings, we operate this space in a condominium-style model, reselling the space in TB chunks.
The current rate for archive storage is $150/TB for five years.
One of the key features of the archive system is that users manage their archive directly
by moving
data in and out of the archive storage as needed. This space is a standalone entity
and is not mounted on other CHPC resources.
To transfer data from other CHPC resources (or local resources) to the Pando archive,
a number of methods are available:
Pando is available as an endpoint on Globus (see
the Data Transfer page
for more information).
Ceph presents the storage as an S3 endpoint, which allows access via applications that use Amazon's S3 API. GUI tools such as Cyberduck or Transmit (for Mac) as well as command-line tools such as s3cmd and rclone can be used to move the data; a short rclone sketch follows this list.
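As an illustration only, a hedged sketch of driving the S3 interface with rclone (the remote name, endpoint URL, keys, bucket, and paths are placeholders; obtain your actual endpoint and credentials from CHPC):

    # One-time setup: define an S3 remote for the archive (values are placeholders).
    rclone config create pando s3 \
        provider Ceph \
        access_key_id YOUR_ACCESS_KEY \
        secret_access_key YOUR_SECRET_KEY \
        endpoint https://your-pando-endpoint.chpc.utah.edu

    # Copy a directory into a bucket on the archive.
    rclone copy ~/project_data pando:my-bucket/project_data --progress

    # List what is stored in the bucket.
    rclone ls pando:my-bucket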
Pando is currently the backend storage used for CHPC-provided automatic backups (e.g.,
home or group spaces that are backed up). As such, groups looking for additional data
resiliency that already have CHPC-provided backups should look for other options.
See
User Driven Backup Options
below.
It should also be noted that this archive storage space is for use in the General Environment and is not for use with regulated data; there is a separate archive space in the Protected Environment.
State-wide archive storage
With the aid of an NSF Campus Cyberinfrastructure (CC*) award, the CHPC has built
a prototype state-wide archive system. This system provides infrastructure that allows
researchers to satisfy the data sharing, resiliency, and retention requirements placed
on published and complete datasets. The system will also provide an opportunity for
researchers to explore sharing datasets with national data-sharing platforms (e.g.,
National Data Platform, NDP, and Open Science Data Federation, OSDF) to promote caching
datasets close to computational resources. This system can only house open data; that
is, it may only be used for data without security regulations or restrictions.
This system is composed of two pieces:
A disk-based object store built on the Ceph software stack and located at the Downtown
Data Center (DDC)
A tape-based library located at the Tonaquint Data Center (TDC) in St. George
Users interact with the object store at the DDC, called ARC-A (Archive-A), copying
their datasets to this system via an S3 interface with tools like Rclone and Globus.
The datasets are then automatically replicated to the Spectralogic BlackPearl tape
library system, called ARC-B, where two copies are written. ARC-A is 2.8 PB and ARC-B
is 7.2 PB in capacity.
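For instance, once access has been provisioned, copying a dataset to ARC-A might look like the following (the remote name and bucket are illustrative assumptions; actual endpoint details come from CHPC when an allocation is set up):

    # Assumes an rclone S3 remote named "arc-a" has already been configured,
    # analogous to the Pando example above.
    rclone copy ~/published_dataset arc-a:my-dataset-bucket --progress

    # Compare source and destination to verify the transfer.
    rclone check ~/published_dataset arc-a:my-dataset-bucket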
If you have large datasets that (a) have accessibility requirements, (b) are associated with a publication, or (c) are or will be broadly used by other institutions, we encourage you to apply for an allocation of space. Because this system
was created with grant funds, it will be provided free of charge for the life of the
grant. Beyond that window of time, there will be a charge per terabyte, which we are
working to determine. We are limiting allocations of space to 50 TB per group. To
apply for an allocation of space on this system, please
. As part of the application process, we ask that you provide a coarse manifest of what datasets you plan to store on the archive, along with a description of the broader significance of your data, the capacity you require, and the duration of any applicable data retention requirements.
User-driven backup options
Campus-level options for a backup location include Box and Microsoft OneDrive. Note: there is a UIT Knowledge Base article with information on the suitability of the campus-level options for different types of data (public/sensitive/restricted). Please follow these university guidelines to determine a suitable location for your data.
Owner backup to University of Utah Box: This is an option suitable for sensitive/restricted data. See the link to get more information about the limitations. If using rclone, the credentials expire and have to be reset periodically.
Owner backup to University of Utah Microsoft OneDrive: As with Box, this option is suitable for sensitive/restricted data. See the link above to get more information about the limitations.
Owner backup to CHPC archive storage (Pando in the General Environment and Elm in
the Protected Environment)
: This choice, mentioned in the archive storage section above, requires that the group
purchase the required space on CHPC's archive storage options.
Owner backup to other storage external to CHPC
Some groups have access to other storage resources, external to the CHPC, whether at the University of Utah or at other sites. The tools that can be used for doing this depend on the nature of the target storage.
There are a number of tools, mentioned on our
Data Transfer Services
page, that can be used to transfer data for backup. The tool best suited for transfers
to object storage file systems is
rclone
. Other tools include fpsync, a parallel version of rsync suited for transfers between
typical Linux "POSIX-like" file systems, and
Globus
, best suited for transfers to and from resources outside of the CHPC.
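As a rough illustration of two of these tools (remote names and paths are placeholders; an rclone remote must first be set up with rclone config):

    # rclone: back up a directory to a previously configured OneDrive remote
    # (the remote name "onedrive" and the paths are placeholders).
    rclone copy ~/project onedrive:chpc-backup/project --progress

    # fpsync: parallel rsync between POSIX-like file systems,
    # here using 8 concurrent workers (paths are placeholders).
    fpsync -n 8 ~/project /path/to/external/backup/project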
If you are considering a user-driven backup option for your data, CHPC staff are available for consultation at helpdesk@chpc.utah.edu.
Mounting CHPC storage
For making direct mounts of home and group space on your local machine, see the instructions
provided on our
Data Transfer Services
page.