User:Groceryheist/TACC

TACC is the high-performance computing center at the University of Texas. It offers several different resources. We are likely to use Stampede 3, an HPC resource similar to Hyak. Jetstream 2, an OpenStack cloud, may also be useful for data collection and other tasks that are well suited to a virtual webserver.

Stampede 3 complements Hyak in two significant ways. First, it is a time-sharing system rather than a "condo", which means we have a budget to spend on compute jobs. A major advantage of this model is the ability to run larger jobs that use many nodes; for example, we can start a Spark cluster large enough to fit a dataset in memory. A second advantage is that Stampede 3 has some nodes with GPUs (Intel Ponte Vecchio), which are designed for deep learning and very fast half- and single-precision floating point calculations. Stampede 3's disadvantage is that jobs have short maximum wall times (24 or 48 hours, depending on the type of node). Jobs that run longer than that need to implement checkpointing.
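
As a sketch, requesting nodes on Stampede 3 goes through a Slurm batch script much like on Hyak. The queue names (skx for Skylake CPU nodes, pvc for the Ponte Vecchio GPU nodes), the allocation name, and the launch commands below are illustrative assumptions; check the current Stampede 3 user guide for the real values.

    #!/bin/bash
    # Illustrative Stampede 3 batch script (queue, allocation, and commands are placeholders).
    #SBATCH --job-name=spark-test
    #SBATCH --partition=skx           # CPU queue; a GPU queue (e.g. pvc) would be used for GPU work
    #SBATCH --nodes=4                 # multi-node jobs are charged against our allocation
    #SBATCH --ntasks-per-node=1
    #SBATCH --time=24:00:00           # must stay within the queue's maximum wall time
    #SBATCH --account=OUR-ALLOCATION  # placeholder for the project's allocation name

    # Load software and launch the actual work here.
    srun ./run_analysis.sh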

The budget is renewable, but we have to do increasing amounts of paperwork and come under increasing scrutiny as it grows. If we do good work and are good stewards of the resource, renewing and growing the allocation should be straightforward. Note that Stampede 3 has several different types of nodes, and limits such as the maximum wall time depend on which node type a job requests.

TACC Filesystems

TACC users will work with 4 distributed filesystems:

  • Home is persistent and backed up, but not fast. Each user has a 15GB allocation. This is a good place to install software, code, and configuration. You can refer to your home directory in scripts using the $HOME environment variable. You can quickly navigate to your home directory using the cdh alias.
  • Work is persistent and fast. Each project has a 1TB allocation. This is a good place to store data that you are actively working with and need to persist. Use the $WORK environment variable and the cdw alias.
  • Corral is persistent, slower, but larger. It is inaccessible from compute nodes, so you need to copy data from Corral to work or scratch before working with it (see the sketch after this list). The environment variable is $CORRAL.
  • Scratch is fast and unlimited, but unaccessed files are automatically removed after 10 days. Use it for intermediate stages in a data analysis pipeline or for data that is too large for work. The environment variable is $SCRATCH and the command is cds.
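
A minimal sketch of staging data off Corral before a job (the dataset and result paths are hypothetical):

    # Stage a dataset from Corral (not visible to compute nodes) onto scratch,
    # run against the copy, then persist results to work.
    cp -r "$CORRAL/our-project/raw-dataset" "$SCRATCH/raw-dataset"

    # ... run the analysis against $SCRATCH/raw-dataset ...

    # Copy anything worth keeping back to work, since unaccessed scratch files
    # are purged after 10 days.
    cp -r "$SCRATCH/results" "$WORK/results"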

Running Long Jobs on Stampede 3

Stampede 3 has a lower maximum wall time than Hyak, typically 2 days. This means that if you have a job that requires more time to complete, you need to checkpoint and resume it. The key challenge in checkpoint/resume is saving your program's state and then resuming from that state. For example, if you are running a slow iterative algorithm, you can save the state after each iteration. When your program runs again, it can load that state and resume from the last iteration. Depending on your problem and the algorithm you want to run, checkpointing might be more or less difficult.
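
A minimal sketch of this pattern in shell, assuming the program's state can be reduced to an iteration counter in a file (a real program would also save its partial results; run_one_iteration is a hypothetical per-iteration command):

    #!/bin/bash
    # Checkpoint/resume sketch: the saved "state" is just an iteration counter.
    STATE_FILE="$WORK/checkpoint_iteration"
    TOTAL_ITERATIONS=1000

    # Resume from the last saved iteration, or start from 0 on the first run.
    if [ -f "$STATE_FILE" ]; then
        start=$(cat "$STATE_FILE")
    else
        start=0
    fi

    for (( i = start; i < TOTAL_ITERATIONS; i++ )); do
        ./run_one_iteration "$i"            # hypothetical per-iteration work
        echo "$(( i + 1 ))" > "$STATE_FILE" # checkpoint after each iteration
    done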

Checkpoint/resume is relatively easy to make work with Slurm. Wrap the main part of your program with a check to see whether the job is incomplete: load the saved state and continue if it is incomplete, and exit otherwise. When your job finishes in an incomplete state, have it submit a new copy of itself to the Slurm scheduler using Slurm's -d/--dependency flag, so the new job doesn't start until the current one has completely finished.
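
A sketch of the resubmission step, combining the checkpoint counter above with sbatch --dependency (the script name long_job.slurm, the wrapped program, and the state file are placeholders):

    #!/bin/bash
    #SBATCH --job-name=long-analysis
    #SBATCH --time=48:00:00

    STATE_FILE="$WORK/checkpoint_iteration"
    TOTAL_ITERATIONS=1000

    # Run the checkpointing program (sketched above) for this job's wall time.
    ./checkpointed_program.sh

    # If the work is not finished, queue a follow-up copy of this script that
    # waits (afterany) until the current job has ended before it starts.
    done_so_far=$(cat "$STATE_FILE" 2>/dev/null || echo 0)
    if [ "$done_so_far" -lt "$TOTAL_ITERATIONS" ]; then
        sbatch --dependency=afterany:"$SLURM_JOB_ID" long_job.slurm
    fi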