User:Groceryheist/TACC
From CommunityData
[https://tacc.utexas.edu TACC] is the high-performance computing center at the University of Texas. It offers several different resources. We are likely to use Stampede 3, an HPC resource similar to Hyak. Jetstream 2, an OpenStack cloud, may also be useful for data collection and other tasks that are well suited to a virtual webserver.

Stampede 3 complements Hyak in two significant ways. First, it is a time-sharing system rather than a "condo": we have a budget to spend on compute jobs. A major advantage of this model is the ability to run larger jobs that use many nodes; for example, we can start a Spark cluster large enough to fit a dataset in memory. Second, Stampede 3 has some nodes with GPUs (Intel Ponte Vecchio), which are designed for deep learning and very fast half- and single-precision floating point calculations. Stampede 3's disadvantage is that jobs have short maximum wall times (24 or 48 hours, depending on the type of node), so jobs that run longer need to implement checkpointing. The budget is renewable, but we have to do increasing amounts of paperwork and come under increasing scrutiny as it grows. If we do good work and are good stewards of the resource, it should be renewed. Stampede 3 has several different types of nodes.

== TACC Filesystems ==

TACC users will work with four distributed filesystems:

* '''Home''' is persistent, not fast, and backed up. Each user has a 15GB allocation. This is a good place to install software, code, and configuration. You can refer to your home directory in scripts using the <code>$HOME</code> environment variable, and navigate to it quickly with the <code>cdh</code> alias.
* '''Work''' is persistent and fast. Each ''project'' has a 1TB allocation. This is a good place to store data that you are actively working with and need to persist. Use the <code>$WORK</code> environment variable and the <code>cdw</code> alias.
* '''Corral''' is persistent, slower, but larger. It is inaccessible from compute nodes, so you need to copy data from Corral to Work or Scratch before working with it. The environment variable is <code>$CORRAL</code>.
* '''Scratch''' is fast and unlimited, but files that have not been accessed for 10 days are automatically removed. Use it for intermediate stages in a data analysis pipeline or for data that is too large for Work. The environment variable is <code>$SCRATCH</code> and the alias is <code>cds</code>.

== Running Long Jobs on Stampede 3 ==

Stampede 3 has a lower maximum wall time than Hyak, typically 2 days. If a job requires more time than that to complete, you need to checkpoint and resume it. The key challenge in checkpoint/resume is saving your program's state and then resuming from that state. For example, if you are running a slow iterative algorithm, you can save the state after each iteration; when your program runs again, it loads that state and resumes from the last completed iteration. Depending on your problem and the algorithm you want to run, checkpointing might be more or less difficult.

Checkpoint/resume is relatively easy to make work with Slurm. Wrap the main part of your program in a check to see whether the job is incomplete: load the saved state and continue if it is incomplete, and exit otherwise. When your job finishes in an incomplete state, have it run a script that submits a new copy of itself to the Slurm scheduler, using Slurm's <code>-d, --dependency</code> flag to make sure the new job doesn't start until the current one is totally complete.
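The checkpoint/resume pattern described above can be sketched in Python. This is a minimal sketch, not TACC-specific code: the state file name, the per-iteration "work", and the per-run iteration budget are hypothetical stand-ins, and the <code>sbatch</code> resubmission is shown only as a comment.

```python
import json
import os

# Hypothetical checkpoint location; on Stampede 3 this would live under $WORK.
STATE_FILE = "checkpoint.json"

def load_state():
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"iteration": 0, "total": 0}

def save_state(state):
    # Write to a temp file and rename atomically, so a wall-time kill
    # mid-write cannot leave a corrupt checkpoint behind.
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

def run(max_iterations=10, budget=4):
    # 'budget' stands in for however many iterations fit in one wall-time window.
    state = load_state()
    done_this_run = 0
    while state["iteration"] < max_iterations and done_this_run < budget:
        state["total"] += state["iteration"]  # placeholder for real work
        state["iteration"] += 1
        save_state(state)                     # checkpoint after every iteration
        done_this_run += 1
    if state["iteration"] < max_iterations:
        # Incomplete: a real job script would now resubmit itself, e.g.
        # subprocess.run(["sbatch", "--dependency=afterany:" + os.environ["SLURM_JOB_ID"], "job.sh"])
        return "incomplete"
    os.remove(STATE_FILE)  # clean up so a fresh submission starts over
    return "complete"
```

Calling <code>run()</code> repeatedly mimics successive Slurm jobs: each call does at most <code>budget</code> iterations, checkpoints, and reports whether the overall computation is finished.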