CommunityData:Hyak
To use Hyak, you must first have a UW NetID, access to Hyak, and a two-factor authentication token, which you will need as part of [[CommunityData:Hyak setup|getting set up]]. The following links will be useful.
* [[CommunityData:Hyak setup]]
* [[CommunityData:Hyak Mox migration]]
* [[CommunityData:Hyak
* [[CommunityData:Hyak software installation]]
There are a number of other sources of documentation:
* [http://students.washington.edu/hpcc/using-hyak/information-for-beginner-users/slides-from-training-sessions/ Slides from the UW HPC Club]
* [http://wiki.hyak.uw.edu Hyak User Documentation]
== Setting up SSH ==
=== X11 forwarding ===
You may also want to add these two lines to your Hyak <code>.ssh/config</code> (indented under the line starting with "Host"):
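For reference, OpenSSH's standard X11 forwarding options look like this (a sketch, not necessarily our exact settings):

<syntaxhighlight lang='text'>
  # standard OpenSSH options to forward X11 connections
  ForwardX11 yes
  ForwardX11Trusted yes
</syntaxhighlight>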
It will prompt you for your UW NetID password. Once you type in your password, you will have to respond to a [https://itconnect.uw.edu/security/uw-netids/2fa/ 2-factor authentication request].
== Setting up the Hyak environment ==
Everybody who uses Hyak as part of our group '''must''' add the following line to their <code>~/.bashrc</code> file on Hyak:
<source lang="bash">
source /gscratch/comdata/env/cdsc_mox_bashrc
</source>
This line will load scripts that will initialize a good data science environment and set the [[:wikipedia:umask|umask]] so that the files and directories you create are readable by others in the group. '''Please do this immediately before you do any other work on Hyak.''' When you are done, you can reload the shell by logging out and back into Hyak or by running <code>exec bash</code>.
== Using the CDSC Hyak Environment ==
By default you have access to a home directory with a relatively small quota. There are several dozen terabytes of CDSC-allocated storage in <code>/gscratch/comdata/</code> and you should explore that space. Typically we download large datasets to <code>/gscratch/comdata/raw_data</code> (see [[#Downloading new datasets|downloading new datasets]] below), keep processed data in <code>/gscratch/comdata/output</code>, and put personal workspaces that need large data storage in <code>/gscratch/comdata/users/'''<YOURNETID>'''</code>.
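For example, to set up a personal workspace for a project (a sketch; <code>my_project</code> is a made-up name):

<syntaxhighlight lang='bash'>
# create a personal project workspace under the shared CDSC storage
# ($USER expands to your UW NetID on Hyak; 'my_project' is hypothetical)
mkdir -p /gscratch/comdata/users/$USER/my_project
</syntaxhighlight>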
=== Basic Commands ===
Displays jobs by members of the group.
Read the files in <code>/gscratch/comdata/env</code> to see how these commands are created, as well as other features not documented here.
=== Anaconda ===
We recently switched to using Anaconda to manage Python on Hyak. Anaconda comes with the <code>conda</code> tool for managing python packages and versions. Multiple python environments can co-exist in a single Anaconda installation; this allows different projects to use different versions of Python or python packages, which can be useful for maintaining projects that use old versions.
By default, our shared setup loads a conda environment called <code>minimal_ds</code> that provides recent versions of python packages commonly used in data science workflows. This is probably a good setup for most use cases, and allows everyone to use the same packages, but it can be even better to create different environments for each project. See the [https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands anaconda documentation for how to create an environment].
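For instance, here is a minimal sketch of that workflow (the environment name and package list are just examples, not a prescribed setup):

<syntaxhighlight lang='bash'>
# create a project-specific environment with its own python and packages
conda create --name my_project python=3.8 numpy pandas

# work inside the new environment
conda activate my_project

# return to the shared default environment when done
conda activate minimal_ds
</syntaxhighlight>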
=== SSH into compute nodes ===
The [https://wiki.cac.washington.edu/display/hyakusers/Hyak_ssh Hyak wiki] has instructions for enabling ssh within Hyak, reproduced below:
<blockquote>
You should be able to ssh from the login node to a compute node without giving a password. If it does not work, follow the steps below:
1) <code>ssh-keygen</code>

Press enter for each question. This will ensure default options.

2) <code>cd .ssh</code>

3) <code>cat id_rsa.pub >> authorized_keys</code>
</blockquote>
== Running Jobs on Hyak ==
The Slurm scheduler provides a command called [https://slurm.schedmd.com/scancel.html scancel] to terminate jobs. For example, you might run <tt>queue_state</tt> from a login node to figure out the ID number for your job (let's say it's 12345), then run <tt>scancel --signal=TERM 12345</tt> to send a SIGTERM signal or <tt>scancel --signal=KILL 12345</tt> to send a SIGKILL signal that will bring job 12345 to an end.
=== Parallel R ===
The nodes on Hyak have 28 CPU cores, which can speed up your analysis ''significantly''. If you are using R functions such as <code>lapply</code>, there are parallelized equivalents (e.g. <code>mclapply</code>) which can take advantage of all the cores and give you a 2800% boost! However, be aware of your code's memory requirements: if you are running 28 processes in parallel, your memory needs can also go up to 28x, which may be more than the ~200GB that the <code>big_machine</code> node will have. In such cases, you may want to dial down the number of CPU cores being used. A way to do that globally in your code is to run the following snippet before calling any of the parallelized functions.
<source lang="r">
## cap the number of cores used by parallel functions (adjust to fit your memory budget)
options(mc.cores=8)
</source>
More information on parallelizing your R code can be found in the [https://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf <code>parallel</code> package documentation].
<!-- The hyak machines have 16 cpu cores. The Mox machines will have 28! Running your program on all the cores can speed things up a lot! We make heavy use of R for building datasets and for fitting models. Like most programming languages, R uses only one cpu by default. However, for typical computation-heavy data science tasks it is pretty easy to make R use all the cores.

For fitting models, the R installed in Gentoo should use all cores automatically. This is thanks to OpenBlas, which is a numerical library that implements and parallelizes linear algebra routines like matrix factorization, matrix inversion, and other operations that bottleneck model fitting.

However, for building datasets, you need to do a little extra work. One common strategy is to break up the data into independent chunks (for example, when building wikia datasets there is one input file for each wiki) and then use <code>mclapply</code> from <code>library(parallel)</code> to build variables from each chunk. Here is an example:

 library(parallel)
 options(mc.cores=detectCores()) ## tell R to use all the cores
 mcaffinity(1:detectCores()) ## required and explained below
 library(data.table) ## for rbindlist, which concatenates a list of data.tables into a single data.table
 ## imagine defining a list of wikis to analyze
 ## and a function to build variables for each wiki
 source("wikilist_and_buildvars")
 dataset <- rbindlist(mclapply(wikilist,buildvars))
 mcaffinity(rep(1,detectCores())) ## return processor affinities to the status preferred by OpenBlas

A working example can be found in the [[Message Walls]] git repository.

<code>mcaffinity(1:detectCores())</code> is required for the gentoo R <code>library(parallel)</code> to use multiple cores. The reason is technical and has to do with OpenBlas. Essentially, OpenBlas changes settings that govern how R assigns processes to cores. OpenBlas wants all processes assigned to the same core, so that the other cores do not interfere with its fancy multicore linear algebra. However, when building datasets, the linear algebra is not typically the bottleneck. The bottleneck is instead operations like sorting and merging that OpenBlas does not parallelize.

The important thing to know is that if you want to use mclapply, you need to do <code>mcaffinity(1:detectCores())</code>. If you want to then fit models you should do <code>mcaffinity(rep(1,detectCores()))</code> so that OpenBlas can do its magic. -->
=== Using the Checkpoint Queue ===
Hyak has a special way of scheduling jobs using the '''checkpoint queue'''. When you run jobs on the checkpoint queue, they run on someone else's hyak node that they aren't using right now. This is awesome as it gives us a huge amount of free (as in beer) computing. But using the checkpoint queue does take some effort, mainly because your jobs can get killed at any time if the owner of the node checks it out. So if you want to run a job for more than a few minutes on the checkpoint queue it will need to be able to "checkpoint" by saving its state periodically and then restarting.
This would be a pain to do manually; fortunately, we have <code>[http://dmtcp.sourceforge.net/FAQ.html dmtcp]</code>, which can automatically checkpoint and resume most programs. Nate got dmtcp working for arbitrary scripts, and also with wikiq using parallel_sql. dmtcp 3.0 is installed on Mox.

This will make more sense if you know that dmtcp works by starting a '''coordinator''' process which is responsible for pausing and saving the checkpointed process. A [https://hpcc.usc.edu/support/documentation/checkpointing/ tutorial on dmtcp with slurm from USC] has a bash function for starting the coordinator called <code>start_dmtcp_coordinator</code>. Nate added this function to the shared .bashrc, so it should be available in your environment on Mox.
==== Starting a checkpoint queue job ====
#SBATCH --partition=ckpt
You might have other stuff in your SBATCH script to request a certain number of cores or memory. Those will matter when we run <code>wikiq</code> below, but here they can be whatever they would be if you were running an <code>sbatch</code> job on one of our machines. The next thing you need to do specifically for a <code>ckpt</code> job is to run <code>start_dmtcp_coordinator</code>. This function takes care of making sure that we start a coordinator using the right set of ports and temporary files. We still need to pass in the '''interval''' at which we want checkpoints: the bigger this interval, the faster your job will run, but the more work will be lost when it is interrupted.
<syntaxhighlight lang='bash'>
start_dmtcp_coordinator -i 600 # checkpoint every 10 minutes
</syntaxhighlight>
Next you need to run your job in a special way so that it is managed by <code>dmtcp</code> and restarted if it gets interrupted.
<syntaxhighlight lang='bash'>
# The restart script is created by dmtcp_launch after initialization
if [ -x dmtcp_restart_script.sh ]; then
    bash dmtcp_restart_script.sh
else
    # On first pass, run program under DMTCP
    dmtcp_launch --rm $your_script.sh # must run interpreter for scripts
fi
</syntaxhighlight>
This works because <code>dmtcp_restart_script.sh</code> is created when you launch your job using <code>dmtcp_launch</code>. If that script exists, your job should run it instead of launching the program again.

There are options that you can pass to <code>dmtcp_launch</code> that can be important. In particular, <code>--checkpoint-open-files</code> and <code>--allow-file-overwrite</code> modify how IO is checkpointed.
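For example, a hypothetical launch using both options (the script name is made up; see the dmtcp documentation for exactly how these options change IO checkpointing):

<syntaxhighlight lang='bash'>
# checkpoint a job that keeps input files open and rewrites its output in place
dmtcp_launch --rm --checkpoint-open-files --allow-file-overwrite ./my_analysis.sh
</syntaxhighlight>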
==== Running wikiq with dmtcp and parallel_sql ====
To run wikiq with parallel_sql, the following need to be arranged:

# A shell script for each dumpfile that makes a workspace for <code>dmtcp</code> to keep its data and restart script.
# These shell scripts loaded into <code>parallel_sql</code>.
# An <code>sbatch</code> script that gets a checkpoint node and starts running jobs from <code>parallel_sql</code>.
# A way to restart jobs that get interrupted, using <code>parallel_sql</code>.
Nate made a python script that generates the shell scripts and writes a file listing all of them. Notice that each dumpfile gets a script, its own checkpoint directory, and a line in <code>wikiq_parallel_jobs.sh</code>.
<syntaxhighlight lang='python'>
#!/usr/bin/env python3
from os import path
import os
import stat
import glob

archives = glob.glob("/gscratch/comdata/raw_data/wikia_dumps/2010-04-mako/*.xml.7z")

scripts_dir = '/gscratch/comdata/users/nathante/wikiq_parallel_scripts'
output_dir = '/gscratch/comdata/users/nathante/wikiq_output'
checkpoint_dir = '/gscratch/comdata/users/nathante/wikiq_checkpoint'

if not path.isdir(scripts_dir):
    os.mkdir(scripts_dir)

if not path.isdir(output_dir):
    os.mkdir(output_dir)

# each generated script works in its own checkpoint directory and either
# resumes from an existing dmtcp restart script or launches wikiq fresh
script = """#!/bin/bash
mkdir -p {0}
cd {0}
start_dmtcp_coordinator -i 60 # checkpoint every 60 seconds
if [ -x dmtcp_restart_script.sh ]; then
    bash dmtcp_restart_script.sh
else
    # On first pass, run program under DMTCP
    dmtcp_launch --rm {1}
fi
"""

with open("wikiq_parallel_jobs.sh", 'w') as calls:
    for dumpfile in archives:
        wikiq_call = f"wikiq -u -o {output_dir} {dumpfile}"
        wiki = path.split(dumpfile)[1]
        wikiq_script = script.format(path.join(checkpoint_dir, wiki), wikiq_call)

        script_file = path.join(scripts_dir, wiki + '.sh')
        with open(script_file, 'w') as of:
            of.write(wikiq_script)
        os.chmod(script_file, os.stat(script_file).st_mode | stat.S_IEXEC)

        calls.write(script_file)
        calls.write('\n')
</syntaxhighlight>
We also need an sbatch script, <code>parallel_sql_job.sh</code>.
<syntaxhighlight lang='bash'>
#!/bin/bash
## parallel_sql_job.sh
#SBATCH --job-name=wikiq_dmtcp
## Allocation Definition
#SBATCH --account=comdata-ckpt
#SBATCH --partition=ckpt
## Resources
## Nodes. This should always be 1 for parallel-sql.
#SBATCH --nodes=1
## Walltime (12 hours)
#SBATCH --time=12:00:00
## Memory per node
#SBATCH --mem=100G

module load parallel_sql

# Put commands to load other modules here (e.g. matlab etc.)
# The command below makes parallel_sql get tasks from the database
# and run them on the node (in parallel). So a 16-core node will have
# 16 tasks running at one time.
parallel-sql --sql -a parallel --exit-on-term
</syntaxhighlight>
Next, load the scripts into <code>parallel_sql</code>:
<syntaxhighlight lang='bash'>
module load parallel_sql
cat wikiq_parallel_jobs.sh | psu --load
</syntaxhighlight>
We can now fire up a whole bunch of checkpoint nodes. The limit is technically 2000! But let's just ask for 10 nodes :)
<syntaxhighlight lang='bash'>
for job in $(seq 1 10); do sbatch parallel_sql_job.sh; done
</syntaxhighlight>
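To keep an eye on how many of those jobs are actually running, you can use the standard Slurm <code>squeue</code> command (nothing specific to our setup):

<syntaxhighlight lang='bash'>
# list your own jobs on the checkpoint partition
squeue -u $USER -p ckpt
</syntaxhighlight>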
If our jobs get interrupted, we'll need to run <code>psu --reset-slurm</code> to set them back into the '''avail''' state. We can run a little script on a login node to do this automatically every minute or so.
<syntaxhighlight lang='python'>
#!/usr/bin/env python3
## auto_reset_psu.py
import time
import subprocess

# while any tasks are still running, periodically reset interrupted tasks to 'avail'
running = subprocess.run(["psu", "--show-running"], universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(running)

while hasattr(running, 'stdout') and len(running.stdout) > 0:
    subprocess.run(["psu", "--reset-slurm"])
    time.sleep(60)
    running = subprocess.run(["psu", "--show-running"], universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</syntaxhighlight>
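One way to keep that script alive after you log out of the login node is <code>nohup</code> (standard shell usage; the log file name is arbitrary):

<syntaxhighlight lang='bash'>
# run auto_reset_psu.py in the background and keep it running after logout
nohup python3 auto_reset_psu.py > auto_reset_psu.log 2>&1 &
</syntaxhighlight>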
That's it! Unleash the power of the checkpoint queue! Reach out to Nate if you try this and have problems or if you have any questions!