Getting familiar with the MOX scheduler:

* Check out UW-IT's detailed documentation on using the [https://wiki.cac.washington.edu/display/hyakusers/Mox_scheduler Hyak wiki]
* For reference, the system that UW uses is called [https://slurm.schedmd.com/documentation.html Slurm] and you can find lots of other information on it online.
* Some useful commands are:
* <code>sinfo -p comdata</code> — information about our allocation
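A few other standard Slurm commands are also worth knowing (shown here against the same <code>comdata</code> partition used above; <code>JOBID</code> is a placeholder for a real job ID):

 $ squeue -p comdata    # list jobs running or queued on our allocation
 $ squeue -u $USER      # list only your own jobs
 $ scancel JOBID        # cancel one of your jobs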
Running interactive jobs is relatively straightforward:

# Run [https://linux.die.net/man/1/screen screen] or [https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/ tmux] to maintain connections over time ([[CommunityData:tmux|CDSC tmux cheatsheet]])
# We have four pre-defined ways to check out nodes using aliases in the shared bashrc:
:* <code>int_machine</code> — interactive machine (cpu cores of this machine are shared with the group, but memory you allocate will be dedicated to you) '''[USE THIS FIRST!]'''
:* <code>any_machine</code> — dedicated interactive machine
:* <code>big_machine</code> — dedicated interactive machine with large amounts of memory
:* <code>build_machine</code> — interactive machine with an Internet connection for building R modules and so on
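Each of these aliases is essentially a wrapper around a single <code>srun</code> call. As a rough sketch (the flag values here are pulled from elsewhere on this page and may not match the real aliases exactly; the authoritative definitions are in the shared bashrc), checking out a node looks something like:

 $ srun -p comdata --time=200:00:00 --mem=24G --pty /bin/bash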
=== Solving common Mox node woes ===
==== I'm getting errors about not being able to access the Internet from my node ====
Only machines using the <code>build_machine</code> profile have Internet access. You can install new software using a build node and it will be immediately available on the other nodes.
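For example, installing an R package from a build node might look like this (a minimal sketch; <code>data.table</code> is just an illustrative package name):

 $ build_machine
 $ R
 > install.packages("data.table")   # needs Internet access, so run this from a build node
 > q()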
==== It looks like there are no nodes available? ====
# Try the <code>int_machine</code>.
# Try a shorter lease time. When you check out an interactive node with <code>any_machine</code>, for example, you're essentially leasing it for some amount of time, and if that time overlaps with a scheduled Hyak maintenance, your command will hang in the terminal. To change the lease time, first see the contents of the alias by typing <code>which <node-alias></code>, e.g. <code>which any_machine</code>, and you'll see a <code>--time=$walltime</code> flag. <code>echo $walltime</code> will tell you that the current walltime is a large number, like 200:00:00 (200 hours!). Copy-paste the alias contents (starting with <code>srun</code>, without the single quotes) and set the time to something smaller (see the example after this list).
# Check <code>ourjobs</code> to see who is using the other nodes, and ask on the IRC channel to see if anyone can free up a node.
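For example (building on the hedged <code>srun</code> sketch above; your copied alias contents will likely include additional flags), checking out a node for four hours instead of the full walltime might look like:

 $ srun -p comdata --time=04:00:00 --mem=24G --pty /bin/bash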
==== My job on int_machine is getting killed, or doesn't have enough memory ====
When you request an <code>int_machine</code> with <code>srun</code>, the default memory allocation is 24G. Try a higher number. Technically the max is 240G, but that would mean no one else in the group can have any memory if they need to access an <code>int_machine</code>, so ask for no more than 216G unless you're able to vacate the node right away if asked.
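For example (the same hedged <code>srun</code> sketch as above, just asking for more memory; 64G is an arbitrary illustration):

 $ srun -p comdata --time=200:00:00 --mem=64G --pty /bin/bash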
==== My job is running out of time and I'd like to add more ====
{{forthcoming}}
=== Running a job across many cores using GNU R's parallelization features ===
The Mox machines have 28 cores. Running your program on all the cores can speed things up a lot. We make heavy use of R for building datasets and for fitting models. Like most programming languages, R uses only one CPU by default. However, for typical computation-heavy data science tasks it is pretty easy to make R use all the cores.
For fitting models, the R installation on Hyak should use all cores automatically. This is thanks to OpenBLAS, a numerical library that implements and parallelizes linear algebra routines like matrix factorization, matrix inversion, and other operations that bottleneck model fitting.
However, for building datasets, you need to do a little extra work. One common strategy is to break up the data into independent chunks (for example, when building wikia datasets there is one input file for each wiki) and then use <code>mclapply</code> from <code>library(parallel)</code> to build variables from each chunk. Here is an example:
    library(parallel)
    options(mc.cores=detectCores())  ## tell R to use all the cores
   
    mcaffinity(1:detectCores()) ## required and explained below
 
    library(data.table) ## for rbindlist, which concatenates a list of data.tables into a single data.table
   
    ## imagine defining a list of wikis to analyze
    ## and a function to build variables for each wiki
    source("wikilist_and_buildvars")
   
    dataset <- rbindlist(mclapply(wikilist, buildvars))
   
    mcaffinity(rep(1, detectCores())) ## return processor affinities to the status preferred by OpenBLAS
A working example can be found in the [[Message Walls]] git repository.
<code>mcaffinity(1:detectCores())</code> is required for the R <code>library(parallel)</code> to use multiple cores. The reason is technical and has to do with OpenBLAS. Essentially, OpenBLAS changes settings that govern how R assigns processes to cores. OpenBLAS wants all processes assigned to the same core, so that the other cores do not interfere with its fancy multicore linear algebra. However, when building datasets, the linear algebra is not typically the bottleneck. The bottleneck is instead operations like sorting and merging that OpenBLAS does not parallelize.
The important thing to know is that if you want to use mclapply, you need to do <code>mcaffinity(1:detectCores())</code>. If you want to then fit models you should do <code>mcaffinity(rep(1,detectCores()))</code> so that OpenBLAS can do its magic.
=== Running jobs across many cores with GNU parallel ===
Generate a task list:
$ find ./input/ -mindepth 1 | xargs -I {} echo "python3 /com/local/bin/wikiq {} -o ./output" > task_list
Run:
$ parallel < task_list
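By default <code>parallel</code> runs one task per available core. To cap that explicitly (for example at the 28 cores a Mox node has), use the <code>--jobs</code> flag:

 $ parallel --jobs 28 < task_list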
Connect to your node with ssh to check on it:
$ ssh '''n0648'''
$ htop


== Batch Jobs ==
{{notice|This information is not fully updated yet. We'll cover this next week!}}
=== Setup for running batch jobs on Hyak (only needs to be done once) ===

1. Create a users directory for yourself in /gscratch/comdata/users:

You will want to store the output of your script in /gscratch/comdata, or you will run out of space in your personal filesystem (/usr/lusers/...)

  $ mkdir /gscratch/comdata/users/USERNAME  # Replace USERNAME with your user name


2. Create a batch_jobs directory

  $ mkdir /gscratch/comdata/users/USERNAME/batch_jobs


3. Create a symlink from your home directory to this directory (this lets you use the /gscratch/comdata storage from the more convenient home directory)

  $ ln -s /gscratch/comdata/users/USERNAME/batch_jobs ~/batch_jobs


4. Create a user in parallel SQL