CommunityData:Hyak
To use Hyak, you must first have a UW NetID, access to Hyak, and a two-factor authentication token, which you will need as part of getting set up. The following links will be useful:

 * CommunityData:Klone (for the new Hyak nodes)
 * CommunityData:Hyak setup
 * CommunityData:Hyak software installation
 * CommunityData:Hyak Spark
 * CommunityData:Hyak Mox migration
 * CommunityData:Hyak Ikt (Deprecreated)
 * CommunityData:Hyak Datasets

There are a number of other sources of documentation beyond this wiki:

 * Hyak User Documentation (http://wiki.hyak.uw.edu)

General Introduction to Hyak

The UW Research Computing Club has put together an excellent 90-minute training video that introduces Hyak (https://depts.washington.edu/uwrcc/getting-started-2/hyak-training/). It's probably a good place to start for anybody trying to get up and running on Hyak.

Setting up SSH

When you connect to SSH, it will ask you for a key from your token. Typing this in every time you start a connection can be a pain. One approach is to create an .ssh config file that will create a "tunnel" the first time you connect and send all subsequent connections to Hyak over that tunnel. Some details are in the Hyak documentation.

I've added the following config to the file ~/.ssh/config on my laptop (you will want to change the username):

 Host hyak mox2.hyak.uw.edu
     User <YOURNETID>
     HostName mox2.hyak.uw.edu
     ControlPath ~/.ssh/master-%r@%h:%p
     ControlMaster auto
     ControlPersist yes
     Compression yes

Note: If your SSH connection becomes stale or disconnected (e.g., if you change networks) it may take some time for the connection to time out. Until that happens, any connections you make to Hyak will silently hang. If your connections to ssh hyak are silently hanging but your Internet connection seems good, look for ssh processes running on your local machine with:

ps ax|grep hyak

If you find any, kill them with kill <PROCESSID>. Once that is done, you should have no problem connecting to Hyak.

X11 forwarding

You may also want to add these two lines to your Hyak .ssh/config (indented under the line starting with "Host"):

ForwardX11 yes
ForwardX11Trusted yes

These lines mean that if you have "checked out" an interactive machine, you can ssh from your computer to Hyak and then directly through an additional hop to the machine (e.g., ssh n2347). The ForwardX11 lines mean that if you graph things in this session, the plots will open on your local display.
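
For example, assuming the config above and an interactive node you have already checked out (the node name below is just an illustration):

 ssh hyak        # from your laptop; picks up the ForwardX11 settings above
 ssh n2347       # from the login node, hop to your checked-out compute node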

Connecting to Hyak

To connect to Hyak, you now only need to do:

ssh hyak

It will prompt you for your UW NetID password. Once you type in your password, you will have to respond to a 2-factor authentication request (https://itconnect.uw.edu/security/uw-netids/2fa/).

Setting up your Hyak environment

Everybody who uses Hyak as part of our group must add the following line to their ~/.bashrc file on Hyak:

source /gscratch/comdata/env/cdsc_mox_bashrc

If you don't have a preferred terminal-style text editor, you might start with nano -- nano ~/.bashrc, arrow down, paste in the 'source....' text from above, then ^O to save and ^X to exit. You'll know you were successful when you type more ~/.bashrc and see the 'source....' line at the bottom of the file. Copious information about use of a terminal-style text editor is available online -- common options include nano (basic), emacs (tons of features), and vim (fast).

This line will load scripts that will initialize a good data science environment and set the umask so that the files and directories you create are readable by others in the group. Please do this immediately before you do any other work on Hyak. When you are done, you can reload the shell by logging out and back into Hyak or by running exec bash.
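
If you prefer a one-liner to editing the file by hand, the following sketch (assuming a standard bash setup) appends the line and reloads your shell:

 echo 'source /gscratch/comdata/env/cdsc_mox_bashrc' >> ~/.bashrc   # add the shared environment
 exec bash                                                          # reload the shell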

Using the CDSC Hyak Environment

Storing Files

By default you have access to a home directory with a relatively small quota. There are several dozen terabytes of CDSC-allocated storage in /gscratch/comdata/ and you should explore that space. Typically we download large datasets to /gscratch/comdata/raw_data (see the section on new datasets below), store processed data in /gscratch/comdata/output, and keep personal workspaces that need large data storage in /gscratch/comdata/users/<YOURNETID>.
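
For example, to create a personal project workspace (the project name here is a placeholder):

 mkdir -p /gscratch/comdata/users/$USER/my_project   # $USER is your UW NetID; my_project is hypothetical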

Basic Commands

Once you have loaded the shared environment, it provides modern versions of R and Python, places Spark in your environment, and adds a number of convenient commands for interacting with the SLURM HPC system for checking out nodes and monitoring jobs. Particularly important commands include:

 any_machine

which attempts to check out a supercomputing node.

 big_machine

Requests a node with 240GB of memory.

 build_machine

Checks out a build node which can access the internet and is intended to be used to install software.

 ourjobs

Prints all the running jobs by people in the group.

 myjobs

Displays jobs by members of the group.

Read the files in /gscratch/comdata/env to see how these commands are created (or run which) as well as other features not documented here.
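
For example, to see where the shared commands come from:

 ls /gscratch/comdata/env                      # shared environment scripts live here
 less /gscratch/comdata/env/cdsc_mox_bashrc    # the file your ~/.bashrc sources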

Anaconda

We recently switched to using Anaconda to manage Python on Hyak. Anaconda comes with the `conda` tool for managing python packages and versions. Multiple python environments can co-exist in a single Anaconda installation, which allows different projects to use different versions of Python or python packages; this can be useful for maintaining projects that use old versions.

By default, our shared setup loads a conda environment called `minimal_ds` that provides recent versions of python packages commonly used in data science workflows. This is probably a good setup for most use cases, and allows everyone to use the same packages, but it can be even better to create different environments for each project. See the anaconda documentation for how to create an environment (https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands).
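
As a sketch, creating and using a per-project environment might look like this (the environment name and packages are examples):

 conda create --name my_project python=3.9 numpy pandas   # make a new environment
 conda activate my_project                                 # switch to it
 conda install scikit-learn                                # add packages as needed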

To learn how to install Python packages, see the Python packages installation instructions on the CommunityData:Hyak software installation page of this wiki.

SSH into compute nodes

The Hyak wiki (https://wiki.cac.washington.edu/display/hyakusers/Hyak_ssh) has instructions for how to enable ssh within Hyak. Reproduced below:

You should be able to ssh from the login node to a compute node without giving a password. If it does not work, then follow the steps below:

  1. ssh-keygen then press enter for each question. This will ensure default options.
  2. cd ~/.ssh
  3. cat id_rsa.pub >> authorized_keys

Running Jobs on Hyak

When you first log in to Hyak, you will be on a "login node". These are nodes that have access to the Internet, and can be used to update code, move files around, etc. They should not be used for computationally intensive tasks. To actually run jobs, there are a few different options, described in detail in the Hyak User documentation. Following are basic instructions for some common use cases.

Interactive nodes

Interactive nodes are systems where you get a bash shell from which you can run your code. This mode of operation is conceptually similar to running your code on your own computer, the difference being that you have access to much more CPU and memory. To check out an interactive node, run the big_machine or any_machine command from your login shell. Before running these commands, you will want to be in a tmux or screen session so that you can start your job, and log off without having to worry about your job getting terminated.
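
A typical interactive session might look like this (the tmux session name is arbitrary):

 tmux new -s analysis    # start tmux on the login node so your session survives logout
 any_machine             # request an interactive compute node
 # work in the shell that opens on the compute node; detach with Ctrl-b d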

Note: At any given time, unless you are using the ckpt (formerly the bf) queue, our entire group can collectively have one instance of big_machine and three instances of any_machine running at the same time. You may need to coordinate over IRC if you need to use a specific node for any reason.

Killing jobs on compute nodes

The Slurm scheduler provides a command called scancel to terminate jobs. For example, you might run queue_state from a login node to figure out the ID number for your job (let's say it's 12345), then run scancel --signal=TERM 12345 to send a SIGTERM signal or scancel --signal=KILL 12345 to send a SIGKILL signal that will bring job 12345 to an end.
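
As a quick reference (12345 is a placeholder job ID):

 squeue -u $USER                 # or queue_state; find the ID of your job
 scancel --signal=TERM 12345     # ask the job to shut down cleanly
 scancel --signal=KILL 12345     # force-kill it if it ignores SIGTERM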

Parallel R

The nodes on Hyak have 28 CPU cores. These may help in speeding up your analysis significantly. If you are using R functions such as lapply, there are parallelized equivalents (e.g. mclapply) which can take advantage of all the cores and give you up to a 2800% boost! However, something to be aware of here is your code's memory requirement: if you are running 28 processes in parallel, your memory needs can also go up to 28x, which may be more than the ~200GB that the big_machine node will have. In such cases, you may want to dial down the number of CPU cores being used. A way to do that globally in your code is to run the following snippet of code before calling any of the parallelized functions.

library(parallel)
options(mc.cores=20)  ## tell the mc* functions to use 20 cores unless otherwise specified
mcaffinity(1:20)  ## allow worker processes to be scheduled across cores 1-20

More information on parallelizing your R code can be found in the parallel package documentation (https://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf).

Using the Checkpoint Queue

Hyak has a special way of scheduling jobs using the checkpoint queue. When you run jobs on the checkpoint queue, they run on someone else's Hyak node that they aren't using right now. This is awesome as it gives us a huge amount of free (as in beer) computing. But using the checkpoint queue does take some effort, mainly because your jobs can get killed at any time if the owner of the node checks it out. So if you want to run a job for more than a few minutes on the checkpoint queue, it will need to be able to "checkpoint" by saving its state periodically and then restarting.

This would be a pain to do manually; fortunately, we have dmtcp (http://dmtcp.sourceforge.net/FAQ.html), which can automatically checkpoint and resume most programs.

Nate got dmtcp working for arbitrary scripts, and also with wikiq using parallel_sql.

dmtcp 3.0 is installed on Mox.


This will make more sense if you know that dmtcp works by starting a coordinator process which is responsible for pausing and saving the checkpointed process. A tutorial on dmtcp with Slurm from USC (https://hpcc.usc.edu/support/documentation/checkpointing/) has a bash function for starting the coordinator called start_dmtcp_coordinator. Nate added this function to the shared .bashrc, so it should be available in your environment on Mox.

Starting a checkpoint queue job

To start a checkpoint queue job we'll use sbatch instead of srun. See the sbatch documentation (https://slurm.schedmd.com/sbatch.html) for a refresher on starting HPC jobs using sbatch.

To request a job on the checkpoint queue put the following in the top of your sbatch script.

   #SBATCH --export=ALL
   #SBATCH --account=comdata-ckpt
   #SBATCH --partition=ckpt

You might have other stuff in your SBATCH script to request a certain number of cores or memory. Those will matter when we run wikiq below, but here they can be whatever they would be if you were running an sbatch job on one of our machines. The next thing you need to do specifically for a ckpt job is to run start_dmtcp_coordinator. This function takes care of making sure that we start a coordinator using the right set of ports and temporary files. We still need to pass in the interval at which we want checkpoints. The bigger this interval, the faster your job will run, but the more work will be lost when it's interrupted.

   start_dmtcp_coordinator -i 600  #checkpoint every 10 minutes

Next you need to run your job in a special way so that it is managed by dmtcp and restarted if it gets interrupted.

   # The restart script is created by dmtcp_launch after initialization
   if [ -x dmtcp_restart_script.sh ]; then
       bash dmtcp_restart_script.sh
   else
       # On first pass, run program under DMTCP
       dmtcp_launch --rm $your_script.sh	# must run interpreter for scripts
   fi
 

This works because dmtcp_restart_script.sh is created when you launch your job using dmtcp_launch. If that script exists, your sbatch script should run it instead of launching your job from scratch again.

There are options that you can pass to dmtcp_launch that can be important. In particular --checkpoint-open-files and --allow-file-overwrite modify how IO is checkpointed.
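
For example, a launch line that also checkpoints open files might look like this (my_script.sh is a placeholder):

 dmtcp_launch --rm --checkpoint-open-files ./my_script.sh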

Running wikiq with dmtcp and parallel_sql

To run wikiq with parallel_sql, the following need to be arranged:

  1. A shell script for each dumpfile that makes a workspace for dmtcp to keep its data and restart script.
  2. These shell scripts loaded into parallel_sql.
  3. An sbatch script that gets a checkpoint node and starts running jobs from parallel_sql.
  4. A way to restart jobs that get interrupted, using parallel_sql.

You first need to set up parallel_sql on Hyak: https://wiki.cac.washington.edu/display/hyakusers/Hyak+parallel-sql#Hyakparallel-sql-Usingparallel-sql

Nate made a python script that generates the scripts and makes a file listing all of them. Notice that each dumpfile gets a script, its own checkpoint directory, and a line in wikiq_parallel_jobs.sh.

#!/usr/bin/env python3
from os import path
import os
import stat
import glob

archives = glob.glob("/gscratch/comdata/raw_data/wikia_dumps/2010-04-mako/*.xml.7z")

scripts_dir = '/gscratch/comdata/users/nathante/wikiq_parallel_scripts'
output_dir =  '/gscratch/comdata/users/nathante/wikiq_output'
checkpoint_dir = '/gscratch/comdata/users/nathante/wikiq_checkpoint'

if not path.isdir(scripts_dir):
    os.mkdir(scripts_dir)

if not path.isdir(output_dir):
    os.mkdir(output_dir)

script ="""#!/bin/bash
mkdir -p {0}
cd {0}
start_dmtcp_coordinator -i 60  # checkpoint every 60 seconds

if [ -x dmtcp_restart_script.sh ]; then
    bash dmtcp_restart_script.sh
else
    # On first pass, run program under DMTCP
    dmtcp_launch --rm {1}
fi
"""

with open("wikiq_parallel_jobs.sh",'w') as calls:
    for dumpfile in archives:
        wikiq_base_call = f"wikiq -u -o {output_dir} {dumpfile}"
        wikiq_call = wikiq_base_call
        wiki = path.split(dumpfile)[1]
        wikiq_script = script.format( path.join(checkpoint_dir,wiki), wikiq_call)

        script_file = path.join(scripts_dir, wiki + '.sh')
        with open(script_file,'w') as of:
            of.write(wikiq_script)
        
        os.chmod(script_file,os.stat(script_file).st_mode | stat.S_IEXEC)

        calls.write(script_file)
        calls.write('\n')

We also need an sbatch script as parallel_sql_job.sh.

#!/bin/bash
## parallel_sql_job.sh
#SBATCH --job-name=wikiq_dmtcp
## Allocation Definition
#SBATCH --account=comdata-ckpt
#SBATCH --partition=ckpt
## Resources
## Nodes. This should always be 1 for parallel-sql.
#SBATCH --nodes=1    
## Walltime (12 hours)
#SBATCH --time=12:00:00
## Memory per node
#SBATCH --mem=100G

module load parallel_sql

#Put here commands to load other modules (e.g. matlab etc.)
#Below command means that parallel_sql will get tasks from the database
#and run them on the node (in parallel). So a 16 core node will have
#16 tasks running at one time.
parallel-sql --sql -a parallel --exit-on-term

Next load the scripts into parallel_sql

 module load parallel_sql
 cat wikiq_parallel_jobs.sh | psu --load

We can now fire up a whole bunch of checkpoint nodes. The limit is technically 2000! But let's just ask for 10 nodes :)

  for job in $(seq 1 10); do sbatch parallel_sql_job.sh; done

If our jobs get interrupted we'll need to run psu --reset-slurm to set them back into the avail state. We can run a little script on a login node to do this automatically every minute or so.

#!/usr/bin/env python3
## auto_reset_psu.py
import time
import subprocess

# poll parallel_sql; as long as any of our tasks are still listed as running,
# keep resetting interrupted tasks back to the 'avail' state every minute
running = subprocess.run(["psu", "--show-running"], universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(running)
while hasattr(running, 'stdout') and len(running.stdout) > 0:
    subprocess.run(["psu", "--reset-slurm"])
    time.sleep(60)
    running = subprocess.run(["psu", "--show-running"], universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

That's it! Unleash the power of the checkpoint queue! Reach out to Nate if you try this and have problems or if you have any questions!

New Datasets

If you want to download a new dataset to Hyak you should first check to ensure that there is enough space in the current allocation (e.g., with cat /gscratch/comdata/usage_report.txt). If there is not enough space in our allocation, contact Mako about getting our allocation increased. It should be fast and easy.

If there is enough space, you should download data to /gscratch/comdata/raw_data/YOURNEWDATASET.
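
For example (the dataset name and URL are placeholders):

 build_machine                                    # build nodes can reach the internet
 mkdir -p /gscratch/comdata/raw_data/YOURNEWDATASET
 cd /gscratch/comdata/raw_data/YOURNEWDATASET
 wget https://example.org/dataset.tar.gz          # hypothetical download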

Once you have finished downloading, you should set all the files you have downloaded as read-only to prevent people from accidentally creating new files, overwriting data, etc. You can do that with the following commands:

$ cd /gscratch/comdata/raw_data/YOURNEWDATASET
$ find . -not -type d -print0 |xargs -0 chmod 440
$ find . -type d -print0 |xargs -0 chmod 2550

Common Troubles and How to Solve Them

Help! I'm over CPU quota and Hyak is angry!

Don't panic. Everyone has done this at least once. Mako has done it dozens of times. It is a little bit difficult to deal with but can be solved. You are not in trouble.

The usual reason this happens is that you've accidentally run something on a login node that ought to be run on a compute node. The solution is to find the badly behaved process and then use kill to end it.

If it's a script or command running in your command line, press Ctrl-c to kill it. If you backgrounded it, type fg to foreground it and then press Ctrl-c. But if you ran parallel, you'll need to kill parallel itself.

ps -faux | grep <your username> will show you all the things you are running (or have someone else run it for you if the spam is so terrible you can't get a command to run). The first column has the usernames, the second column has the process IDs, the last column has the things you're running.

[Screenshot: example output of ps -faux | grep <your username>]

In the screenshot, the red is the user name being grepped for. At the end of the line the last three entries are the time (in hyak time, type date if you want to compare hyak time to your time), then how much CPU time something has consumed, then a little diagram of parent and child processes. You want parallel (in the example, 9977).

Killing the child process (in the example, 9992) won't likely help because parallel will just go on to the next task you queued up for it. You will need to run something like: kill <process id>
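
Using the example from the screenshot, that would be:

 kill 9977          # kill the parent parallel process, not just its children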