CommunityData:Hyak

This page is intended to replace the main CommunityData:Hyak page in the near future. This is a part of our transition to the new Slurm-based job scheduler. Some of the sections may be incomplete, and the instructions may not work. Feel free to edit and fix the content that is incorrect/out-of-date.

To use Hyak, you must first have a UW NetID, access to Hyak, and a two factor authentication token. Details on getting set up with all three are available at CommunityData:Hyak setup.

There are a number of other sources of documentation:


 * Slides from the UW HPC Club
 * Hyak User Documentation

Setting up SSH
When you connect over SSH, it will ask you for a key from your token. Typing this in every time you start a connection can be a pain. One approach is to create an .ssh config file that will create a "tunnel" the first time you connect and send all subsequent connections to Hyak over that tunnel. Some details are in the Hyak documentation.

I've added the following config to the file ~/.ssh/config on my laptop (you will want to change the username):

Host hyak-mox mox2.hyak.uw.edu
    User sdg1
    HostName mox2.hyak.uw.edu
    ControlPath ~/.ssh/master-%r@%h:%p
    ControlMaster auto
    ControlPersist yes
    Compression yes

If your SSH connection becomes stale or disconnected (e.g., if you change networks), it may take some time for the connection to time out. Until that happens, any connections you make to Hyak will silently hang. If your connections via ssh hyak-mox are silently hanging but your Internet connection seems good, look for ssh processes running on your local machine with:

ps ax | grep hyak

If you find any, kill them (e.g., with the kill command and the process ID). Once that is done, you should have no problem connecting to Hyak.
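As a runnable sketch of the find-and-kill sequence (using a harmless sleep as a stand-in for a stale ssh process, since the real PIDs will vary):

```shell
# Stand-in for a stale ssh control process (in real use you would find
# the PID with: ps ax | grep hyak)
sleep 300 &
pid=$!

ps -p "$pid" -o pid=,comm=      # confirm it is running; note the PID
kill "$pid"                     # kill the stale process by PID
wait "$pid" 2>/dev/null || true
echo "process $pid terminated"
```

The same pattern applies to a real stale ssh master: find the PID, kill it, then reconnect.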

Connecting to Hyak
To connect to Hyak, you now only need to run:

ssh hyak-mox

It will prompt you for your UW NetID password. Once you type it in, you will have to respond to a two-factor authentication request.

Setting Up Hyak
When setting up Hyak, you must first add this stanza to the very bottom of your .bashrc file. Generally, you can simply edit this file directly on Hyak:

The final line is particularly important. If you do not include it, the files you create on Hyak will not be readable or writable by others in the group!
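As an illustration (assuming the stanza's final line sets a group-friendly umask such as umask 007; check the actual stanza for the value the group uses), here is the effect on a newly created file:

```shell
# umask 007 removes "other" permissions but keeps group read/write,
# so files created under it can be shared with your group.
umask 007
tmpdir=$(mktemp -d)
touch "$tmpdir/demo_file"
stat -c '%a' "$tmpdir/demo_file"   # prints 660: owner rw, group rw, others none
rm -r "$tmpdir"
```

Without such a line, a typical default umask of 022 or 077 would leave group members unable to write (or even read) your files.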

Once you do this, you will need to restart bash. This can be done simply by logging out and then logging back in, or by replacing your current shell with a fresh one (e.g., exec bash).

X11 forwarding
You may also want to add these two lines to your Hyak .ssh/config:

ForwardX11 yes
ForwardX11Trusted yes

These lines mean that if you have "checked out" an interactive machine, you can ssh from your computer to Hyak and then make an additional hop directly to that machine (like ssh n0652). With X11 forwarding enabled, if you graph things in that session, they will open on your local display.

Moving files from ikt to mox
You can copy files at high speed without a password between the Hyak systems using commands like the ones below (instructions from the Hyak documentation).

From ikt to mox

ikt1$ hyakbbcp myfile mox1.hyak.uw.edu:/gscratch/comdata/users/YOUR_ID/YOUR_DIR
ikt1$ hyakbbcp -r mydirectory mox1.hyak.uw.edu:/gscratch/comdata/users/YOUR_DIR

From mox to ikt

mox1$ hyakbbcp myfile ikt1.hyak.uw.edu:/com/users/YOUR_DIR
mox1$ hyakbbcp -r mydirectory ikt1.hyak.uw.edu:/com/users/YOUR_DIR

Running Jobs on Hyak
When you first log in to Hyak, you will be on a "login node". These are nodes that have access to the Internet, and can be used to update code, move files around, etc. They should not be used for computationally intensive tasks. To actually run jobs, there are a few different options, described in detail in the Hyak User documentation. Following are basic instructions for some common use cases.

Interactive nodes
Interactive nodes are systems where you get a  shell from which you can run your code. This mode of operation is conceptually similar to running your code on your own computer, the difference being that you have access to much more CPU and memory. To check out an interactive node, run the  or   command from your login shell. Before running these commands, you will want to be in a  or   session so that you can start your job, and log off without having to worry about your job getting terminated.

At any given time, unless you are using the  (formerly the  ) queue, you can have one instance of   and three instances of   running at the same time. You may need to coordinate over IRC if you need to use a specific node for any reason.

Killing jobs on compute nodes
The Slurm scheduler provides a command called scancel to terminate jobs. For example, you might run queue_state from a login node to figure out the ID number for your job (let's say it's 12345), then run scancel --signal=TERM 12345 to send a SIGTERM signal or scancel --signal=KILL 12345 to send a SIGKILL signal that will bring job 12345 to an end.
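The difference between the two signals is that a job can catch SIGTERM and clean up before exiting, while SIGKILL cannot be caught. A small local sketch of that distinction, using a background shell process as a hypothetical stand-in for a Slurm job:

```shell
# A stand-in "job" that traps SIGTERM so it can clean up before exiting
bash -c 'trap "kill \$sp 2>/dev/null; echo cleaning up; exit 0" TERM; sleep 60 & sp=$!; wait' &
job=$!
sleep 1              # give the stand-in job time to install its trap
kill -TERM "$job"    # what `scancel --signal=TERM 12345` would deliver
status=0; wait "$job" || status=$?
echo "job exited with status $status"
```

Because the stand-in catches SIGTERM, it prints "cleaning up" and exits 0; a SIGKILL (as sent by scancel --signal=KILL) would end it immediately with no chance to clean up.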

Parallel R
The nodes on Hyak have 28 CPU cores, which can speed up your analysis significantly. If you are using R functions such as , there are parallelized equivalents (e.g.  ) which can take advantage of all the cores and give you up to a 28x speedup! However, be aware of your code's memory requirements: if you are running 28 processes in parallel, your memory needs can also go up to 28x, which may be more than the ~200GB that the  node will have. In such cases, you may want to dial down the number of CPU cores being used; a way to do that globally is to set the core limit in your code before calling any of the parallelized functions.
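Before deciding how many parallel workers to start, it can help to check from the shell how many cores the node actually exposes, and, per the memory caveat above, to cap the count (the halving rule below is only an illustrative rule of thumb, not a group convention):

```shell
# How many CPU cores does this node expose?
cores=$(nproc)
echo "cores available: $cores"

# If memory is the constraint, cap the worker count
workers=$(( cores / 2 ))
if [ "$workers" -lt 1 ]; then workers=1; fi
echo "workers when memory-bound: $workers"
```

Whatever number you settle on is what you would then pass to your R code as the global core limit.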

More information on parallelizing your R code can be found in the package documentation.

R packages
To install an R package that's not available globally, you can check out a build node and install the package locally. Here's how to do it:

This will start R, where you can install a package in the usual way. The build node has access to the Internet, so it will be able to download the required source packages, etc.

Python Packages
DO NOT TRUST THIS SECTION. Intel python appears to have some issues.

The recommended Python to use on Hyak is intel-python. This is a customized Anaconda distribution with optimizations that substantially increase the performance of numpy.

Using an Anaconda Python distribution has important implications for how you install packages. With a normal Python installation you would install packages using `pip`; with an Anaconda distribution you should use `conda` to install packages. Conda also has some fancy features like virtual environments for using different versions of Python, or different versions of packages, in different projects. The problem with conda is that it does not include all the packages you might want to use. If you want to install a Python package that is missing from conda, you can use pip.

Importantly, when using intel-python, you should prefer to install software using conda over pip.

 * Conda Documentation
 * Pip Documentation

The first time you use intel-python you need to create a custom environment for installing software:

conda create -n my_root

Then add the following to your .bashrc to use this environment:

if [ -z "$(conda info --env | grep my_root | grep '\*')" ]; then
    source activate my_root
fi

Conda doesn't like it when you try to activate an environment that is already active, which is why the snippet checks first.

Conda modifies your prompt in a possibly annoying way. To disable this behavior, run the command:

conda config --set changeps1 False

Custom modules
Software on Hyak can be outdated or, in some cases, not available at all. In some of these situations, it may be possible to use environment modules to install and run software without needing administrative (root) privileges. For example, it is possible to install and run the newest version of R from a central, shared directory, and it is even possible to have multiple versions of R available in parallel. The following subsection shows how to do this. Ordinarily, this should not be necessary on a day-to-day basis.

Installing and making available a custom module
If you are using  to run and manage your builds, keep in mind that   drops a few environment variables such as , which may mess up your build process. You should check that all the relevant environment variables are set before starting your build.

The first step toward installing and making available a custom module (in this case, R 3.5.0) is to spin up the build node, download R, compile it with a specific prefix, and install it.

The  option to   tells the build scripts where R is going to be installed. This follows a convention that we picked for where software in modules should go. The  option is the most important flag for  ; any other flag or option will be specific to the software being installed.

The second step is to write a module file, which contains the metadata about our module. Edit the file  to contain the following:

Note that the filename follows a similar convention as  earlier. This file sets up the  and   environment variables appropriately so that the specified version of R can be accessed and run as needed. There are many more directives that can go into a module file; see   for details on those directives.

Once this file is written out, the  command should list   as an available module. This is because the module system is set up to look inside  for module files, thanks to the   variable that is set through. The command  should make R available and ready for use. To avoid running  whenever you log in, you can add the command at the end of your   file (after the section that sets  ).

Spack
To use spack to manage software on Hyak, add the following to your .bashrc.
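The exact contents depend on where the group's spack checkout lives; as a hedged sketch (the path below is a placeholder assumption, not the group's actual location), the .bashrc addition generally looks like:

```shell
# Placeholder path: substitute the actual location of the spack checkout
export SPACK_ROOT="$HOME/spack"
# setup-env.sh defines the `spack` command and its shell integration
. "$SPACK_ROOT/share/spack/setup-env.sh"
```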

For directions on working with spack, see the spack documentation.