CommunityData:Hyak Ikt (Deprecated)

{{notice|This page describes using Hyak via ''Ikt'' which was the version of Hyak we used from 2014 through May 2020 when it was deprecated and replaced with a new version Hyak called ''Mox''. Up-to-date information on using Hyak is online at [[CommunityData:Hyak]].}}
To use Hyak, you must first have a UW NetID, access to Hyak, and a two factor authentication token. Details on getting set up with all three are available at [[CommunityData:Hyak setup]].


There are a number of other sources of documentation:
* [http://students.washington.edu/hpcc/using-hyak/information-for-beginner-users/slides-from-training-sessions/ Slides from the UW HPC Club]
* [http://wiki.hyak.uw.edu Hyak User Documentation]
* [[CommunityData:Hyak (Advanced)|Advanced Hyak]]
* From Summer 2019: [[CommunityData:Hyak tutorial | Hyak Tutorial]]


== Setting up SSH ==  
When you connect to SSH, it will ask you for a key from your token. Typing this in every time you start a connection can be a pain. One approach is to create an <code>.ssh</code> config file that will create a "tunnel" the first time you connect and send all subsequent connections to Hyak over that tunnel. There are some details in the Hyak documentation.

I've added the following config to the file <code>~/.ssh/config</code> on my laptop (you will want to change the username):


  Host ikt hyak
     User makohill
     HostName login2.hyak.washington.edu
     ControlPath ~/.ssh/master-%r@%h:%p
     ControlMaster auto
     ControlPersist yes
     ForwardX11 yes
     ForwardX11Trusted yes
     Compression yes

ONE WARNING: If your SSH connection becomes stale or disconnected (e.g., if you change networks) it may take some time for the connection to time out. Until that happens, any connections you make to hyak will silently hang. If your connections to <code>ssh hyak</code> are silently hanging but your Internet connection seems good, look for ssh processes running on your local machine with:

  ps ax|grep hyak

If you find any, kill them with <code>kill <PROCESSID></code>. Once that is done, you should have no problem connecting to Hyak.

== Connecting to Hyak ==

To connect to Hyak, you now only need to do:

  ssh hyak

It will prompt you for your UW NetID password and your PRN, which is the little number that comes from your token.


== Setting up your user's Hyak environment with CDSC tools ==


When setting up Hyak, you must first add these two stanzas to the '''very top''' and the '''very bottom''' of your <code>~/.bashrc</code> file. You can edit that file directly on Hyak.


  ## hyak-cdsc specific options -- TOP OF FILE
  source /com/gentoo/etc/profile
  ## end hyak-cdsc specific options -- TOP OF FILE


  ## BEGIN hyak-cdsc specific options -- BOTTOM OF FILE
  source /etc/profile.d/modules.sh
  module load parallel_sql
  
  alias int_machine='srun -p comdata-int --time=500:00:00 --mem=200G --pty /bin/bash'
  alias big_machine='srun -p comdata --time=500:00:00 --mem=200G --pty /bin/bash'
  alias any_machine='srun -p comdata --time=500:00:00 --mem=100G --pty /bin/bash'
  alias build_machine='srun -p build --time=8:00:00 --mem=10G --pty /bin/bash'
  alias rgrep='grep -r'
  
  MC_CORES=16
  PATH="/com/local/bin:/sw/local/bin:$PATH"
  R_LIBS_USER="~/R"
  
  umask 007
  ## END hyak-cdsc specific options -- BOTTOM OF FILE

These are new as of November 30, 2017. As a result, you must completely remove the old environment variables and such; they include material that will screw things up. The final line is particularly important: if you do not set <code>umask 007</code>, the files you create on Hyak may not be readable or writable by others in the group!

Once you do this, you will need to restart bash. This can be done simply by logging out and then logging back in, or by restarting bash with the command <code>exec bash</code>.

I also add these two lines to my Hyak <code>.ssh/config</code>:

  ForwardX11 yes
  ForwardX11Trusted yes

These lines mean that if I have "checked out" an interactive machine, I can ssh from my computer to Hyak and then directly through an additional hop to the machine (like <code>ssh n0652</code>). The ForwardX11 lines mean that if I graph things in this window, they will open on my local display.

== Python Packages ==
If you need python libraries that are not installed in the shared environment:


  $ pip3 install --user YOURLIBHERE


...replacing YOURLIBHERE with the name of the library you need, e.g. 'pandas'. The --user option will install it for just you.
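
If you want to double check where a <code>--user</code> installed library is coming from, here is a quick optional sketch (not part of the original instructions; 'pandas' is just the example library mentioned above):

  #!/usr/bin/env python3
  # Optional sanity check (not from the original instructions): confirm that a
  # --user installed library is loading from your user site-packages directory.
  import importlib
  import site
  libname = "pandas"  # example; substitute whatever library you installed
  mod = importlib.import_module(libname)
  print("user site-packages:", site.getusersitepackages())
  print(libname, "loaded from:", mod.__file__)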


If you have a lot of dependencies for a specific project, consider using [[#Python Virtual Environments|Python Virtual Environments]].

== Parallel R ==
 
The Hyak (ikt) machines have 16 CPU cores and the Mox machines will have 28. Running your program on all the cores can speed things up a lot! We make heavy use of R for building datasets and for fitting models. Like most programming languages, R uses only one CPU by default. However, for typical computation-heavy data science tasks it is pretty easy to make R use all the cores.
 
For fitting models, the R installed in Gentoo should use all cores automatically. This is thanks to OpenBlas, which is a numerical library that implements and parallelizes linear algebra routines like matrix factorization, matrix inversion, and other operations that bottleneck model fitting.
 
However, for building datasets, you need to do a little extra work. One common strategy is to break up the data into independent chunks (for example, when building wikia datasets there is one input file for each wiki) and then use <code>mclapply</code> from <code>library(parallel)</code> to build variables from each chunk. Here is an example:
 
    library(parallel)
    options(mc.cores=detectCores())  ## tell R to use all the cores
   
    mcaffinity(1:detectCores()) ## required and explained below
 
    library(data.table) ## for rbindlist, which concatenates a list of data.tables into a single data.table
   
    ## imagine defining a list of wikis to analyze
    ## and a function to build variables for each wiki
    source("wikilist_and_buildvars")
   
    dataset <- rbindlist(mclapply(wikilist,buildvars))
   
    mcaffinity(rep(1,detectCores())) ## return processor affinities to the status preferred by OpenBlas
 
A working example can be found in the [[Message Walls]] git repository.
 
<code>mcaffinity(1:detectCores())</code> is required for the Gentoo R <code>library(parallel)</code> to use multiple cores. The reason is technical and has to do with OpenBlas. Essentially, OpenBlas changes settings that govern how R assigns processes to cores. OpenBlas wants all processes assigned to the same core, so that the other cores do not interfere with its fancy multicore linear algebra. However, when building datasets, the linear algebra is not typically the bottleneck. The bottleneck is instead operations like sorting and merging that OpenBlas does not parallelize.
 
The important thing to know is that if you want to use <code>mclapply</code>, you need to do <code>mcaffinity(1:detectCores())</code> first. If you then want to fit models, you should do <code>mcaffinity(rep(1,detectCores()))</code> so that OpenBlas can do its magic.
 
== Jupyter Notebook on Hyak ==
 
1. Choose a number you are going to use as a port. We should each use a different port; the number should be between 1000 and 65000, and it doesn't matter what it is as long as it is unique (see the optional helper sketch at the end of this section). In the following instructions, replace '''$PORT''' with your number.
 
2. Connect to Hyak and forward the port from your local machine to the login node:
 
ssh -L localhost:'''$PORT''':localhost:'''$PORT''' '''username'''@hyak.washington.edu
 
You can also add the following line to the Hyak section on your local .ssh/config file on your laptop:
 
    LocalForward '''$PORT''' localhost:'''$PORT'''
 
3. We're going to need to connect to one of the compute servers ''twice''. As a result, we'll use a program called <code>tmux</code>. Tmux is very similar to (but a little easier to learn than) a program called <code>screen</code>. If you know screen, just use that. Otherwise, run tmux like:
tmux
 
You can tell you're in tmux because of the green line at the bottom of the screen.
 
4. "Check out" a compute node
 
any_machine
 
5. Keep track of which machine you are on. It should be something like '''n0650''' and it should be displayed on the prompt. We'll refer to it as '''$HOST''' below.
 
6. Start jupyter on the compute node:
 
jupyter-notebook --no-browser --port='''$PORT'''
 
You'll see that jupyter keeps running in this terminal. This can be useful because when there are errors, they will sometimes be displayed there. Generally, though, you can just ignore it.
 
7. Create a new window in tmux/screen.
 
At this point, you have jupyter running on the compute node on $PORT. You also will have forwarded the port from your laptop to the login node. We're really only missing one thing which is the tunnel from the login node to the compute node within hyak. To do this, we'll create a new window inside tmux with the keystroke '''Ctrl-b c'''.
 
If you're not familiar with it, you'll want to read the [[CommunityData:tmux]] page, which includes a quick cheatsheet. To switch back to the original window running jupyter, you should type '''Ctrl-b 0'''. If you switch, though, be sure to switch back to the new window with '''Ctrl-b 1'''.
 
Because you originally ran tmux on the login node, the new window/terminal will be opened within tmux on the login node.
 
8. Open a tunnel from the login node to the compute node:
 
  ssh -L localhost:'''$PORT''':localhost:'''$PORT''' '''$HOST'''
 
9. In your local browser, open localhost:'''$PORT'''.
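
Step 1 above asks everyone to pick a unique port. If you'd like a deterministic way to do that, here is an optional helper sketch (illustrative only; the hashing scheme is an assumption, not something the group standardized on) that maps your username to a stable number in the allowed range:

  #!/usr/bin/env python3
  # Illustrative helper only: derive a stable, likely-unique port in the
  # 1024-65000 range from your username so group members tend not to collide.
  import getpass
  import hashlib
  user = getpass.getuser()
  digest = hashlib.sha256(user.encode()).hexdigest()
  port = 1024 + int(digest, 16) % (65000 - 1024)
  print("Suggested $PORT for", user, "is", port)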


== Set up a password for Jupyter Notebook on Hyak ==
Once you have IPython/Jupyter up and running on Hyak and have set up all the port forwarding stuff described above, you might consider adding a password to secure your Jupyter session. Why bother? Anyone with access to Hyak can see that you're forwarding something via the login node. While unlikely, they may do something to interrupt or otherwise mess with your session. Keep in mind that anyone with access to your jupyter session can do anything you can do on the command line, including accessing all your data, deleting files, etc.

Instructions for setting up a password on your Jupyter sessions are available on the Hyak wiki (UW login required). Note that you can/should skip the first command that loads the canopy module.

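As a hedged sketch of what those instructions boil down to (assuming the classic <code>notebook</code> package; defer to the Hyak wiki instructions if your setup differs), you can generate the hashed password like this:

  #!/usr/bin/env python3
  # Hedged sketch: generate the hashed password for your Jupyter config.
  # Assumes the classic "notebook" package; follow the Hyak wiki if yours differs.
  from notebook.auth import passwd
  hashed = passwd()  # prompts for a password, returns something like 'sha1:...'
  print("Add this line to ~/.jupyter/jupyter_notebook_config.py:")
  print("c.NotebookApp.password = '%s'" % hashed)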

== Running Jobs on Hyak ==  
{{notice|This material is now out of date! It refers to the old version of the Hyak scheduler.}}


<div style="width: 300px; float: right; border: 1px solid black; background: #DDD; padding: 0.5em;">
'''Screencast Examples (Sep, 2019):'''

* Using parallel and batch jobs on ikt: [https://communitydata.cc/~mako/hyak_example_day2-20190906.ogv Video]

'''Screencast Examples (Feb, 2018, pre-SLURM):'''

* Interactive job (ikt): [https://communitydata.cc/~mako/hyak_example_interactive_job-20180215-part_1.ogv Part 1], [https://communitydata.cc/~mako/hyak_example_interactive_job-20180215-part_2.ogv Part 2]
* Batch Job (ikt): [https://communitydata.cc/~mako/hyak_example_batch_job-20180517.ogv Video]
</div>


When you first log in to Hyak, you will be on a "login node". These are nodes that have access to the Internet and can be used to update code, move files around, etc. They should not be used for computationally intensive tasks. To actually run jobs, there are a few different options, described in detail in the itSigs documentation. Following are basic instructions for some common use cases.

=== Interactive nodes ===

For simple tasks, e.g. running R on a dataset or testing that code is working, it is easiest to use an interactive node. This is a compute node that you interact with through the terminal. All of your disk storage is accessible just as though you were on the login node.

=== Parallel SQL ===

For big jobs you will want to use multiple nodes. Hyak has a very cool tool that makes this very easy, called Parallel SQL. Detailed instructions are in the itsigs parallel-sql documentation. There is also a full walkthrough example with instructions.


The basic workflow is:
0. Be empowered to run parallel_sql -- the first time you use parallel_sql, you will need to:

  login$ module load parallel_sql
  login$ sudo pssu --initial
  [sudo] password for USERID: <Enter your UW NetID password>

See more information at [https://wiki.cac.washington.edu/display/hyakusers/Hyak+parallel-sql the Hyak parallel-sql wiki page]. If you're not initialized, it'll say "Cannot read database config file '/usr/lusers/<<your username>>/.parallel/db.conf': No such file or directory" when you try.


1. Prepare the code, and test it with a single file (either on your computer, or on an interactive node).
2. Write a job_script file. This tells the node what job to run. There is an example on the Parallel SQL wiki page (linked above), and an example in the wikiresearch/hyak_example directory.
3. Create a task_list file. This is a list of commands that should be run, with one line per file that the command should operate on. An example file might look something like:


  python analysis_script.py -i ./input/wiki_1.tsv -o ./output/wiki_1_analysis.tsv
  python analysis_script.py -i ./input/wiki_2.tsv -o ./output/wiki_2_analysis.tsv
  ...

The README in the hyak_example directory has some example bash commands that you might use to generate this file.

4. Load the task_list into Parallel SQL.

  $ module load parallel_sql
  $ cat task_list | psu --load

5. Run the job_script on as many nodes as you need. When each task is finished, the node will get the next task from Parallel SQL.

  $ for job in $(seq 1 N); do qsub job_script; done
  # N is the number of nodes

You can also use the -t flag, which makes jobs using multiple nodes easier to kill, but is not recommended by "the HYAK people".
  $ qsub job_script -t 0-N
  # N is the number of nodes
For producing your task_list file, you might find it useful to make a python script that slurps up a list of files from a directory and then inserts those filenames into a command file to be run repeatedly:

  #!/usr/bin/env python3
  import glob
  outfile = "many_Redir_Runs.txt"
  infileDir = "/com/raw_data/complete_wmf_dumps-20180220/enwiki-20180301/"
  # get all the 7z metahistory files in the dump directory
  fileList = glob.glob(infileDir + "enwiki-20180301-pages-meta-history*.7z")
  # write one extraction command per input file
  with open(outfile, 'w') as outFileHandle:
      for file in fileList:
          cleanFile = file.split("/")[-1]
          commandString = "7za x -so " + file + " | python ./01-extract_redirects.py > output/redir/" + cleanFile + ".tsv \n"
          outFileHandle.write(commandString)


=== R Markdown ===
R markdown is a useful way of writing up your analysis as a mix of explanatory text and code. You can, for example, create fancy Tufte-style handouts with code and explanatory text in the same file! In order to use R markdown, run the following command on a compute node:
  $ Rscript -e "rmarkdown::render('analysis.Rmd')"


=== Python Virtual Environments ===

Python virtual environments are a great way to manage project dependencies, and they seem to work on Hyak in the same way that they do on local machines. First install virtualenv using pip (this only needs to be done once):

  $ pip install virtualenv --user

Initialize a new virtual environment in the current directory. Many people create a new virtual environment for each project.

  $ # this virtual environment will use python 3
  $ virtualenv venv -p python3

To activate the virtual environment from a login node or an interactive compute node:

  $ source <path_to_venv_parent_dir>/venv/bin/activate

To load a virtual environment in parallel sql, add the following to your PBS bash script:

  source <path_to_venv_parent_dir>/venv/bin/activate
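
After activating a virtual environment (interactively or inside a job script), here is a quick optional check, not part of the original instructions, that the environment is really the one providing your interpreter:

  #!/usr/bin/env python3
  # Optional check: confirm an activated virtualenv is providing your python.
  import os
  import sys
  print("VIRTUAL_ENV:", os.environ.get("VIRTUAL_ENV", "<not active>"))
  print("python executable:", sys.executable)
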
=== Killing jobs on compute nodes ===

Torque documentation suggests that you should do this with <tt>qdel</tt>. That might work, but apparently our system runs moab on top of torque and the recommended (by Hyak admins) way to kill a job is to use the <tt>mjobctl</tt> command.

For example, you might run <tt>nodestate</tt> from a login node to figure out the ID number for your job (let's say it's 12345), then run <tt>mjobctl -c 12345</tt> to send a SIGTERM signal or <tt>mjobctl -F 12345</tt> to send a SIGKILL signal that will bring job 12345 to an end.

Note that only four user accounts at a time can have the bits necessary to kill other people's jobs, so while you can do this on your own jobs, you'll need to bother the IRC channel to find help cancelling others' jobs (we think that Jeremy, Nate, Aaron, and Mako currently have the bits). Also, check out the [http://docs.adaptivecomputing.com/maui/commands/mjobctl.php documentation for mjobctl] for more info.

== Working on Hyak from a local emacs client ==

Some of us (like Nate) rely heavily on the Emacs text editor. [http://ess.r-project.org/ Emacs speaks statistics] is a powerful emacs mode for programming in R and doing data analysis. There are a few options for using Emacs on Hyak. If you open emacs on an interactive node with X-forwarding enabled, you will get a nice graphical emacs window and plots you make will be displayed on your screen, but if you disconnect from Hyak you will lose your R session. This makes running emacs the normal way on an interactive node unsuitable for fitting models. Another disadvantage is that you will be working with an X-forwarded emacs, which will not look as nice or be as responsive as your local emacs.

Alternatively, you might run emacs in console mode in tmux. Then Hyak will keep running your R process even when you log out. The downsides are that you can't view plots on your display (you could save them as a pdf and then open the pdf on your local machine) and that some emacs key chords collide with tmux key bindings, and configuring tmux to fix this is a pain.

A better way is to run emacs server on a compute node on Hyak and then open a local emacs client that connects to that server.

=== Instructions For ESS ===

''Unfortunately, this requires running emacsserver on a login node and viewing plots does not work. These problems should go away if Hyak let us forward X from a compute node and tunnel it through a login node. This doesn't seem to work, as <code>ssh -X n0649</code> doesn't seem to forward X.''

1. Open tmux on a ''login node'' and start emacsserver.

  $ tmux
  $ emacs --daemon

2. Still in tmux, start an interactive session.

  $ any_machine

3. In a new terminal (not tmux), ssh into the login node and start an emacs client (-c means in a new window).

  $ emacsclient -c

4. In this emacsclient, open a shell, ssh to the compute node, and start an R process.

  M-x shell
  $ ssh n0649
  $ R

5. With focus on the R process buffer in emacs, connect ESS to the R process.

  M-x ess-remote
