{{notice|This page describes using Hyak via ''Ikt'' which was the version of Hyak we used from 2014 through May 2020 when it was deprecated and replaced with a new version Hyak called ''Mox''. Up-to-date information on using Hyak is online at [[CommunityData:Hyak]].}}

To use Hyak, you must first have a UW NetID, access to Hyak, and a two-factor authentication token. Details on getting set up with all three are available at [[CommunityData:Hyak setup]].

There are a number of other sources of documentation:
* [http://students.washington.edu/hpcc/using-hyak/information-for-beginner-users/slides-from-training-sessions/ Slides from the UW HPC Club]
* [http://wiki.hyak.uw.edu Hyak User Documentation]
* [[CommunityData:Hyak (Advanced)|Advanced Hyak]]
* From Summer 2019: [[CommunityData:Hyak tutorial | Hyak Tutorial]]


== Setting up SSH ==


When you connect to SSH, it will ask you for a key from your token. Typing this in every time you start a connection can be a pain. One approach is to create an <code>.ssh</code> config file that will create a "tunnel" the first time you connect and send all subsequent connections to Hyak over that tunnel. Some details are available [http://wiki.cac.washington.edu/display/hyakusers/Logging+In in the Hyak documentation].


I've added the following config to the file <code>~/.ssh/config</code> on my laptop (you will want to change the username):


  Host ikt hyak
     User makohill
     HostName login2.hyak.washington.edu
     ControlPath ~/.ssh/master-%r@%h:%p
     ControlMaster auto

The first time you connect (e.g., by running <code>ssh hyak</code>), it will prompt you for your UW NetID's password and your PRN, which is the little number that comes from your token.
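
As a rough sketch of how the <code>ControlMaster</code> setup behaves in practice (the host alias is the one defined above; the exact prompts depend on your token and NetID configuration):

 $ ssh hyak                   # first connection: prompts for password and token, becomes the master
 $ ssh hyak                   # later connections reuse the master tunnel without prompting
 $ scp hyak:results.tsv .     # scp and rsync reuse the same tunnel as well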


== Setting up your user's Hyak environment with CDSC tools ==


When setting up Hyak, you must first add these two stanzas to the '''very top''' and the '''very bottom''' of your <code>~/.bashrc</code> file. Generally, you can simply edit that file directly on Hyak.


  ##  hyak-cdsc specific options -- TOP OF FILE
  source /com/gentoo/etc/profile
  ##  end hyak-cdsc specific options -- TOP OF FILE
 
  ## BEGIN hyak-cdsc specific options -- BOTTOM OF FILE
  source /etc/profile.d/modules.sh
  module load parallel_sql
  alias int_machine='srun -p comdata-int --time=500:00:00 --mem=200G --pty /bin/bash'
  alias big_machine='srun -p comdata --time=500:00:00 --mem=200G --pty /bin/bash'
  alias any_machine='srun -p comdata --time=500:00:00 --mem=100G --pty /bin/bash'
  alias build_machine='srun -p build --time=8:00:00 --mem=10G --pty /bin/bash'
  alias rgrep='grep -r'
  MC_CORES=16
  PATH="/com/local/bin:/sw/local/bin:$PATH"
  R_LIBS_USER="~/R"
  umask 007
  ## END hyak-cdsc specific options -- BOTTOM OF FILE


These stanzas are new as of '''November 30, 2017.''' As a result, '''you must completely remove the old environment variables, aliases, and such from your <code>~/.bashrc</code>; they include material that will screw things up.'''

The final <code>umask 007</code> line is particularly important. If you do not include it, the files you create on Hyak will not be readable or writable by others in the group!
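
The new settings take effect the next time you log in; to pick them up in a shell that is already open, you can usually just re-source the file:

 $ source ~/.bashrc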


These lines mean that if I have "checked out" an interactive machine, I can ssh from my computer to Hyak and then directly through an additional hop to the machine (like <code>ssh n0652</code>). The ForwardX11 lines mean that if I graph things in this window, they will open on my local display.
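
The stanza that paragraph refers to is not included in this excerpt. A minimal sketch of what it could look like, assuming compute nodes with names like <code>n0652</code> that are reached by hopping through the <code>hyak</code> host defined above (adjust the host pattern and username to your own setup):

  Host n0* n1*
     User makohill
     ProxyCommand ssh -W %h:%p hyak
     ForwardX11 yes
     ForwardX11Trusted yes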


=== Python Packages ===

If you need python libraries that are not installed in the shared environment:

 $ pip3 install --user YOURLIBHERE

...replacing YOURLIBHERE with the name of the library you need, e.g. 'pandas'. The --user option will install it just for you.

If you have a lot of dependencies for a specific project, consider using [[#Python Virtual Environments|Python Virtual Environments]] instead.
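
For example, to install pandas for your user only and confirm that it can be imported (pandas here is just an illustration):

 $ pip3 install --user pandas
 $ python3 -c "import pandas; print(pandas.__version__)"
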
== Jupyter Notebook on Hyak ==

1. Choose a number you are going to use as a port. We should each use a different port and the number should be between 1000 and 65000. It doesn't matter what it is, but it needs to be unique, so pick something distinctive. In the following instructions, replace '''$PORT''' with your number.

2. Connect to Hyak and forward the port from your local machine to the login node:

 ssh -L localhost:'''$PORT''':localhost:'''$PORT''' '''username'''@hyak.washington.edu

You can also add the following line to the Hyak section of the <code>.ssh/config</code> file on your laptop:

    LocalForward '''$PORT''' localhost:'''$PORT'''

3. We're going to need to connect to one of the compute servers ''twice'', so we'll use a program called <code>tmux</code>. Tmux is very similar to (but a little easier to learn than) a program called <code>screen</code>; if you know screen, just use that. Otherwise, run tmux like this:

 tmux

You can tell you're in tmux because of the green line at the bottom of the screen. tmux lets you switch between windows (listed in that green bar); here are the basic commands:

* Start a new tmux: '''tmux''' (at the command line)
* Connect to an existing tmux: '''tmux attach''' (at the command line)
* Create a new window: '''Ctrl-b c''' (from ''within'' tmux)
* Switch to window ''N'': '''Ctrl-b N''' (from ''within'' tmux)
* Disconnect from tmux: '''Ctrl-b d''' (from ''within'' tmux)

4. "Check out" a compute node:

 any_machine

5. Start jupyter:

 jupyter-notebook --no-browser --port='''$PORT'''

You'll see that jupyter keeps running in this terminal. That can be useful because when there are errors, they will sometimes be displayed here; generally, you can just ignore it.

6. Create a new window in tmux/screen. At this point, you have jupyter running on the compute node on $PORT and you have forwarded the port from your laptop to the login node. The only thing missing is the tunnel from the login node to the compute node within Hyak. To create it, open a new window inside tmux ('''Ctrl-b c''') and run the following command, where <code>nabcd</code> is the name of the compute node you checked out (replace <code>abcd</code> with the node number):

 ssh -N -f -L localhost:'''$PORT''':localhost:'''$PORT''' nabcd

7. Open localhost:'''$PORT''' in the browser on your local machine. It should work! Keep in mind that anyone with access to your jupyter session can do anything you can do on the command line, including access all your data, delete files, etc.


== Set up a password for Jupyter Notebook on Hyak ==

Once you have IPython/Jupyter up and running on Hyak and have set up all the port forwarding stuff described above, you might consider adding a password to secure your Jupyter session. Why bother? Anyone with access to Hyak can see that you're forwarding ''something'' via the login node. While unlikely, they may do something to interrupt or otherwise mess with your session. With a password, you can make this much less likely.

Instructions for setting up a password on your Jupyter sessions are available on the [https://sig.washington.edu/itsigs/Hyak_IPython#Set_a_password_on_your_notebook Hyak wiki (UW login required)]. Note that you can/should skip the first command there, which loads the canopy module.
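
If you cannot reach that page, recent Jupyter releases can also set a hashed notebook password directly from the command line; this assumes the Jupyter installed on Hyak is new enough to provide the subcommand:

 $ jupyter notebook password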


== Running Jobs on Hyak ==

{{notice|This material is now out of date! It refers to the old version of the Hyak scheduler.}}

<div style="width: 300px; float: right; border: 1px solid black; background: #DDD; padding: 0.5em;">
'''Screencast Examples (Sep, 2019):'''

* Using parallel and batch jobs on ikt: [https://communitydata.cc/~mako/hyak_example_day2-20190906.ogv Video]

'''Screencast Examples (Feb, 2018, pre-SLURM):'''

* Interactive job (ikt): [https://communitydata.cc/~mako/hyak_example_interactive_job-20180215-part_1.ogv Part 1], [https://communitydata.cc/~mako/hyak_example_interactive_job-20180215-part_2.ogv Part 2]
* Batch Job (ikt): [https://communitydata.cc/~mako/hyak_example_batch_job-20180517.ogv Video]
</div>

When you first log in to Hyak, you will be on a "login node". These are nodes that have access to the Internet and can be used to update code, move files around, etc. They should not be used for computationally intensive tasks. To actually run jobs, there are a few different options, described in detail [https://sig.washington.edu/itsigs/Hyak_Job_Scheduler in the itSigs documentation]. Following are basic instructions for some common use cases.

=== Interactive nodes ===

For simple tasks, e.g. running R on a dataset or testing that code is working, it is easiest to use an interactive node. This is a compute node that you interact with through the terminal. All of your disk storage is accessible just as though you were on the login node.
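
For example, you can check out an interactive node using the aliases defined in the <code>~/.bashrc</code> stanza above (the partition names and limits are the ones shown there):

 $ any_machine      # interactive shell on the comdata partition with 100GB of memory
 $ big_machine      # same partition, but with 200GB of memory
 $ int_machine      # interactive shell on the comdata-int partition
 $ build_machine    # short (8 hour, 10GB) session on the build partition
 $ exit             # give the node back when you are done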


=== Parallel SQL ===

For big jobs you will want to use multiple nodes. Hyak has a very cool tool that makes this very easy, called Parallel SQL. Detailed instructions are in [https://sig.washington.edu/itsigs/Hyak_parallel-sql the itsigs parallel-sql documentation]. There is also a [[CommunityData:Hyak walkthrough|full walkthrough example with instructions]].

The basic workflow is:

0. Be empowered to run parallel_sql. The first time you use it, you will need to run:

  login$ module load parallel_sql
  login$ sudo pssu --initial
  [sudo] password for USERID: <Enter your UW NetID password>

See [https://wiki.cac.washington.edu/display/hyakusers/Hyak+parallel-sql the UW parallel-sql documentation] for more information. If you're not initialized, parallel_sql will say "Cannot read database config file '/usr/lusers/<<your username>>/.parallel/db.conf': No such file or directory" when you try to use it.

1. Prepare the code, and test it with a single file (either on your computer, or on an interactive node).

2. Write a job_script file. This tells the node what job to run. There is an example on the Parallel SQL wiki page (linked above), another in the wikiresearch/hyak_example directory, and a rough sketch below.
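
The job_script itself is not reproduced on this page. As a very rough sketch only: the PBS directives below mirror the qsub options used elsewhere on this page, and the <code>parallel-sql</code> invocation should be double-checked against the itsigs example before use.

 #!/bin/bash
 ## sketch of a parallel_sql job_script -- verify directives against the itsigs parallel-sql example
 #PBS -N parallel_sql_job
 #PBS -W group_list=hyak-mako
 #PBS -l nodes=1:ppn=16,mem=100gb,walltime=24:00:00
 module load parallel_sql
 cd $PBS_O_WORKDIR
 ## pull tasks from the parallel_sql queue until none are left (-j matches ppn above)
 parallel-sql --sql -a parallel --exit-on-term -j 16
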
3. Create a task_list file. This is a list of commands that should be run, with one line per file that the command should operate on. An example file might look something like:

 python analysis_script.py -i ./input/wiki_1.tsv -o ./output/wiki_1_analysis.tsv
 python analysis_script.py -i ./input/wiki_2.tsv -o ./output/wiki_2_analysis.tsv
 ...

The README in the hyak_example directory has some example bash commands that you might use to generate this file; a sketch of the idea follows.
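
That README is not reproduced here, but the general idea is a loop over the input files that prints one command per file, e.g. (assuming the input/output layout used in the example above):

 for f in ./input/wiki_*.tsv; do
     echo "python analysis_script.py -i $f -o ./output/$(basename $f .tsv)_analysis.tsv"
 done > task_list
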
4. Load the task_list into Parallel SQL.

 $ module load parallel_sql
 $ cat task_list | psu --load

5. Run the job_script on as many nodes as you need. When each task is finished, the node will get the next task from Parallel SQL.

 $ for job in $(seq 1 N); do qsub job_script; done    # N is the number of nodes

You can also use the -t flag, which makes jobs using multiple nodes easier to kill, but it is not recommended by "the HYAK people".

 $ qsub job_script -t 0-N    # N is the number of nodes

For producing your task_list file, you might find it useful to write a python script that slurps up a list of files from a directory and then inserts those filenames into a command file to be run repeatedly:

 #!/usr/bin/env python3
 import glob
 outfile = "many_Redir_Runs.txt"
 infileDir = "/com/raw_data/complete_wmf_dumps-20180220/enwiki-20180301/"
 # get all the 7z metahistory files
 fileList = glob.glob(infileDir + "enwiki-20180301-pages-meta-history*.7z")
 with open(outfile, 'w') as outFileHandle:
     for file in fileList:
         cleanFile = file.split("/")[-1]
         commandString = "7za x -so " + file + " | python ./01-extract_redirects.py > output/redir/" + cleanFile + ".tsv\n"
         outFileHandle.write(commandString)


=== R Markdown ===

[http://rmarkdown.rstudio.com/ R markdown] is a useful way of writing up your analysis as a mix of explanatory text and code. You can, for example, create fancy Tufte-style [https://rstudio.github.io/tufte/ handouts] with code and explanatory text in the [https://raw.githubusercontent.com/rstudio/tufte/master/inst/rmarkdown/templates/tufte_html/skeleton/skeleton.Rmd same file]! To render an R markdown file on a compute node, run the following command:

 $ Rscript -e "rmarkdown::render('analysis.Rmd')"


=== Python Virtual Environments ===

Python virtual environments are a great way to manage project dependencies, and they seem to work on Hyak in the same way that they do on local machines. First install virtualenv using pip (this only needs to be done once):

 $ pip install virtualenv --user

Then initialize a new virtual environment in the current directory. Many people create a new virtual environment for each project:

 $ # this virtual environment will use python 3
 $ virtualenv venv -p python3

To activate the virtual environment from a login node or an interactive compute node:

 $ source <path_to_venv_parent_dir>/venv/bin/activate

To load a virtual environment in parallel sql, add the following to your PBS bash script:

 source <path_to_venv_parent_dir>/venv/bin/activate
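
In either case, once the environment is active, <code>pip</code> installs go into the environment rather than into your user site-packages. A quick illustration from an interactive session (the package name is just an example):

 (venv) $ pip install pandas
 (venv) $ python -c "import pandas; print(pandas.__version__)"
 (venv) $ deactivate    # return to the default environment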


=== Killing jobs on compute nodes ===