[https://hyak.uw.edu/ Hyak] is the University of Washington's high performance computing (HPC) system. The CDSC has purchased a number of "nodes" on this system, which you will have access to as a member of the group.
To use Hyak, you must first have a UW NetID, access to Hyak, and a two-factor authentication token, which you will need as part of [[CommunityData:Hyak setup|getting set up]]. The following links will be useful.
* [[CommunityData:Hyak setup]] [[CommunityData:Hyak Quick Reference]]
* [[CommunityData:Klone]] [[CommunityData:Klone Quick Reference]] (for the new Hyak nodes)
* [[CommunityData:Hyak Mox migration]]
* [[CommunityData:Hyak software installation]]
* [[CommunityData:Hyak Spark]]
* [[CommunityData:Hyak Datasets]]
* [[CommunityData:Hyak Ikt (Deprecreated)]]


There are a number of other sources of documentation beyond this wiki:


* [http://students.washington.edu/hpcc/using-hyak/information-for-beginner-users/slides-from-training-sessions/ Slides from the UW HPC Club]
* [http://wiki.hyak.uw.edu Hyak User Documentation]


== General Introduction to Hyak ==
The UW Research Computing Club has put together [https://depts.washington.edu/uwrcc/getting-started-2/hyak-training/ this excellent 90 minute training video] that introduces Hyak. It's probably a good place to start for anybody trying to get up-and-running on Hyak.


== Setting up SSH ==


=== X11 forwarding ===
{{notice|This is likely only applicable if you are a Linux user}}


You may also want to add these two lines to your Hyak <code>.ssh/config</code> (indented under the line starting with "Host"):
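
The exact lines depend on your setup, but a typical stanza looks something like the following sketch. <code>ForwardX11</code> and <code>ForwardX11Trusted</code> are standard OpenSSH options; match the Host line to whichever Hyak login host entry you already use:

  # indented under the Host entry you already use for Hyak
  Host klone.hyak.uw.edu
      ForwardX11 yes
      ForwardX11Trusted yes
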
It will prompt you for your UWNetID's password. Once you type in your password, you will have to respond to a [https://itconnect.uw.edu/security/uw-netids/2fa/ 2-factor authentication request].


== Setting up your Hyak environment ==


Everybody who uses Hyak as part of our group '''must''' add the following line to their <code>~/.bashrc</code> file on Hyak:


If you don't have a preferred terminal-style text editor, you might start with nano: run <code>nano ~/.bashrc</code>, arrow down to the bottom, paste in the 'source....' line, then ^O to save and ^X to exit. You'll know you were successful when you type <code>more ~/.bashrc</code> and see the 'source....' line at the bottom of the file. Copious information about using a terminal-style text editor is available online; common options include nano (basic), emacs (tons of features), and vim (fast).

This line will load scripts that will initialize a good data science environment and set the [[:wikipedia:umask|umask]] so that the files and directories you create are readable by others in the group. '''Please do this immediately before you do any other work on Hyak.''' When you are done, you can reload the shell by logging out and back into Hyak or by running <code lang="bash">exec bash</code>.
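
To double-check that the edit took effect, a quick sanity check like the following works (all standard shell commands):

<syntaxhighlight lang='bash'>
more ~/.bashrc    # the 'source....' line should appear at the bottom
exec bash         # reload the shell without logging out
umask             # should now report the group-friendly umask set by the shared scripts
</syntaxhighlight>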


== Using the CDSC Hyak Environment ==


By default you have access to a home directory with a relatively small quota. There are several dozen terabytes of CDSC-allocated storage in <code>/gscratch/comdata/</code> and you should explore that space. Typically we download large datasets to <code>/gscratch/comdata/raw_data</code> (see [[#New Datasets|the section on new datasets]] below), keep processed data in <code>/gscratch/comdata/output</code>, and put personal workspaces that need large data storage in <code>/gscratch/comdata/users/'''<YOURNETID>'''</code>.
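
For example, to create your personal workspace and check how much of the shared allocation is in use (paths follow the layout described above; the usage report file is also mentioned in the [[#New Datasets|New Datasets]] section):

<syntaxhighlight lang='bash'>
mkdir -p /gscratch/comdata/users/<YOURNETID>
cat /gscratch/comdata/usage_report.txt
</syntaxhighlight>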


=== Basic Commands ===
Displays jobs by members of the group.


Read the files in <code>/gscratch/comdata/env</code> to see how these commands are created (or run <code>which</code> on them), as well as other features not documented here.
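
For example (using the <code>queue_state</code> helper mentioned below; the other commands work the same way):

<syntaxhighlight lang='bash'>
which queue_state          # shows where the helper command comes from
ls /gscratch/comdata/env   # the files that define these commands
</syntaxhighlight>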


=== Anaconda ===
We recently switched to using Anaconda to manage Python on Hyak. Anaconda comes with the <code>conda</code> tool for managing Python packages and versions. Multiple Python environments can co-exist in a single Anaconda installation, which allows different projects to use different versions of Python or Python packages; this can be useful for maintaining projects that use old versions.


By default, our shared setup loads a conda environment called <code>minimal_ds</code> that provides recent versions of python packages commonly used in data science workflows.  This is probably a good setup for most use-cases, and allows everyone to use the same packages, but it can be even better to create different environments for each project.  See the [https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands anaconda documentation for how to create an environment].
 
To learn how to install Python packages, see the [[CommunityData:Hyak software installation#Python packages|Python packages installation instructions]] on this wiki.
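
For example, a minimal sketch of creating and using a project-specific environment (the environment name and Python version here are placeholders):

<syntaxhighlight lang='bash'>
conda create --name myproject python=3.10   # create a new environment
conda activate myproject                    # switch into it
conda deactivate                            # switch back to whatever was active before
</syntaxhighlight>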


=== SSH into compute nodes ===
The [https://wiki.cac.washington.edu/display/hyakusers/Hyak_ssh Hyak wiki] has instructions for enabling ssh within Hyak, reproduced below:


<blockquote>
You should be able to ssh from the login node to a compute node without giving a password. If it does not work, then do the steps below:

# <code>ssh-keygen</code>, then press enter for each question. This will ensure default options.
# <code>cd ~/.ssh</code>
# <code>cat id_rsa.pub >> authorized_keys</code>
</blockquote>
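
Once the key is in place, you can hop from the login node to a compute node that is running one of your jobs, for example (node name is illustrative):

  ssh n2344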


== Running Jobs on Hyak ==
The Slurm scheduler provides a command called [https://slurm.schedmd.com/scancel.html scancel] to terminate jobs. For example, you might run <tt>queue_state</tt> from a login node to figure out the ID number for your job (let's say it's 12345), then run <tt>scancel --signal=TERM 12345</tt> to send a SIGTERM signal or <tt>scancel --signal=KILL 12345</tt> to send a SIGKILL signal that will bring job 12345 to an end.
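
In practice that looks something like this (the job ID comes from whatever <tt>queue_state</tt> reports for your job):

<syntaxhighlight lang='bash'>
queue_state                    # find your job's ID, e.g. 12345
scancel --signal=TERM 12345    # ask job 12345 to shut down cleanly
scancel --signal=KILL 12345    # force it to stop if SIGTERM isn't enough
</syntaxhighlight>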


=== Parallelization Tips ===


The nodes on Mox have 28 CPU cores and our nodes on Klone have 40. These can speed up your analysis ''significantly''. If you are using R functions such as <code>lapply</code>, there are parallelized equivalents (e.g. <code>mclapply</code>) which can take advantage of all the cores and give you a 2800% (or 4000%) boost! However, something to be aware of here is your code's memory requirement: if you are running 28 processes in parallel, your memory needs can also go up to 28x, which may be more than the ~200GB that the <code>big_machine</code> node on Mox has. In such cases, you may want to dial down the number of CPU cores being used. A way to do that globally in your code is to run a snippet like the following before calling any of the parallelized functions.
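
A minimal sketch of such a snippet (the core count is illustrative; pick a number that fits your memory budget):

<source lang="r">
library(parallel)

## cap the number of cores used by mclapply() and friends;
## 8 is just an example value
options(mc.cores = 8)
</source>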
 
If you find yourself doing this often, consider whether it is possible to reduce your memory usage via streaming, databases (like sqlite, parquet files, or duckdb), or lower-precision data types (e.g., 32-bit or even 16-bit floating point numbers instead of the standard 64-bit).




More information on parallelizing your R code can be found in the [https://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf <code>parallel</code> package documentation].
<!-- The hyak machines have 16 cpu cores.  The Mox machines will have 28! Running your program on all the cores can speed things up a lot! We make heavy use of R for building datasets and for fitting models. Like most programming languages, R uses only one cpu by default. However, for typical computation-heavy data science tasks it is pretty easy to make R use all the cores.
For fitting models, the R installed in Gentoo should use all cores automatically. This is thanks to OpenBlas, which is a numerical library that implements and parallelizes linear algebra routines like matrix factorization, matrix inversion, and other operations that bottleneck model fitting.
However, for building datasets, you need to do a little extra work. One common strategy is to break up the data into independent chunks (for example, when building wikia datasets there is one input file for each wiki) and then use <code>mcapply</code> from <code>library(parallel)</code> to build variables from each chunk. Here is an example:
    library(parallel)
    options(mc.cores=detectCores())  ## tell R to use all the cores
   
    mcaffinity(1:detectCores()) ## required and explained below
 
    library(data.table) ## for rbindlist, which concatenates a list of data.tables into a single data.table
   
    ## imagine defining a list of wikis to analyze
    ## and a function to build variables for each wiki
    source("wikilist_and_buildvars")
   
    dataset <- rbindlist(mclapply(wikilist,buildvars))
   
    mcaffinity(rep(1,detectCores())) ## return processor affinities to the status preferred by OpenBlas
A working example can be found in the [[Message Walls]] git repository.
<code>mcaffinity(1:detectCores())</code> is required for the gentoo R <code>library(parallel)</code> to use multiple cores. The reason is technical and has to do with OpenBlas. Essentially, OpenBlas changes settings that govern how R assigns processes to cores. OpenBlas wants all processes assigned to the same core, so that the other cores do not interfere with its fancy multicore linear algebra. However, when building datasets, the linear algebra is not typically the bottleneck. The bottleneck is instead operations like sorting and merging that OpenBlas does not parallelize.
The important thing to know is that if you want to use mclapply, you need to do <code>mcaffinity(1:detectCores())</code>. If you want to then fit models you should do <code>mcaffinity(rep(1,detectCores()))</code> so that OpenBlas can do its magic. -->


=== Using the Checkpoint Queue ===
Hyak has a special way of scheduling jobs using the '''checkpoint queue'''.  When you run jobs on the checkpoint queue, they run on someone else's hyak node that they aren't using right now.  This is awesome as it gives us a huge amount of free (as in beer) computing.  But using the checkpoint queue does take some effort, mainly because your jobs can get killed at any time if the owner of the node checks it out.  So if you want to run a job for more than a few minutes on the checkpoint queue, it will need to be able to "checkpoint" by saving its state periodically and then restarting.


This would be a pain to do manually; fortunately, we have <code>[http://dmtcp.sourceforge.net/FAQ.html dmtcp]</code>, which can automatically checkpoint and resume most programs.
Nate got dmtcp working for arbitrary scripts, and also with wikiq using parallel_sql.
dmtcp 3.0 is installed on Mox.
This will make more sense if you know that dmtcp works by starting a '''coordinator''' process which is responsible for pausing and saving the checkpointed process.  A [https://hpcc.usc.edu/support/documentation/checkpointing/ tutorial on dmtcp with slurm from USC] has a bash function for starting the coordinator called <code>start_dmtcp_coordinator</code>. Nate added this function to the shared .bashrc, so it should be available in your environment on Mox.
   
   
==== Starting a checkpoint queue job ====
     #SBATCH --partition=ckpt


You might have other stuff in your SBATCH script to request a certain number of cores or memory. Those will matter when we run <code>wikiq</code> below, but here they can be whatever they would be if you were running an <code>sbatch</code> job on one of our machines.  The next thing you need to do specifically for a <code>ckpt</code> job is to run <code>start_dmtcp_coordinator</code>.  This function takes care of making sure that we start a coordinator using the right set of ports and temporary files. We still need to pass in the '''interval''' at which we want checkpoints. The bigger this interval, the faster your job will run, but the more work will be lost when it's interrupted.

    start_dmtcp_coordinator -i 600  #checkpoint every 10 minutes

Next you need to run your job in a special way so that it is managed by <code>dmtcp</code> and restarted if it gets interrupted.

    # The restart script is created by dmtcp_launch after initialization
    if [ -x dmtcp_restart_script.sh ]; then
        bash dmtcp_restart_script.sh
    else
        # On first pass, run program under DMTCP
        dmtcp_launch --rm $your_script.sh # must run interpreter for scripts
    fi

This works because <code>dmtcp_restart_script.sh</code> is created when you launch your job using <code>dmtcp_launch</code>. If that script exists, your job should run it instead of launching the program from scratch.

There are options that you can pass to <code>dmtcp_launch</code> that can be important.  In particular, <code>--checkpoint-open-files</code> and <code>--allow-file-overwrite</code> modify how IO is checkpointed.

==== Running wikiq with dmtcp and parallel_sql ====

To run wikiq with parallel_sql, the following need to be arranged:

# A shell script for each dumpfile that makes a workspace for <code>dmtcp</code> to keep its data and restart script.
# The shell scripts loaded into <code>parallel_sql</code>.
# An <code>sbatch</code> script that gets a checkpoint node and starts running jobs from <code>parallel_sql</code>.
# A way to restart jobs that get interrupted using <code>parallel_sql</code>.

Nate made a python script that generates the shell scripts and makes a file listing all of them. Notice that each dumpfile gets a script, its own checkpoint directory, and a line in <code>wikiq_parallel_jobs.sh</code>.

<syntaxhighlight lang='python'>
#!/usr/bin/env python3
from os import path
import os
import stat
import glob

archives = glob.glob("/gscratch/comdata/raw_data/wikia_dumps/2010-04-mako/*.xml.7z")

scripts_dir = '/gscratch/comdata/users/nathante/wikiq_parallel_scripts'
output_dir = '/gscratch/comdata/users/nathante/wikiq_output'
checkpoint_dir = '/gscratch/comdata/users/nathante/wikiq_checkpoint'

if not path.isdir(scripts_dir):
    os.mkdir(scripts_dir)

if not path.isdir(output_dir):
    os.mkdir(output_dir)

script = """#!/bin/bash
mkdir -p {0}
cd {0}
start_dmtcp_coordinator -i 60  #checkpoint every 60 seconds

if [ -x dmtcp_restart_script.sh ]; then
    bash dmtcp_restart_script.sh
else
    # On first pass, run program under DMTCP
    dmtcp_launch --rm {1}
fi
"""

with open("wikiq_parallel_jobs.sh", 'w') as calls:
    for dumpfile in archives:
        wikiq_base_call = f"wikiq -u -o {output_dir} {dumpfile}"
        wikiq_call = wikiq_base_call
        wiki = path.split(dumpfile)[1]
        wikiq_script = script.format(path.join(checkpoint_dir, wiki), wikiq_call)

        script_file = path.join(scripts_dir, wiki + '.sh')
        with open(script_file, 'w') as of:
            of.write(wikiq_script)

        # make the generated shell script executable
        os.chmod(script_file, os.stat(script_file).st_mode | stat.S_IEXEC)

        calls.write(script_file)
        calls.write('\n')
</syntaxhighlight>


We also need an sbatch script as <code>parallel_sql_job.sh</code>.
<syntaxhighlight lang='bash'>
#!/bin/bash
## parallel_sql_job.sh
#SBATCH --job-name=wikiq_dmtcp
## Allocation Definition
#SBATCH --account=comdata-ckpt
#SBATCH --partition=ckpt
## Resources
## Nodes. This should always be 1 for parallel-sql.
#SBATCH --nodes=1
## Walltime (12 hours)
#SBATCH --time=12:00:00
## Memory per node
#SBATCH --mem=100G

module load parallel_sql

#Put here commands to load other modules (e.g. matlab etc.)
#Below command means that parallel_sql will get tasks from the database
#and run them on the node (in parallel). So a 16 core node will have
#16 tasks running at one time.
parallel-sql --sql -a parallel --exit-on-term
</syntaxhighlight>

Next, load the scripts into <code>parallel_sql</code>:

  module load parallel_sql
  cat wikiq_parallel_jobs.sh | psu --load

We can now fire up a whole bunch of checkpoint nodes. The limit is technically 2000!  But let's just ask for 10 nodes :)

  for job in $(seq 1 10); do sbatch parallel_sql_job.sh; done

If our jobs get interrupted, we'll need to run <code>psu --reset-slurm</code> to set them back into the '''avail''' state. We can run a little script on a login node to do this automatically every minute or so.

<syntaxhighlight lang='python'>
#!/usr/bin/env python3
## auto_reset_psu.py
import time
import subprocess

running = subprocess.run(["psu", "--show-running"], universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(running)
while hasattr(running, 'stdout') and len(running.stdout) > 0:
    subprocess.run(["psu", "--reset-slurm"])
    time.sleep(60)
    running = subprocess.run(["psu", "--show-running"], universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</syntaxhighlight>

That's it! Unleash the power of the checkpoint queue!  Reach out to Nate if you try this and have problems or if you have any questions!

== New Datasets ==

If you want to download a new dataset to Hyak, you should first check to ensure that there is enough space on the current allocation (e.g., with <code>cat /gscratch/comdata/usage_report.txt</code>). If there is not enough space in our allocation, contact [[Mako]] about getting our allocation increased. It should be fast and easy.

If there is enough space, you should download data to <code>/gscratch/comdata/raw_data/YOURNEWDATASET</code>.

Once you have finished downloading, you should set all the files you have downloaded as read only to prevent people from accidentally creating new files, overwriting data, etc. You can do that with the following commands:

<syntaxhighlight lang='bash'>
$ cd /gscratch/comdata/raw_data/YOURNEWDATASET
$ find . -not -type d -print0 |xargs -0 chmod 440
$ find . -type d -print0 |xargs -0 chmod 2550
</syntaxhighlight>

= Tips and FAQs =

== 5 productivity tips ==

# Find a workflow that works for you. There isn't a standardized workflow for quantitative / computational social science or social computing. People normally develop idiosyncratic workflows around the distinctive tools they know or have been exposed to and that meet their diverse needs and tastes. Be aware of how you're spending your time and effort and adopt tools in your workflow that make things easier or more efficient. For example, if you're spending a lot of time typing into the hyak command line, bash-completion and bash-history can help, and a pipeline (see below) might help even more.
# If you find yourself spending time manually rerunning code in a multistage project, learn [https://en.wikipedia.org/wiki/Make_(software) Make] or another pipeline tool.  Such tools take some effort but really help you organize, test, and refine your project.  Make is a good choice because it is old and incredibly polished and featureful. You don't need to learn every feature, just the basics. Its interface has a different flavor than more recently designed tools, which can be a downside.  Other positives are that it is language agnostic and can run shell commands.
# [https://slurm.schedmd.com/documentation.html Slurm], the system that you use to access hyak nodes, is also a very powerful system.  The hyak team used to maintain a tool called parallel-sql which helped with running a large number of short-running programs. This tool is no longer supported, but [https://slurm.schedmd.com/job_array.html job arrays] are a slurm feature that is even better.
# Use the free resources.  Job arrays (mentioned above) are great in combination with the [https://wiki.cac.washington.edu/display/hyakusers/Mox_checkpoint checkpoint queue]. The checkpoint (or ckpt) queue runs your jobs on other people's idle nodes.  You can access thousands of cores and terabytes of RAM on the checkpoint queue.  There are limitations. If the owner of a node wants to use it, they will cancel your job.  If this happens, the scheduler will automatically restart it, and it has a maximum total running time (restarts don't reset the clock). Therefore, it is best suited for jobs that can be paused (saved) and restarted.  If you can design a script to catch the checkpoint signal, save progress, and restart, you will be able to make excellent use of the checkpoint queue. Note that checkpoint jobs get run according to a priority system, and if members of our group overuse this resource then our jobs will have lower priority. <br /> There is also virtually [https://hyak.uw.edu/docs/storage/gscratch/ unlimited free storage] on hyak under <code>/gscratch/scrubbed/comdata</code>, with the catch that the storage is much slower and that files will be automatically deleted after a short time (currently 21 days).
# Get connected to the hyak team and other hyak users.  Hyak isn't perfect and has many recent issues related to the new Klone system. If you run into trouble and it feels like the system isn't working, you should email help@uw.edu with a subject line that starts with "hyak:". They are nice and helpful.  Other good resources are the [https://mailman12.u.washington.edu/mailman/listinfo/hyak-users mailing list] and, if you are a UW student, the [https://depts.washington.edu/uwrcc/getting-started-2/getting-started/ research computing club].  The club has its own nodes, including GPU nodes that only students who join the club can use.

== Common Troubles and How to Solve Them ==

=== Help! I'm over CPU quota and Hyak is angry! ===

'''Don't panic.''' Everyone has done this at least once. Mako has done it dozens of times. It is a little bit difficult to deal with but can be solved. You are not in trouble.

The usual reason for this to happen is that you've accidentally run something on a login node that ought to be run on a compute node. The solution is to find the badly behaved process and then use <code>kill</code> to kill it.

If it's a script or command on your command line, '''Ctrl-c''' to kill it. If you backgrounded it, type <code>fg</code> to foreground it and then '''Ctrl-c'''. But if you ran parallel, you'll need to kill parallel itself.

<code>ps -faux | grep <your username></code> will show you all the things you are running (or have someone else run it for you if the spam is so terrible you can't get a command to run). The first column has the usernames, the second column has the process IDs, and the last column has the things you're running.

[[File:faux.jpg]]

In the screenshot, the red is the user name being grepped for. At the end of the line, the last three entries are the time (in hyak time; type <code>date</code> if you want to compare hyak time to your time), then how much CPU time the process has consumed, then a little diagram of parent and child processes. You want parallel (in the example, 9977).

Killing the child process (in the example, 9992) won't likely help, because parallel will just go on to the next task you queued up for it. You will need to run something like <code>kill <process id></code>.

=== My R Job is getting Killed ===

First, make sure you're running on a compute node (like n2344) or else the int_machine, and don't use a <code>--time-min</code> flag: there seems to be a bug with <code>--time-min</code> where it evicts jobs incorrectly.

Second, see if you can narrow down where in your R code the problem is happening. Kaylea has seen it primarily when reading or writing files, and this tip is from that experience. Breaking the read or write into smaller chunks (if that makes sense for your project) might be all it takes.
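
If chunking sounds right for your project, here is a rough sketch of the idea in R (the file name, chunk size, and processing step are all placeholders to adapt):

<source lang="r">
## read a large CSV one chunk at a time instead of all at once
con <- file("big_file.csv", open = "r")    # placeholder file name
header <- readLines(con, n = 1)            # keep the header row separately
repeat {
    lines <- readLines(con, n = 100000)    # 100k data rows per chunk; tune as needed
    if (length(lines) == 0) break
    chunk <- read.csv(text = c(header, lines), stringsAsFactors = FALSE)
    ## ... process or append `chunk` here ...
}
close(con)
</source>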