CommunityData:Klone
'''Klone''' is the latest version of hyak, the UW supercomputing system. We will soon have a larger allocation of machines on Klone than on Mox. The Klone machines have 40 cores and either 384GB or 768GB of RAM.
== Setting up SSH ==
== New Container Setup ==
We will use multiple singularity containers for different applications, to avoid incidentally breaking existing versions of packages during upgrades. We want containers that include the "soft dependencies" that R or Python libraries might want.
We haven't built the containers yet, but let's start keeping track of dependencies that we'll need.

=== R ===
* Node.js
* graphviz

=== Python ===
* spark
To make singularity transparent to users, we will use simple bash executables to alias popular commands. The list of commands to alias includes:
* python
* python3
* R
* Rscript
* jupyter-console
* jupyter-notebook
=== Questions for the group ===
What executables do you want in the containers?
== To make a new container alias ==
For example, let's say you want to make a command to run <code>jupyter-console</code> for interactive python work, and let's say you know that you want to run this from the <code>cdsc_python.sif</code> container located in <code>/gscratch/comdata/containers/cdsc_python</code>.
1. Ensure that the software you want to execute is installed in the container. Test this by running <code>singularity exec /gscratch/comdata/containers/cdsc_python/cdsc_python.sif jupyter-console</code>.
2. Create an executable file in <code>/gscratch/comdata/containers/bin</code>. The file should look like:
<syntaxhighlight lang='bash'>
#!/usr/bin/env bash
# forward any arguments on to jupyter-console inside the container
singularity exec /gscratch/comdata/containers/cdsc_python/cdsc_python.sif jupyter-console "$@"
</syntaxhighlight>
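3. Make the file executable (a step implied by "executable file" above). Assuming you named the file <code>jupyter-console</code> and <code>/gscratch/comdata/containers/bin</code> is on your <code>PATH</code>:

<syntaxhighlight lang='bash'>
chmod +x /gscratch/comdata/containers/bin/jupyter-console
# the container is now transparent to users:
jupyter-console
</syntaxhighlight>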
== Setup ==
The recommended way to manage software for your research projects on Klone is to use [https://sylabs.io/docs/ Singularity containers]. You can build a singularity container using the linux distribution of your choice (e.g., debian, ubuntu, centos). The instructions on this page document how to build the <code>cdsc_base.sif</code> singularity package, which provides python, R, julia, and pyspark based on Debian 11 (Bullseye).
Copies of the definition file and a working container are located at <code>/gscratch/comdata/containers/cdsc_base/</code>.
=== Initial .Bashrc ===
Before we get started using our singularity package on klone, we need to start with a <code>.bashrc</code>.
<syntaxhighlight language='bash'>
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

source /gscratch/comdata/env/cdsc_klone_bashrc
# User specific environment
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]
then
    PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
export PATH
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
umask 007  # make new files group-readable/writable by default
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi
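
# bind host filesystems into the container so paths like /gscratch resolve inside it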
# export SINGULARITY_BIND="/gscratch:/gscratch,/mmfs1:/mmfs1,/xcatpost:/xcatpost,/gpfs:/gpfs,/sw:/sw,/usr:/kloneusr,/bin:/klonebinc"
export APPTAINER_BIND="/gscratch:/gscratch,/mmfs1:/mmfs1,/gpfs:/gpfs,/sw:/sw,/usr:/kloneusr,/bin:/klonebin"
source "/gscratch/comdata/users/nathante/spark_env.sh" | source "/gscratch/comdata/users/nathante/spark_env.sh" | ||
export _JAVA_OPTIONS="-Xmx362g" | export _JAVA_OPTIONS="-Xmx362g" | ||
export PATH="$PATH:~/.local/bin/" | |||
= | |||
</syntaxhighlight>
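With this in place, a quick sanity check is to make sure host paths are visible from inside a container (this assumes the <code>cdsc_base.sif</code> location given above):

<syntaxhighlight language='bash'>
# if the bind mounts from APPTAINER_BIND work, /gscratch is visible inside the container
singularity exec /gscratch/comdata/containers/cdsc_base/cdsc_base.sif ls /gscratch/comdata
</syntaxhighlight>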
== Installing singularity on your local computer ==
You might find it more convenient to develop your singularity container on your local machine. You'll want singularity version 3.4.2, which is the version installed on klone. Follow [https://sylabs.io/guides/3.5/admin-guide/installation.html these instructions] for installing singularity on your local linux machine.
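After installing, you can check that the version matches klone's:

<syntaxhighlight language='bash'>
# should print 3.4.2 to match the version on klone
singularity --version
</syntaxhighlight>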
== Creating a singularity container ==
Our goal is to write a singularity definition file that will install the software that we want to work with. The definition file contains instructions for building a more reproducible environment. For example, the file <code>cdsc_base.def</code> contains instructions for installing an environment based on debian 11 (bullseye). Once we have the definition file, we just have to run:
'''NOTE:''' For some reason building a container doesn't work on the <code>/gscratch</code> filesystem. Instead build containers on the <code>/mmfs1</code> filesystem and then copy them to their eventual homes on <code>/gscratch</code>.
<syntaxhighlight language='bash'>
singularity build --fakeroot cdsc_base.sif cdsc_base.def
</syntaxhighlight>
Run this on a klone compute node to create the singularity container <code>cdsc_base.sif</code>. This can take quite a while to run, as it downloads and installs a lot of software!
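For reference, a singularity definition file has this general shape (a minimal sketch for illustration, not the actual contents of <code>cdsc_base.def</code>):

<syntaxhighlight language='bash'>
Bootstrap: docker
From: debian:bullseye

%post
    # commands run inside the container at build time
    apt-get update
    apt-get install -y python3 r-base

%environment
    export LC_ALL=C
</syntaxhighlight>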
You can start a shell in the container using:
<syntaxhighlight language='bash'>
singularity shell cdsc_base.sif
</syntaxhighlight>
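You can run a single command inside the container using: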
<syntaxhighlight language='bash'>
singularity exec cdsc_base.sif echo "my command"
</syntaxhighlight>
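If you need to modify the container, you can build a writable sandbox directory from the image: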
<syntaxhighlight language='bash'>
singularity build --sandbox cdsc_base_sandbox cdsc_base.sif
</syntaxhighlight>
You might run into trouble with exceeding space in your temporary file path. If you do, run
<syntaxhighlight language='bash'>
# export is a shell builtin, so set these in your own shell (not via sudo)
export SINGULARITY_TMPDIR=/my/large/tmp
export SINGULARITY_CACHEDIR=/my/large/apt_cache
export SINGULARITY_LOCALCACHEDIR=/my/large/apt_cache
</syntaxhighlight>
before running the build.
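After modifying the sandbox (for example via <code>singularity shell --writable cdsc_base_sandbox</code>), you can rebuild an image from it. A sketch:

<syntaxhighlight language='bash'>
# convert the modified sandbox directory back into a compressed .sif image
singularity build cdsc_base.sif cdsc_base_sandbox
</syntaxhighlight>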
== Spark ==
To set up a spark cluster using singularity, the first step is to "run" the container on each node in the cluster:
<syntaxhighlight lang='bash'>
# on the first node
singularity instance start --fakeroot cdsc_base.sif spark-boss
export SPARK_BOSS=$(hostname)

# on the first worker node (typically same as boss node)
singularity instance start --fakeroot cdsc_base.sif spark-worker-1

# second worker node
singularity instance start --fakeroot cdsc_base.sif spark-worker-2
</syntaxhighlight>
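Then start the spark master and worker processes inside the running instances: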
<syntaxhighlight lang='bash'>
singularity exec instance://spark-boss /opt/spark/sbin/start-master.sh
singularity exec instance://spark-worker-1 /opt/spark/sbin/start-worker.sh spark://$SPARK_BOSS:7077
</syntaxhighlight>
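You can then connect to the cluster by starting a spark shell in the boss instance: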
<syntaxhighlight lang='bash'>
# replace n3078 with the master hostname
singularity exec instance://spark-boss /opt/spark/bin/spark-shell --master spark://n3078.hyak.local:7077
</syntaxhighlight>
Nate's working on wrapping the above nonsense in friendlier scripts.