CommunityData:Hyak walkthrough
== Project-specific steps (done for each project) ==
# Create a new project in your batch_jobs directory:
#: <code>mkdir ~/batch_jobs/wikiq_test</code>
#: <code>cd ~/batch_jobs/wikiq_test</code>
# Create a symlink to the data you will use as input (in this case, the 2010 Wikia dump):
#: <code>ln -s /com/raw_data/wikia_dumps/2010-04-mako ./input</code>
# Create an output directory:
#: <code>mkdir ./output</code>
# To test that everything is working and everything is where it should be, run wikiq on one file:
#: <code>python3 /com/local/bin/wikiq ./input/012thfurryarmybrigade.xml.7z -o ./output</code>
#* This should print some output in the terminal and create a file at ~/batch_jobs/wikiq_test/output/012thfurryarmybrigade.tsv. Examine this file to make sure it looks as expected.
# When you're done, remove the test output:
#: <code>rm ./output/*</code>
# Now we'll use that command as a template for creating a task_list: a file with one line for each command we want our job to run. Here we use find to list all of the wiki files and pipe the list to xargs; xargs takes each file name and uses echo to insert it into the command, and each resulting line is written to the task_list file.
#: <code>find ./input/ -mindepth 1 | xargs -I {} echo "python3 /com/local/bin/wikiq {} -o ./output" > task_list</code>
#* This creates a file named task_list. Note: this will take a while (approximately one minute).
# Make sure it is as large as expected (it should have 76471 lines):
#: <code>wc -l task_list</code>
#* You can also inspect it visually to make sure it looks like it should:
#: <code>less task_list</code>
# Copy [[CommunityData:Hyak example job script|this job_script]] to your wikiq_test directory:
#: <code>vi ~/batch_jobs/wikiq_test/job_script</code>
# Edit the job_script. https://sig.washington.edu/itsigs/Hyak_parallel-sql has a good example script, with explanations of what each piece does.
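To see the shape of what the find | xargs pipeline produces, here is a self-contained sketch on throwaway files. The /tmp directory and file names are invented for illustration; the real input lives under ./input/ in your project directory.

```shell
# Sketch: reproduce the task_list construction on throwaway files.
# Directory and file names here are invented for illustration only.
mkdir -p /tmp/wikiq_demo/input
touch /tmp/wikiq_demo/input/wiki_a.xml.7z /tmp/wikiq_demo/input/wiki_b.xml.7z
cd /tmp/wikiq_demo

# Same shape as the real command: find lists every file under input/,
# xargs splices each name into a wikiq invocation via echo, and the
# resulting lines land in task_list.
find ./input/ -mindepth 1 | xargs -I {} echo "python3 /com/local/bin/wikiq {} -o ./output" > task_list

cat task_list
# Each line is one complete shell command, e.g.:
#   python3 /com/local/bin/wikiq ./input/wiki_a.xml.7z -o ./output
```

Because each line of task_list is an independent command, Parallel SQL can hand them out to worker nodes in any order.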
#: For our project, you should just change where it says USERNAME to your user name.
#* You can do this with vim, or you can just run the following:
#: <code>sed -i -e 's/USERNAME/<Your User Name>/' job_script</code>
#* The other part of this file you will often have to change is the walltime: how long you want the node assigned to your job. For long jobs, you will need to increase this parameter.
# Load 100 tasks into Parallel SQL as a test. You want to make sure everything works end-to-end before trying it on the whole set of files.
#: <code>module load parallel_sql</code>
#: <code>cat task_list | head -n 100 | psu --load</code>
#* Check that they loaded correctly (they should show up as 100 available tasks):
#: <code>psu --stats</code>
# Check whether there are available nodes:
#: <code>showq -w group=hyak-mako</code>
#* We currently have 8 nodes, so subtract the number of active jobs from 8; that is the number of available nodes.
# Run the jobs on the available nodes, replacing N with the number of available nodes:
#: <code>for job in $(seq 1 N); do qsub job_script; done</code>
# Make sure things are working correctly:
#: <code>watch showq -w group=hyak-mako</code>
#* This lets you watch to make sure your jobs are assigned to nodes correctly. Once they are assigned, Ctrl+c gets you out of watch, and you can watch the task list in Parallel SQL:
#: <code>watch psu --stats</code>
#* This lets you watch the task list. You should see tasks move from available to completed. When they are all completed, run:
#: <code>ls ./output | wc -l</code>
#* This checks that all 100 files were written to the output folder. You probably also want to look at a few files to make sure they look as expected.
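If you want a sense of what the linked job_script contains before opening it, its overall shape is roughly the following. This is a hedged sketch, not the actual script: the specific #PBS directives, resource values, and parallel-sql invocation are assumptions based on the Hyak parallel_sql page referenced above, so treat the linked example script as the source of truth.

```shell
#!/bin/bash
## Hypothetical sketch of a Hyak PBS job script; the real one is linked above.
#PBS -N wikiq_test                           # job name (assumed)
#PBS -l nodes=1:ppn=16                       # request one whole node (assumed)
#PBS -l walltime=04:00:00                    # increase this for long jobs
#PBS -W group_list=hyak-mako                 # run on the group's nodes (assumed)
#PBS -d /path/to/your/batch_jobs/wikiq_test  # working directory placeholder

# Each submitted copy of this script becomes one worker: it pulls
# commands from the Parallel SQL queue until the queue is empty.
module load parallel_sql
parallel-sql --sql -a parallel --exit-on-term
```

This is why the loop above submits the script once per available node: every copy drains tasks from the same shared queue, so more copies mean more parallelism without changing task_list.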
#* If everything looks good, remove the output files:
#: <code>rm ./output/*</code>
#* and clean up the Parallel SQL database:
#: <code>psu --del</code>
# Finally, run the jobs over the full set of files:
#: <code>cat task_list | psu --load</code>
#: <code>psu --stats</code> (should show all 76471 tasks)
#: <code>showq -w group=hyak-mako</code> (find out how many nodes are available)
#: <code>for job in $(seq 1 N); do qsub job_script; done</code> (replace N with the number of available nodes)
#* Keep an eye on the tasks with
#: <code>watch showq -w group=hyak-mako</code>
#: and
#: <code>watch psu --stats</code>
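Once the full run finishes, one quick sanity check is that the number of output files matches the number of tasks. A small self-contained sketch of that comparison, with all paths and file contents invented for illustration:

```shell
# Sketch: confirm a run is complete by comparing the number of task_list
# lines to the number of files in the output directory.
# Paths and contents are invented for illustration only.
mkdir -p /tmp/wikiq_check/output
printf 'cmd one\ncmd two\ncmd three\n' > /tmp/wikiq_check/task_list
touch /tmp/wikiq_check/output/a.tsv /tmp/wikiq_check/output/b.tsv /tmp/wikiq_check/output/c.tsv
cd /tmp/wikiq_check

tasks=$(wc -l < task_list)
outputs=$(ls ./output | wc -l)
if [ "$tasks" -eq "$outputs" ]; then
  echo "complete: $outputs of $tasks outputs present"
else
  echo "missing: $((tasks - outputs)) outputs"
fi
```

A count match does not prove the outputs are well formed, so you still want to spot-check a few .tsv files by hand, as the steps above suggest.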