CommunityData:Hyak
=== My R Job is getting Killed ===
First, make sure you're running on a compute node (like n2344), or else via the int_machine alias.
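A quick way to check which node you're on is to print the hostname (the node names here are examples only; your cluster's naming may differ):

```shell
# Print the current host. Compute nodes on Hyak have names like n2344;
# if you see a login-node name instead, grab a compute node first.
hostname
```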
Second, see if you can narrow down where in your R code the problem is happening. Kaylea has seen it primarily when reading or writing files, and this tip comes from that experience. Breaking the read or write into smaller chunks (if that makes sense for your project) might be all it takes.
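As one way to break a big read into smaller chunks, here is a minimal base-R sketch; the function name `read_csv_chunked`, the chunk size, and the file layout are illustrative assumptions (it also assumes no embedded newlines inside quoted fields):

```r
# Hypothetical helper: read a large CSV a batch of lines at a time,
# instead of one giant read.csv() call that can look "frozen" for ages.
read_csv_chunked <- function(path, chunk_size = 100000L) {
  con <- file(path, open = "r")
  on.exit(close(con))
  # Read the header line once and strip any surrounding quotes.
  header <- readLines(con, n = 1)
  col_names <- gsub('"', "", strsplit(header, ",")[[1]])
  chunks <- list()
  repeat {
    lines <- readLines(con, n = chunk_size)
    if (length(lines) == 0) break
    # Parse this batch of lines as its own small data frame.
    chunks[[length(chunks) + 1]] <-
      read.csv(text = lines, header = FALSE, col.names = col_names)
  }
  do.call(rbind, chunks)
}
```

The same idea works for writes: loop over row ranges and append with `write.table(..., append = TRUE)` rather than writing everything in one call.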
Third, if you're getting something like a slurm_stepd time limit violation, try adjusting the --time-min setting you use with your job. HPC jobs are typically broken into steps, and --time-min is how long the scheduler should leave your job alone before insisting on some kind of progress report from the job steps. Unfortunately, it seems like when slurm talks to bash, the answer from bash is sometimes something like, 'golly, I think I'm wedged, please kill my process': if you're silently loading a bunch of files or saving something out, R may not be giving bash much feedback, so the job looks frozen when it isn't, and the scheduler will be more than happy to unceremoniously kill it. The solution Kaylea found is to specify a more forgiving --time-min. The default in the any_machine alias is 00:15:00, but try 00:30:00 or 00:45:00.
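Concretely, the adjustment can be made right on the srun line. This is a sketch only, not the actual any_machine definition: the partition, account, memory, and time values below are placeholders to adapt for your own allocation.

```shell
# Hypothetical interactive allocation with a more forgiving --time-min.
# -p (partition), -A (account), --mem, and --time are placeholder values.
srun -p compute -A comdata --time=04:00:00 --time-min=00:30:00 \
    --mem=32G --pty /bin/bash
```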