keller_and_evans_lab:cu_research_computing (revised 2016/12/12 11:55 by matthew_keller; previous revision 2016/04/07 15:24 by lessem)
You will have to manually create a directory to put their stuff in. (You can also just make a big mess with files all over and annoy other users.) lustre and rc_scratch are network filesystems.
======= Slurm =======
+ | |||
+ | |||
+ | |||
====== Queues ======

  #if you want to run on the ibg himem node, you need to load the right module
  module load slurm/

  #then in your shell script
  #SBATCH --qos=blanca-ibg

  #If you want to run on normal queues, then:
  module load slurm/slurm

  #then in your shell script, one of the below, depending on what queue you want
  #SBATCH --qos=himem
  #SBATCH --qos=crestone
  #SBATCH --qos=janus

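Putting the pieces together, a minimal batch script for the blanca-ibg queue might look like the sketch below. The job name, resource requests, and script body are hypothetical placeholders, not site-mandated values; adjust them to your actual job.

```shell
#!/bin/bash
# Hypothetical example job script; only the qos comes from this page.
#SBATCH --qos=blanca-ibg      # queue selected as described above
#SBATCH --job-name=myjob      # placeholder job name
#SBATCH --time=01:00:00       # placeholder walltime request
#SBATCH --ntasks=1            # placeholder task count
#SBATCH --mem=4G              # placeholder memory request

# your actual commands go here
echo "running on $(hostname)"
```

Submit it with `sbatch` after loading the matching slurm module, as shown above.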
  #To check our balance on our allocations and get the account id#
  sbank balance statement
  sacctmgr -p show user <username>

  #To see how busy the nodes are. For seeing how many janus nodes are available, look for the
  #number under NODES where STATE is "idle"
  sinfo -l

  #checking on submissions for a user
  squeue -u <username>

  #detailed information on a queue (who is running on it, how many cpus requested, memory requested, time information)
  squeue -q blanca-ibg -o %u,

  #current status of queues
  qstat -i      #To see jobs that are currently pending (this is helpful for seeing if a queue is overbooked)
  qstat -r      #To see jobs that are currently running
  qstat -a      #To see jobs that are running OR are queued
  qstat -a -n   #To see all jobs, including which nodes they are running on
  qstat -r -n   #To see running jobs, and which nodes they are running on

  #other commands
  showq-slurm -o -U -q <partition>
  scontrol show jobid -dd <job_id>
  pbsnodes -a   #To look at the status of each node

### Once job has completed, you can get additional information
  sacct -u <username>

  #To check graphically how much storage is being taken up in /
  xdiskusage /
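`sacct` can also report per-job accounting columns for completed jobs. A sketch; the `--format` fields below are standard Slurm sacct field names, not something taken from this page:

```shell
# completed-job accounting for a user, with selected columns:
# job ID, job name, final state, wall time used, and peak memory
sacct -u <username> --format=JobID,JobName,State,Elapsed,MaxRSS
```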
====== Running and Controlling jobs ======

  sbatch <script_name>
  sinteractive --nodelist=bnode0102   #run interactive job on node "bnode0102"
  scancel <job_id>
  scancel -u <username>
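A common pattern ties these commands together: submit a job, capture its ID, then monitor or cancel it by ID. A sketch; `--parsable` is a standard sbatch flag, and `myjob.sh` is a placeholder script name:

```shell
# submit and capture just the numeric job ID
# (--parsable suppresses the "Submitted batch job" prefix)
jobid=$(sbatch --parsable myjob.sh)

# check its queue status, then cancel it if needed
squeue -j "$jobid"
scancel "$jobid"
```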
This only needs to be done once.
Then launch your interactive job on the IBG himem node.
  module load slurm/

Or onto any free himem node:

  module load slurm/
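As a concrete sequence: the module name after `slurm/` is truncated in the original page, so `slurm/blanca` below is an assumption; the `blanca-ibg` qos and the `sinteractive` command are both from this page.

```shell
# assumed module name; the original text truncates after "slurm/"
module load slurm/blanca

# request an interactive shell on the IBG himem node
sinteractive --qos=blanca-ibg
```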
  tabix -h chr${chr}/
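The `tabix` call above is typically run once per chromosome. A minimal bash loop sketch; the file layout after `chr${chr}/` is truncated in the original, so the filenames and region below are hypothetical, and `echo` prints each command instead of running it (drop the `echo` to execute):

```shell
# dry-run: print one tabix command per autosome
# -h keeps the VCF header; region syntax is chrom:start-end
for chr in {1..22}; do
    echo tabix -h "chr${chr}/chr${chr}.vcf.gz" "${chr}:1-1000000"
done
```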
======= Compiling software =======

RC intentionally keeps some header files off the login nodes to dissuade people from trying to compile on those nodes. Instead, use the janus-compile nodes to compile your software. Log in to a login node and then run
  ssh janus-compile[1-4]
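For example, to build a program on one of the four compile nodes. The compiler module name and source file below are hypothetical; check `module avail` on the cluster for what is actually installed:

```shell
# pick any one of janus-compile1 through janus-compile4
ssh janus-compile1

# load a compiler toolchain (module name is an assumption)
module load gcc

# compile as usual
gcc -O2 -o myprog myprog.c
```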