These settings should work from Mac and Linux. I'm not sure how to do the equivalent from Windows with PuTTY. On a Mac, those settings will cause X11 to start. If you don't want that to happen, then remove the ''-Y'' flag.

For those with access to Summit, here are the steps to using it:

  #From a login node:
  ssh -YC mmkeller@shas0711

  #In your shell script:
  #No need to include -A UCB00000442

  #To run R:
  ml load R
  ml load gcc
  R
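Putting the Summit steps together, a minimal job script for an R run might look like the following sketch. The job name, walltime, and R script name are placeholders, not lab conventions:

```shell
#!/bin/bash
#SBATCH --job-name=r_example   # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00        # placeholder walltime
# note: no -A UCB00000442 line is needed on Summit

ml load R
ml load gcc
Rscript my_analysis.R          # hypothetical R script name
```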
======= Slurm =======


====== Queues ======

  #if you want to run on the IBG himem queue, you need to load the right module
  module load slurm/blanca

  #then in your shell script:
  #SBATCH --qos=blanca-ibg

  #If you want to run on the normal queues, then:
  module load slurm/slurm

  #then in your shell script, use one of the below, depending on which queue you want:
  #SBATCH --qos=himem
  #SBATCH --qos=crestone
  #SBATCH --qos=janus
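For example, a job script header targeting the IBG himem queue could look like this sketch (the job name, walltime, and memory values are made-up placeholders):

```shell
#!/bin/bash
#SBATCH --qos=blanca-ibg      # the IBG himem queue
#SBATCH --job-name=himem_job  # placeholder
#SBATCH --time=04:00:00       # placeholder
#SBATCH --mem=64gb            # placeholder

# remember to run "module load slurm/blanca" in your shell first,
# or sbatch will not see the blanca queues
echo "running on $(hostname)"
```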
  #To check our balance on our allocations and get the account id#
  sbank balance statement
  sacctmgr -p show user <username>

  #To see how busy the nodes are. For seeing how many janus nodes are available, look for the
  #number under NODES where STATE is "idle"
  sinfo -l
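To get the total idle-node count directly instead of scanning the table by eye, sinfo's ''-o'' format output can be summed with awk. This is a sketch; printf stands in for real sinfo output here so the summing step can be shown on its own:

```shell
# On the cluster the pipeline would be:
#   sinfo -h -t idle -o "%D" | awk '{s += $1} END {print s + 0}'
# Sample stand-in output: two partitions with 12 and 3 idle nodes.
printf '12\n3\n' | awk '{s += $1} END {print s + 0}'   # prints 15
```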

  #checking on submissions for a user
  squeue -u <username>

  #detailed information on a queue (who is running on it, how many cpus requested, memory requested, time information, etc.)
  squeue -q blanca-ibg -o %u,
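The ''%u'' code above prints only the user name; squeue's ''-o'' option accepts many more percent codes, e.g. %C (cpus), %m (min memory), %l (time limit), %M (time used), and %t (state). A filled-in version might look like this (the field selection is an assumption, not a lab standard):

```shell
# hypothetical format string covering the fields mentioned in the comment above
squeue -q blanca-ibg -o "%u,%C,%m,%l,%M,%t"
```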

  #current status of queues
  qstat -i     #To see jobs that are currently pending (this is helpful for seeing if a queue is overbooked)
  qstat -r     #To see jobs that are currently running
  qstat -a     #To see jobs that are running OR are queued
  qstat -a -n  #To see all jobs, including which nodes they are running on
  qstat -r -n  #To see running jobs, and which nodes they are running on

  #other commands
  showq-slurm -o -U -q <queue>
  scontrol show jobid -dd <jobid>
  pbsnodes -a  #To look at the status of each node
  ### Once job has completed, you can get additional information
  sacct -u <username>

  #To check graphically how much storage is being taken up
  xdiskusage /
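xdiskusage needs a working X11 connection; when one is not available, plain du reports the same totals in the terminal. A quick sketch (the directory is a placeholder):

```shell
# total size of the current directory, human-readable
du -sh . | awk '{print $1}'
```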
====== Running and Controlling jobs ======

  sbatch <script_name>               #submit a batch job script
  sinteractive --nodelist=bnode0102  #run an interactive job on node "bnode0102"
  scancel <jobid>                    #cancel a job
  scancel -u <username>              #cancel all jobs for a user
This only needs to be done once.
Then launch your interactive job on the IBG himem node.

  module load slurm/blanca

Or onto any free himem node:

  module load slurm/slurm
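A sketch of the full interactive sequence, assuming sinteractive accepts the usual salloc-style options (the --qos and --time values here are guesses based on the queue names above, not confirmed lab settings):

```shell
# IBG himem node (blanca queue):
module load slurm/blanca
sinteractive --qos=blanca-ibg --time=01:00:00

# or any free himem node on the normal queues:
module load slurm/slurm
sinteractive --qos=himem --time=01:00:00
```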