From ccdcb43e87e69d31da0093fb04b1a79ba8555c4a Mon Sep 17 00:00:00 2001
From: Camden Duy
Date: Thu, 12 Sep 2024 16:24:49 -0400
Subject: [PATCH] update partition table for slurm workshop

---
 content/notes/slurm-from-cli/section1.md | 9 ++++-----
 content/notes/slurm-from-cli/section4.md | 2 +-
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/content/notes/slurm-from-cli/section1.md b/content/notes/slurm-from-cli/section1.md
index d125654f..48973807 100644
--- a/content/notes/slurm-from-cli/section1.md
+++ b/content/notes/slurm-from-cli/section1.md
@@ -49,11 +49,10 @@ SLURM refers to queues as __partitions__ . We do not have a default partition;
 {{< table >}}
 | Queue Name | Purpose | Job Time Limit | Max Memory / Node / Job | Max Cores / Node |
 | :-: | :-: | :-: | :-: | :-: |
-| standard | For jobs on a single compute node | 7 days | 375 GB | 37 |
-| gpu | For jobs that can use general purpose GPU’s <br> (A40,A100,A6000,V100,RTX3090) | 3 days | 1953 GB | 125 |
-| parallel | For large parallel jobs on up to 50 nodes (<= 1500 CPU cores) | 3 days | 375 GB | 40 |
-| largemem | For memory intensive jobs | 4 days | 768 GB <br> 1 TB | 45 |
-| interactive | For quick interactive sessions (up to two RTX2080 GPUs) | 12 hours | 216 GB | 37 |
+| standard | For jobs on a single compute node | 7 days | 375 GB | 96 |
+| gpu | For jobs that can use general purpose GPU’s <br> (A40,A100,A6000,V100,RTX3090) | 3 days | 1953 GB | 128 |
+| parallel | For large parallel jobs on up to 50 nodes (<= 1500 CPU cores) | 3 days | 375 GB | 96 |
+| interactive | For quick interactive sessions (up to two RTX2080 GPUs) | 12 hours | 216 GB | 96 |
 {{< /table >}}
 
 To see an online list of available partitions, from a command line type
diff --git a/content/notes/slurm-from-cli/section4.md b/content/notes/slurm-from-cli/section4.md
index 255904d3..6dae179c 100644
--- a/content/notes/slurm-from-cli/section4.md
+++ b/content/notes/slurm-from-cli/section4.md
@@ -131,7 +131,7 @@ $sbatch --array=0-30
 ```
 In your Slurm script you would use a command such as
 ```bash
-python myscript.py myinput.${SLURM_ARRAY_TASK_ID}.in}
+python myscript.py myinput.${SLURM_ARRAY_TASK_ID}.in
 ```
 The script should be prepared to request resources for _one_ instance of your program.
 
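A minimal sketch of the kind of array-job script the section4.md text describes, for reference only (not part of the patch). The `--array=0-30` range and the `myscript.py` / `myinput.${SLURM_ARRAY_TASK_ID}.in` pattern come from the patched text; the job name, partition, memory, and wall-time values are illustrative assumptions, and the resource requests apply to one array task:

```bash
#!/bin/bash
# Illustrative Slurm array-job sketch; values marked "assumed" are placeholders.
#SBATCH --job-name=myarray        # assumed job name
#SBATCH --partition=standard      # single-node partition from the table above
#SBATCH --array=0-30              # 31 tasks, indices 0 through 30
#SBATCH --ntasks=1                # resources are requested for ONE instance
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G                  # assumed memory per task
#SBATCH --time=01:00:00           # assumed wall time per task

# Each array task picks up the input file matching its index.
python myscript.py myinput.${SLURM_ARRAY_TASK_ID}.in
```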