
Converting your qsub script for sbatch using kstat.convert

If you already have a qsub script, I have created a Perl program called kstat.convert that will automatically convert your qsub script to an sbatch script.

kstat.convert --sge qsub_script.sh --slurm slurm_script.sh
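
Once converted, the new script is submitted with sbatch rather than qsub. A minimal follow-up, assuming the generated file needs no hand edits (the file names match the command above):

cat slurm_script.sh      # review the generated #SBATCH options
sbatch slurm_script.sh   # submit the job to Slurm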

Below is an example of a simple qsub script and the resulting sbatch script after conversion.

The original qsub script:

#!/bin/bash
#$ -j y
#$ -cwd
#$ -N netpipe
#$ -P KSU-CIS-HPC

#$ -l mem=4G
#$ -l h_rt=100:00:00
#$ -pe single 32

#$ -M eid@ksu.edu
#$ -m ab

mpirun -np $NSLOTS NPmpi -o np.out

The resulting sbatch script:

#!/bin/bash -l
#SBATCH --job-name=netpipe

#SBATCH --mem-per-cpu=4G   # Memory per core, use --mem= for memory per node
#SBATCH --time=4-04:00:00   # 100 hours, in days-hours:minutes:seconds form
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32

#SBATCH --mail-user=eid@ksu.edu
#SBATCH --mail-type=ALL   # ALL covers BEGIN, END, FAIL, and more

mpirun -np $SLURM_NPROCS NPmpi -o np.out

The sbatch file uses #SBATCH to identify scheduler options where the qsub file uses #$. Most options are equivalent and simply use a different syntax. Memory can still be requested per core as with SGE, or you can use --mem=128G to request the total memory per node if you prefer. The --nodes= and --ntasks-per-node= options provide an easy way to request the core configuration you want. If your code can be distributed across multiple nodes and you don't care about the arrangement, you can instead specify just the total number of cores with --ntasks=, as in the sketch below.
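
As an illustration only (this is not output of kstat.convert), the same job could drop the fixed node layout and ask for 32 cores wherever Slurm can place them:

#!/bin/bash -l
#SBATCH --job-name=netpipe
#SBATCH --mem-per-cpu=4G    # per-core memory; --mem= would request per-node memory instead
#SBATCH --time=4-04:00:00
#SBATCH --ntasks=32         # 32 cores total, on however many nodes Slurm chooses
#SBATCH --mail-user=eid@ksu.edu
#SBATCH --mail-type=ALL

mpirun -np $SLURM_NPROCS NPmpi -o np.out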

For more in-depth documentation on converting from SGE to Slurm, see:

https://srcc.stanford.edu/sge-slurm-conversion
https://slurm.schedmd.com/sbatch.html