Module Availability
For a complete list of all installed modules, check our modules website.
We try to keep the same software available across all node classes, but sometimes it is impossible. Please check the modules website if you have any questions about software availability on each node class.
Toolchains
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.
We provide a good number of toolchains, and multiple versions of each, to make sure your applications will compile and/or run correctly.
These toolchains include the following (you can list them by running 'module keyword toolchain'):
- foss: GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
- fosscuda: GNU Compiler Collection (GCC) based compiler toolchain based on foss, with CUDA support.
- gmvapich2: GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. DEPRECATED
- gompi: GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.
- goolfc: GCC based compiler toolchain with CUDA support, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. DEPRECATED
- iomkl: Intel Cluster Toolchain Compiler Edition, providing the Intel C/C++ and Fortran compilers, Intel MKL, and OpenMPI.
You can run 'module spider $toolchain/' to see the versions we have:
$ module spider iomkl/
- iomkl/2017a
- iomkl/2017b
- iomkl/2017beocatb
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list' command:
$ module list

Currently Loaded Modules:
  1) icc/2017.4.196-GCC-6.4.0-2.28
  2) binutils/2.28-GCCcore-6.4.0
  3) ifort/2017.4.196-GCC-6.4.0-2.28
  4) iccifort/2017.4.196-GCC-6.4.0-2.28
  5) GCCcore/6.4.0
  6) numactl/2.0.11-GCCcore-6.4.0
  7) hwloc/1.11.7-GCCcore-6.4.0
  8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28
  9) iompi/2017b
 10) imkl/2017.3.196-iompi-2017b
 11) iomkl/2017b
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which in turn depend on GCCcore and GCC. Hence it is very important that the correct versions of all related software are loaded.
For software we provide, the toolchain used to compile it is always specified in the "version" of the module that you want to load. If you mix toolchains, you may run into inconsistencies such as library conflicts or subtle runtime errors.
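For example, the toolchain appears as a suffix in the module version string. The module name and versions below are hypothetical, chosen just to illustrate the naming scheme; check the modules website for what is actually installed:

# The "-gompi-2017b" suffix indicates this (hypothetical) FFTW build was compiled with the gompi/2017b toolchain
module load FFTW/3.3.6-gompi-2017b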
Most Commonly Used Software
Check our modules website for the most up to date software availability.
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.
OpenMPI
We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'.
The first step to running an MPI application is to load one of the OpenMPI modules. You will normally just need to load the default version, as below. If your code needs access to nVidia GPUs, you will need a CUDA-enabled version (see the fosscuda toolchain above). Some codes are also picky about which versions of the underlying GNU or Intel compilers they require.
module load OpenMPI
If you are working with your own MPI code, you will need to start by compiling it. OpenMPI provides mpicc for compiling codes written in C, mpic++ for compiling C++ code, and mpifort for compiling Fortran code. You can get a complete listing of parameters by running each with the --help parameter. Below are some examples of compiling with each.
mpicc --help
mpicc -o my_code.x my_code.c
mpic++ -o my_code.x my_code.cc
mpifort -o my_code.x my_code.f
In each case above, you can name the executable file whatever you want (I chose my_code.x). It is common to use different optimization levels, for example, but those may depend on the version of OpenMPI you choose. Some are based on the Intel compilers, so you would need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you would use compiler optimizations for gcc or gfortran.
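As a sketch, an optimized build might look like the following; -O3 is accepted by both the GNU and Intel compilers, but the best flags for your particular code are an assumption you should verify:

# Compile with a common optimization level (works for both GNU and Intel wrappers)
mpicc -O3 -o my_code.x my_code.c
mpifort -O3 -o my_code.x my_code.f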
We have many MPI codes in our modules that you simply need to load before using. Below is an example of loading and running Gromacs, an MPI-based code that simulates large numbers of atoms classically.
module load GROMACS
This loads the Gromacs module and sets all the paths so you can run the scalar version gmx or the MPI version gmx_mpi. Below is a sample job script for running a complete Gromacs simulation.
#!/bin/bash -l
#SBATCH --mem=120G
#SBATCH --time=24:00:00
#SBATCH --job-name=gromacs
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4

module purge
module load GROMACS

echo "Running Gromacs on $HOSTNAME"
export OMP_NUM_THREADS=1
time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0
echo "Finished run on $SLURM_NTASKS $HOSTNAME cores"
mpirun will run your job on all of the cores requested, which in this case is 4 cores on a single node. You will often need to guess at the memory size for your code at first, then check the actual memory usage with kstat --me and adjust the memory request in future jobs.
I prefer to put a module purge in my scripts and then manually load the modules needed, to ensure each run is using the modules it needs. If you don't do this, your job will simply use the modules you currently have loaded when you submit it, which is fine too.
I also like to put a time command in front of each part of the script that can use significant amounts of time. This way I can track the amount of time used in each section of the job script. This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread. Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.
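If you were instead running a hybrid MPI/OpenMP code and wanted, say, 2 threads per MPI task, you would raise that value to match; a hypothetical variation on the line above:

export OMP_NUM_THREADS=2
time mpirun -x OMP_NUM_THREADS=2 gmx_mpi mdrun -nsteps 500000 -ntomp 2 -v -deffnm 1ns -c 1ns.pdb -nice 0

You would also need to request the extra cores from Slurm, for example with --cpus-per-task=2.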
Once you have your job script ready, submit it using the sbatch command as below where the job script is in the file sb.gromacs.
sbatch sb.gromacs
You should then monitor your job as it goes through the queue and starts running using kstat --me. Your code will also generate an output file, usually of the form slurm-#######.out, where the 7 # signs are the 7-digit job ID number. If you need to cancel your job, use scancel with the 7-digit job ID number.
scancel #######
R
You can see what versions of R we provide with 'module avail R/'
Packages
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.
Installing your own R Packages
To install your own package, log in to Beocat and start R interactively
module load R
R
Then install the package using
install.packages("PACKAGENAME")
Follow the prompts. Note that there is a CRAN mirror at KU; it will be listed as "USA (KS)".
After installing you can test before leaving interactive mode by issuing the command
library("PACKAGENAME")
Running R Jobs
You cannot submit an R script directly. 'sbatch myscript.R' will result in an error. Instead, you need to make a bash script that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch
#!/bin/bash -l
#SBATCH --mem-per-cpu=4G
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)
#SBATCH --time=0-00:15:00
# Now let's do some actual work. This starts R and loads the file myscript.R
module purge
module load R
R --no-save -q < myscript.R
Now, to submit your R job, you would type
sbatch submit-R.sbatch
You can monitor your jobs using kstat --me. The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.
Java
You can see what versions of Java we support with 'module avail Java/'
Python
You can see what versions of Python we support with 'module avail Python/'
If you need libraries that we do not have installed, you should use virtualenv to set up a virtual Python environment in your home directory. This will let you install Python libraries as you please.
Setting up your virtual environment
# Load Python
module load Python/3.7.0-iomkl-2018b
(After running this command Python is loaded. After you log off and then log on again, Python will not be loaded, so you must rerun this command every time you log on.)
- Create a location for your virtual environments (optional, but helps keep things organized)
mkdir ~/virtualenvs
cd ~/virtualenvs
- Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that
virtualenv --help
has many more useful options.
virtualenv test
- Let's look at our virtual environments (the virtual environment name should be in the output):
ls ~/virtualenvs
- Activate one of these
source ~/virtualenvs/test/bin/activate
(After running this command your virtual environment is activated. After you log off and then log on again, your virtual environment will not be loaded, so you must rerun this command every time you log on.)
- You can now install the Python packages you want. This can be done using pip.
pip install numpy biopython
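You can quickly verify that a package installed into the active virtual environment before using it in a job; for example:

python -c "import numpy; print(numpy.__version__)"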
Using your virtual environment within a job
Here is a simple job script using the virtual environment test
#!/bin/bash
# Load the same Python module that was used to create the virtual environment
module load Python/3.7.0-iomkl-2018b
# Activate the virtual environment
source ~/virtualenvs/test/bin/activate
# Keep Python from writing .pyc bytecode files to the shared filesystem
export PYTHONDONTWRITEBYTECODE=1
python ~/path/to/your/python/script.py
Using MPI with Python within a job
Here is a simple job script using MPI with Python
#!/bin/bash
module load Python/3.6.3-iomkl-2017beocatb
export PYTHONDONTWRITEBYTECODE=1
# Capture the full path to python so mpirun launches the same interpreter for every task
PYTHON_BINARY=$(which python)
mpirun ${PYTHON_BINARY} ~/path/to/your/mpi/python/script.py
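When you submit, request the MPI task count from Slurm; mpirun will then launch that many copies. A sketch, assuming the script above was saved as sb.python_mpi (a hypothetical name):

sbatch --nodes=1 --ntasks-per-node=4 --time=1:00:00 --mem=4G sb.python_mpi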
Spark
Spark is a framework for large-scale data processing. It can be used in conjunction with Python, R, Scala, Java, and SQL. Spark can be run on Beocat interactively or through the Slurm queue.
To run interactively, you must first request a node or nodes from the Slurm queue. The line below requests 1 node and 1 core for 24 hours and, if available, will drop you into the bash shell on that node.
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash
We have some sample Python-based Spark code you can try out, taken from the exercises and homework of the PSC Spark workshop.
mkdir spark-test
cd spark-test
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .
You will need to set up a Python virtual environment and install the nltk and numpy packages before you run the first time.
module load Python
mkdir -p ~/virtualenvs
cd ~/virtualenvs
virtualenv spark-test
source ~/virtualenvs/spark-test/bin/activate
pip install nltk
pip install numpy
deactivate
To run the sample code interactively, load the Python and Spark modules, source your python virtual environment, change to the sample directory, fire up pyspark, then execute the sample code.
module load Python
source ~/virtualenvs/spark-test/bin/activate
module load Spark
cd ~/spark-test
pyspark
>>> exec(open("shakespeare.py").read())
You can work interactively from the pyspark prompt (>>>) in addition to running scripts as above.
The Shakespeare directory also contains a sample sbatch submit script that will run the same shakespeare.py code through the Slurm batch queue.
#!/bin/bash -l
#SBATCH --job-name=shakespeare
#SBATCH --mem=10G
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

# Load Spark and Python (version 3 here)
module load Spark
module load Python
source ~/virtualenvs/spark-test/bin/activate

spark-submit shakespeare.py
When you run interactively, pyspark initializes your Spark context sc for you. When you submit jobs through the Slurm queue instead, you will need to do this manually, as in the sample Python code below.
# If there is no Spark Context (not running interactively from pyspark), create it
try:
    sc
except NameError:
    from pyspark import SparkConf, SparkContext
    conf = SparkConf().setMaster("local").setAppName("App")
    sc = SparkContext(conf = conf)
Perl
The system-wide version of Perl tracks the stable releases of Perl. Unfortunately, there are some features that we do not include in the system distribution, namely threads.
To use Perl with threads, or a newer version, you can load it with the module command. To see what versions of Perl we provide, you can use 'module avail Perl/'.
Submitting a job with Perl
Much like R (above), you cannot simply 'sbatch myProgram.pl', but you must create a submit script which will call perl. Here is an example:
#!/bin/bash
#SBATCH --mem-per-cpu=1G
# Now we tell sbatch how long we expect our work to take: 15 minutes (D-H:MM:SS)
#SBATCH --time=0-0:15:00
# Now let's do some actual work.
module load Perl
perl /path/to/myProgram.pl
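Assuming you saved the script above as submit-perl.sbatch (a name chosen just for this example), submit it the same way as the R example:

sbatch submit-perl.sbatch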
Octave for MatLab codes
'module avail Octave/'
The 64-bit version of Octave can be loaded using the command above. Octave can then be used to work with MatLab codes on the head node and to submit jobs to the compute nodes through the Slurm scheduler. Octave is made to run MatLab code, but it does have limitations and does not support everything that MatLab itself does.
#!/bin/bash -l
#SBATCH --job-name=octave
#SBATCH --output=octave.o%j
#SBATCH --time=1:00:00
#SBATCH --mem=4G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
module purge
module load Octave/4.2.1-foss-2017beocatb-enable64
octave < matlab_code.m
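Submit it like any other job script; assuming you saved it as sb.octave (a hypothetical name):

sbatch sb.octave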
MatLab compiler
Beocat also has a single-user license for the MatLab compiler and the most common toolboxes, including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox, Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, Global Optimization Toolbox, and the Bioinformatics Toolbox.
Since we only have a single-user license, you will be expected to develop your MatLab code with Octave or elsewhere, on a laptop or departmental server. Once you're ready to do large runs, you move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as you want to the scheduler. To use the MatLab compiler, you need to load the MATLAB module to compile code, and load the mcr module to run the resulting MatLab executable.
module load MATLAB
mcc -m matlab_main_code.m -o matlab_executable_name
If you have addpath() commands in your code, you will need to wrap them in an "if ~isdeployed" block and tell the compiler to include that path via the -I flag.
% wrap addpath() calls like so:
if ~isdeployed
addpath('./another/folder/with/code/')
end
NOTE: The license manager checks the mcc compiler out for a minimum of 30 minutes, so if another user compiles a code you unfortunately may need to wait for up to 30 minutes to compile your own code.
Compiling with additional paths:
module load MATLAB
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name
Any directories added with addpath() will need to be added to the list of compile options as -I arguments. You can have multiple -I arguments in your compile command.
Here is an example job submission script. Modify time, memory, tasks-per-node, and job name as you see fit:
#!/bin/bash -l
#SBATCH --job-name=matlab
#SBATCH --output=matlab.o%j
#SBATCH --time=1:00:00
#SBATCH --mem=4G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
module purge
module load mcr
./matlab_executable_name
For those who make use of mex files (compiled C and C++ code with MatLab bindings), you will need to add these files to the compiled archive via the -a flag. See the behavior of this flag in the compiler documentation. You can either target specific .mex files or entire directories.
Because codes often require adding several directories to the Matlab path as well as mex files from several locations, we recommend writing a script to preserve and help document the steps to compile your Matlab code. Here is an abbreviated example from a current user:
#!/bin/bash -l
module load MATLAB
cd matlabPyrTools/MEX/
# compile mex files
mex upConv.c convolve.c wrap.c edges.c
mex corrDn.c convolve.c wrap.c edges.c
mex histo.c
mex innerProd.c
cd ../..
mcc -m mongrel_creation.m \
-I ./matlabPyrTools/MEX/ \
-I ./matlabPyrTools/ \
-I ./FastICA/ \
-a ./matlabPyrTools/MEX/ \
-a ./texturesynth/ \
-o mongrel_creation_binary
Again, we only have a single-user license for MatLab so the model is to develop and debug your MatLab code elsewhere or using Octave on Beocat, then you can compile the MatLab code into an executable and run it without limits on Beocat.
For more info on the mcc compiler see: https://www.mathworks.com/help/compiler/mcc.html
COMSOL
Beocat has no license for COMSOL. If you want to use it, you must provide your own.
$ module spider COMSOL/

----------------------------------------------------------------------------
  COMSOL: COMSOL/5.3
----------------------------------------------------------------------------
    Description:
      COMSOL Multiphysics software, an interactive environment for modeling
      and simulating scientific and engineering problems

    This module can be loaded directly: module load COMSOL/5.3

    Help:
      Description
      ===========
      COMSOL Multiphysics software, an interactive environment for modeling
      and simulating scientific and engineering problems

      You must provide your own license.
      export LM_LICENSE_FILE=/the/path/to/your/license/file
      *OR*
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu

      More information
      ================
      - Homepage: https://www.comsol.com/
Graphical COMSOL
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)
# load the comsol module on the headnode
module load COMSOL
# export your comsol license as mentioned above, and tell the scheduler to run the software
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw
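For long-running models, COMSOL's batch mode through the Slurm queue is a better fit. A minimal sketch, assuming your license is exported as above and your model is saved in mymodel.mph (a hypothetical file name):

#!/bin/bash -l
#SBATCH --job-name=comsol
#SBATCH --time=1:00:00
#SBATCH --mem=10G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

module purge
module load COMSOL
# LM_LICENSE_FILE must point at your own license, as described above
comsol batch -inputfile mymodel.mph -outputfile mymodel_out.mph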
.NET Core
Load .NET
mozes@[eunomia] ~ $ module load dotNET-Core-SDK
Create an application
Following the instructions from the .NET Core documentation, we'll create a simple 'Hello World' application
mozes@[eunomia] ~ $ mkdir Hello
mozes@[eunomia] ~ $ cd Hello
mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
mozes@[eunomia] ~/Hello $ dotnet new console
The template "Console Application" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...
  Restoring packages for /homes/mozes/Hello/Hello.csproj...
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.

Restore succeeded.
Edit your program
mozes@[eunomia] ~/Hello $ vi Program.cs
Run your .NET application
mozes@[eunomia] ~/Hello $ dotnet run
Hello World!
Build and run the built application
mozes@[eunomia] ~/Hello $ dotnet build
Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.
  Hello -> /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:02.86
mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll
Hello World!
Installing my own software
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.
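For a typical autotools-based package, installing into your home directory usually just means pointing the install prefix at a directory you own instead of a system path. A minimal sketch, using a hypothetical package named mytool:

# Unpack, configure with a prefix in your home directory, build, and install
tar xzf mytool-1.0.tar.gz
cd mytool-1.0
./configure --prefix=$HOME/software
make
make install
# Add the install location to your PATH so the shell can find it
export PATH=$HOME/software/bin:$PATH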
As a quick example of installing software in your home directory, we have a sample video on our Training Videos page. If you're still having problems or questions, please contact support as mentioned on our Main Page.