From Beocat

Revision as of 16:53, 1 March 2019

Drinking from the Firehose

For a complete list of all installed modules, run module avail

Alternatively, we update our ModuleList whenever we get a chance.

Toolchains

A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.

We provide a good number of toolchains, and multiple versions of each, to make sure your applications will compile and/or run correctly.

These toolchains include (you can run 'module keyword toolchain'):

foss
GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
fosscuda
GNU Compiler Collection (GCC) based compiler toolchain based on FOSS with CUDA support.
gmvapich2
GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. DEPRECATED
gompi
GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.
goolfc
GCC based compiler toolchain with CUDA support, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. DEPRECATED
iomkl
Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL & OpenMPI.

You can run 'module spider $toolchain/' to see the versions we have:

$ module spider iomkl/
  • iomkl/2017a
  • iomkl/2017b
  • iomkl/2017beocatb

If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with 'module list':

$ module list
Currently Loaded Modules:
  1) icc/2017.4.196-GCC-6.4.0-2.28
  2) binutils/2.28-GCCcore-6.4.0
  3) ifort/2017.4.196-GCC-6.4.0-2.28
  4) iccifort/2017.4.196-GCC-6.4.0-2.28
  5) GCCcore/6.4.0
  6) numactl/2.0.11-GCCcore-6.4.0
  7) hwloc/1.11.7-GCCcore-6.4.0
  8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28
  9) iompi/2017b
 10) imkl/2017.3.196-iompi-2017b
 11) iomkl/2017b

As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which in turn depends on icc and ifort, which depend on GCCcore and GCC. Hence it is very important that the correct versions of all related software are loaded.

With software we provide, the toolchain used to compile is always specified in the "version" of the software that you want to load.

If you mix toolchains, inconsistent things may happen.
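One way to avoid mixing is to purge your loaded modules before switching to a different toolchain, so nothing from the previous environment lingers. A minimal session sketch (using the iomkl/2017b version shown above; substitute the toolchain you actually need):

```shell
$ module purge                 # unload everything from the previous toolchain
$ module load iomkl/2017b      # load exactly one toolchain
$ module list                  # confirm only that toolchain's modules are loaded
```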

Most Commonly Used Software

OpenMPI

We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module spider OpenMPI/':

  • OpenMPI/2.0.2-GCC-6.3.0-2.27
  • OpenMPI/2.0.2-iccifort-2017.1.132-GCC-6.3.0-2.27
  • OpenMPI/2.1.1-GCC-6.4.0-2.28
  • OpenMPI/2.1.1-GCC-7.2.0-2.29
  • OpenMPI/2.1.1-gcccuda-2017b
  • OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28
  • OpenMPI/2.1.1-iccifort-2018.0.128-GCC-7.2.0-2.29

R

We currently provide (module spider R/):

  • R/3.4.0-foss-2017beocatb-X11-20170314

Packages

We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.

Installing your own R Packages

To install your own package, log in to Beocat and start R interactively

module load R
R

Then install the package using

install.packages("PACKAGENAME")

Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as "USA (KS)".

After installing you can test before leaving interactive mode by issuing the command

library("PACKAGENAME")

Running R Jobs

You cannot submit an R script directly. 'sbatch myscript.R' will result in an error. Instead, you need to make a bash script that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch

#!/bin/bash
#SBATCH --mem-per-cpu=1G
# Now we tell sbatch how long we expect our work to take: 15 minutes (D-H:MM:SS)
#SBATCH --time=0-0:15:00

# Now let's do some actual work. This starts R and loads the file myscript.R
module load R
R --no-save -q < myscript.R

Now, to submit your R job, you would type

sbatch submit-R.sbatch

Java

We currently provide (module spider Java/):

  • Java/1.8.0_131
  • Java/1.8.0_144

Python

We currently provide (module spider Python/)

  • Python/2.7.13-foss-2017beocatb
  • Python/2.7.13-GCCcore-7.2.0-bare
  • Python/2.7.13-iomkl-2017a
  • Python/2.7.13-iomkl-2017beocatb
  • Python/3.6.3-foss-2017b
  • Python/3.6.3-foss-2017beocatb
  • Python/3.6.3-iomkl-2017beocatb

If you need modules that we do not have installed, you should use virtualenv to set up a virtual Python environment in your home directory. This will let you install Python modules as you please.

Setting up your virtual environment

# Load Python
module load Python/3.6.3-iomkl-2017beocatb

(After running this command Python is loaded. After you log off and log back on, Python will no longer be loaded, so you must rerun this command every time you log on.)

  • Create a location for your virtual environments (optional, but helps keep things organized)
mkdir ~/virtualenvs
cd ~/virtualenvs
  • Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that virtualenv --help has many more useful options.
virtualenv test
  • Let's look at our virtual environments (the virtual environment name should be in the output):
ls ~/virtualenvs
  • Activate one of these
source ~/virtualenvs/test/bin/activate

(After running this command your virtual environment is activated. After you log off and log back on, your virtual environment will no longer be loaded, so you must rerun this command every time you log on.)

  • You can now install the python modules you want. This can be done using pip.
pip install numpy biopython
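After pip install, a quick way to confirm the packages are visible to the Python you will run in jobs is to check them from that same environment. A small stdlib-only sketch (not Beocat-specific; note that the biopython package is imported under the name Bio):

```python
# Check whether packages are importable in the currently active
# environment, without actually importing them.
import importlib.util

def is_importable(module_name):
    """Return True if module_name can be found on the current sys.path."""
    return importlib.util.find_spec(module_name) is not None

if __name__ == "__main__":
    for mod in ("numpy", "Bio"):  # biopython installs under the name "Bio"
        print(mod, "found" if is_importable(mod) else "not found")
```

Run this inside the activated virtualenv; if a package prints "not found", the environment you activated is not the one you installed into.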

Using your virtual environment within a job

Here is a simple job script using the virtual environment test

#!/bin/bash
module load Python/3.6.3-iomkl-2017beocatb
source ~/virtualenvs/test/bin/activate
export PYTHONDONTWRITEBYTECODE=1
python ~/path/to/your/python/script.py

Using MPI with Python within a job

Here is a simple job script using MPI with Python

#!/bin/bash
module load Python/3.6.3-iomkl-2017beocatb
export PYTHONDONTWRITEBYTECODE=1
PYTHON_BINARY=$(which python)
mpirun ${PYTHON_BINARY} ~/path/to/your/mpi/python/script.py
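The job script above assumes your Python script is MPI-aware. As a hypothetical example of what such a script might contain, here is a sketch using the mpi4py package (an assumption: mpi4py is not loaded for you, you would first pip install it into your virtualenv):

```python
# Minimal MPI hello-world sketch: each rank reports its rank and the
# total number of ranks. Falls back to a single-process run if mpi4py
# is not installed, so the script can also be tried outside of mpirun.
try:
    from mpi4py import MPI
    _comm = MPI.COMM_WORLD
    RANK, SIZE = _comm.Get_rank(), _comm.Get_size()
except ImportError:
    RANK, SIZE = 0, 1  # no MPI available: behave like a single rank

print(f"hello from rank {RANK} of {SIZE}")
```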

Spark

Spark is a framework for large-scale data processing. It can be used in conjunction with Python, R, Scala, Java, and SQL. Spark can be run on Beocat interactively or through the Slurm queue.

To run interactively, you must first request a node or nodes from the Slurm queue. The line below requests 1 node and 1 core for 24 hours and, if resources are available, will drop you into a bash shell on that node.

 srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash

We have some sample Python-based Spark code you can try out that came from the exercises and homework of the PSC Spark workshop.

 mkdir spark-test
 cd spark-test
 cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .

You will need to set up a Python virtual environment and install the nltk and numpy packages before you run for the first time.

 module load Python
 mkdir -p ~/virtualenvs
 cd ~/virtualenvs
 virtualenv spark-test
 source ~/virtualenvs/spark-test/bin/activate
 pip install nltk
 pip install numpy
 deactivate

To run the sample code interactively, load the Python and Spark modules, source your python virtual environment, change to the sample directory, fire up pyspark, then execute the sample code.

 module load Python
 source ~/virtualenvs/spark-test/bin/activate
 module load Spark
 cd ~/spark-test
 pyspark
 >>> exec(open("shakespeare.py").read())

You can work interactively from the pyspark prompt (>>>) in addition to running scripts as above.
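For instance, a classic word count can be built up step by step at the pyspark prompt (the input filename here is hypothetical; sc is the Spark context that pyspark creates for you):

```python
>>> lines = sc.textFile("some_text_file.txt")          # hypothetical input file
>>> words = lines.flatMap(lambda line: line.split())
>>> counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
>>> counts.take(5)                                     # peek at five (word, count) pairs
```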

The Shakespeare directory also contains a sample sbatch submit script that will run the same shakespeare.py code through the Slurm batch queue.

 #!/bin/bash -l
 #SBATCH --job-name=shakespeare
 #SBATCH --mem=10G
 #SBATCH --time=01:00:00
 #SBATCH --nodes=1
 #SBATCH --ntasks-per-node=1
 
 # Load Spark and Python (version 3 here)
 module load Spark
 module load Python
 source ~/virtualenvs/spark-test/bin/activate
 
 spark-submit shakespeare.py

When you run interactively, pyspark initializes your Spark context sc for you. You will need to do this manually, as in the sample Python code, when you want to submit jobs through the Slurm queue.

 # If there is no Spark Context (not running interactive from pyspark), create it
 try:
    sc
 except NameError:
    from pyspark import SparkConf, SparkContext
    conf = SparkConf().setMaster("local").setAppName("App")
    sc = SparkContext(conf = conf)

Perl

The system-wide version of Perl tracks the stable Perl releases. Unfortunately, there are some features that we do not include in the system distribution of Perl, namely threads.

If you need a newer version (or threads), just load one we provide in our modules (module spider Perl/):

  • Perl/5.26.0-foss-2017beocatb
  • Perl/5.26.0-iompi-2017beocatb
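You can check whether the perl currently on your PATH was built with thread support by asking perl itself; a threaded build reports 'define' and an unthreaded build reports 'undef':

```shell
$ perl -V:usethreads
usethreads='define';
```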

Submitting a job with Perl

Much like R (above), you cannot simply 'sbatch myProgram.pl', but you must create a submit script which will call perl. Here is an example:

#!/bin/bash
#SBATCH --mem-per-cpu=1G
# Now we tell sbatch how long we expect our work to take: 15 minutes (D-H:MM:SS)
#SBATCH --time=0-0:15:00
# Now let's do some actual work.
module load Perl
perl /path/to/myProgram.pl

Octave for MatLab codes

module load Octave/4.2.1-foss-2017beocatb-enable64

The 64-bit version of Octave can be loaded using the command above. Octave can then be used to work with MatLab code on the head node and to submit jobs to the compute nodes with sbatch. Octave is made to run MatLab code, but it has limitations and does not support everything that MatLab itself does.

#!/bin/bash -l
#SBATCH --job-name=octave
#SBATCH --output=octave.o%j
#SBATCH --time=1:00:00
#SBATCH --mem=4G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

module purge
module load Octave/4.2.1-foss-2017beocatb-enable64

octave < matlab_code.m

MatLab compiler

Beocat also has a single-user license for the MatLab compiler and the most common toolboxes including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox, Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, Global Optimization Toolbox, and the Bioinformatics Toolbox.

Since we only have a single-user license, you will be expected to develop your MatLab code with Octave, or elsewhere on a laptop or departmental server. Once you're ready to do large runs, move your code to Beocat and compile the MatLab code into an executable; you can then submit as many jobs as you want to the scheduler. To use the MatLab compiler, load the MATLAB module to compile code and load the mcr module to run the resulting MatLab executable.

module load MATLAB
mcc -m matlab_main_code.m -o matlab_executable_name

If you have addpath() commands in your code, you will need to wrap them in an "if ~deployed" block and tell the compiler to include that path via the -I flag.

% wrap addpath() calls like so:
if ~deployed
    addpath('./another/folder/with/code/')
end

NOTE: The license manager checks the mcc compiler out for a minimum of 30 minutes, so if another user compiles a code you unfortunately may need to wait for up to 30 minutes to compile your own code.

Compiling with additional paths:

module load MATLAB
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name

Any directories added with addpath() will need to be added to the list of compile options as -I arguments. You can have multiple -I arguments in your compile command.

Here is an example job submission script. Modify time, memory, tasks-per-node, and job name as you see fit:

#!/bin/bash -l
#SBATCH --job-name=matlab
#SBATCH --output=matlab.o%j
#SBATCH --time=1:00:00
#SBATCH --mem=4G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

module purge
module load mcr

./matlab_executable_name

For those who make use of mex files (compiled C and C++ code with MatLab bindings), you will need to add these files to the compiled archive via the -a flag. See the behavior of this flag in the compiler documentation. You can either target specific .mex files or entire directories.

Because codes often require adding several directories to the Matlab path as well as mex files from several locations, we recommend writing a script to preserve and help document the steps to compile your Matlab code. Here is an abbreviated example from a current user:

#!/bin/bash -l

module load MATLAB

cd matlabPyrTools/MEX/

# compile mex files
mex upConv.c convolve.c wrap.c edges.c
mex corrDn.c convolve.c wrap.c edges.c
mex histo.c
mex innerProd.c

cd ../..

mcc -m mongrel_creation.m \
  -I ./matlabPyrTools/MEX/ \
  -I ./matlabPyrTools/ \
  -I ./FastICA/ \
  -a ./matlabPyrTools/MEX/ \
  -a ./texturesynth/ \
  -o mongrel_creation_binary

Again, we only have a single-user license for MatLab so the model is to develop and debug your MatLab code elsewhere or using Octave on Beocat, then you can compile the MatLab code into an executable and run it without limits on Beocat.

For more info on the mcc compiler see: https://www.mathworks.com/help/compiler/mcc.html

COMSOL

Beocat has no license for COMSOL. If you want to use it, you must provide your own.

module spider COMSOL/
----------------------------------------------------------------------------
 COMSOL: COMSOL/5.3
----------------------------------------------------------------------------
   Description:
     COMSOL Multiphysics software, an interactive environment for modeling
     and simulating scientific and engineering problems

   This module can be loaded directly: module load COMSOL/5.3

   Help:
     
     Description
     ===========
     COMSOL Multiphysics software, an interactive environment for modeling and 
simulating scientific and engineering problems
     You must provide your own license.
     export LM_LICENSE_FILE=/the/path/to/your/license/file
     *OR*
     export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME
     e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu
     
     More information
     ================
      - Homepage: https://www.comsol.com/

Graphical COMSOL

Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:

# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)
# load the comsol module on the headnode
module load COMSOL
# export your comsol license as mentioned above, and tell the scheduler to run the software
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw

.NET Core

Load .NET

mozes@[eunomia] ~ $ module load dotNET-Core-SDK

Create an application

Following the instructions from Microsoft's .NET Core tutorial (https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli), we'll create a simple 'Hello World' application

mozes@[eunomia] ~ $ mkdir Hello
mozes@[eunomia] ~ $ cd Hello
mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
mozes@[eunomia] ~/Hello $ dotnet new console
The template "Console Application" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...
 Restoring packages for /homes/mozes/Hello/Hello.csproj...
 Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.
 Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.
 Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.

Restore succeeded.

Edit your program

mozes@[eunomia] ~/Hello $ vi Program.cs

Run your .NET application

mozes@[eunomia] ~/Hello $ dotnet run
Hello World!

Build and run the built application

mozes@[eunomia] ~/Hello $ dotnet build
Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

 Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.
 Hello -> /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll

Build succeeded.
   0 Warning(s)
   0 Error(s)

Time Elapsed 00:00:02.86
mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll
Hello World!

Installing my own software

Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.

In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.
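For a typical autotools-style package, installing into your home directory usually just means pointing the configure step at a prefix you own instead of a system location. A hypothetical sketch (package name and paths are placeholders):

```shell
$ tar xzf somepackage-1.0.tar.gz
$ cd somepackage-1.0
$ ./configure --prefix=$HOME/software/somepackage
$ make
$ make install
$ # make the installed binaries visible in future sessions
$ export PATH=$HOME/software/somepackage/bin:$PATH
```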

As a quick example of installing software in your home directory, we have a sample video on our Training Videos page. If you still have problems or questions, please contact support as mentioned on our Main Page.